**Dataset columns** (dtype and observed value/length range):

| column | dtype | values / lengths |
|---|---|---|
| `repo_name` | string | 8–38 chars |
| `pr_number` | int64 | 3–47.1k |
| `pr_title` | string | 8–175 chars |
| `pr_description` | string (nullable) | 2–19.8k chars |
| `author` | null | n/a |
| `date_created` | string | 25 chars |
| `date_merged` | string | 25 chars |
| `filepath` | string | 6–136 chars |
| `before_content` | string (nullable) | 54–884k chars |
| `after_content` | string | 56–884k chars |
| `pr_author` | string | 3–21 chars |
| `previous_commit` | string | 40 chars |
| `pr_commit` | string | 40 chars |
| `comment` | string | 2–25.4k chars |
| `comment_author` | string | 3–29 chars |
| `__index_level_0__` | int64 | 0–5.1k |

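Each record pairs a PR with one modified file, storing the full file text before and after the change. Below is a minimal sketch of how such a row could be inspected with pandas; the Parquet path is a placeholder assumption, since the actual distribution format is not stated here.

```python
import pandas as pd

# Load a local export of the dataset; "pr_edits.parquet" is a placeholder name.
df = pd.read_parquet("pr_edits.parquet")

row = df.iloc[0]
print(row["repo_name"], row["pr_number"], row["pr_title"])

# before_content/after_content hold the whole file around the PR commit,
# so a rough size delta can be computed directly from the strings.
delta = len(row["after_content"] or "") - len(row["before_content"] or "")
print(f"{row['filepath']}: {delta:+d} characters")
```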

**repo_name:** py-why/dowhy | **pr_number:** 693 | **pr_title:** Functional api/estimate effect function

**pr_description:**

#### Estimate Effect function
* Refactors effect estimation into a separate function while keeping backwards compatibility (see the sketch below)
#### TODO (future PRs):
* Add a `fit(...)` method to estimators - move data-related parameters from the constructor to the `fit(...)` method
* Refactor code to avoid `**kwargs` in `__init__(...)` constructors
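A minimal sketch of the existing entry point this PR keeps intact, plus the functional direction it describes. The `estimate_effect(...)` call at the end is hypothetical: its name and parameters are assumptions based on the description above, not taken from the PR diff.

```python
import dowhy.datasets
from dowhy import CausalModel

# Simulated data from dowhy's own helper, so the example is self-contained.
data = dowhy.datasets.linear_dataset(
    beta=10, num_common_causes=3, num_samples=1000, treatment_is_binary=True
)

model = CausalModel(
    data=data["df"],
    treatment=data["treatment_name"],
    outcome=data["outcome_name"],
    graph=data["gml_graph"],
)
identified_estimand = model.identify_effect(proceed_when_unidentifiable=True)

# Existing object-oriented entry point, which remains backwards compatible:
estimate = model.estimate_effect(
    identified_estimand, method_name="backdoor.propensity_score_matching"
)
print(estimate.value)

# Direction sketched in the PR description (hypothetical call, names assumed):
# estimate = estimate_effect(data=data["df"],
#                            identified_estimand=identified_estimand,
#                            estimator=some_configured_estimator)
```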
**author:** null | **date_created:** 2022-10-18 15:49:21+00:00 | **date_merged:** 2022-10-25 17:02:02+00:00 | **filepath:** dowhy/causal_estimator.py

**before_content:**

```python
import logging
from collections import namedtuple
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.utils.api import parse_state
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
# Now saving the effect modifiers
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"],
)
return new_estimator
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
:returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warn(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug(estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
```

**after_content:**

```python
import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
:returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warn(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
            # Check whether any parameter has changed since the previous bootstrap run
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
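    # Worked example of the percentile logic above (illustrative numbers): with
    # confidence_level=0.95 and 100 bootstrap estimates, upper_bound_index = int(0.05 * 100) = 5
    # and lower_bound_index = int(0.95 * 100) = 95, so the bounds are obtained by subtracting the
    # 95th and 5th smallest variations from the point estimate.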
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
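    # For example (illustrative): `estimator.estimate_confidence_intervals(estimate.value,
    # confidence_level=0.9, method="bootstrap", num_simulations=200)` forces the bootstrap route,
    # while leaving `method=None` first tries any estimator-specific implementation and only then
    # falls back to bootstrap.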
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
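    # For example (illustrative): `estimator.estimate_std_error(method="bootstrap",
    # num_simulations=200)` reports the standard deviation across 200 bootstrap estimates.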
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
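    # Worked example (illustrative): with 1000 null simulations and an estimate above the null
    # median that exceeds 990 of the permuted-outcome estimates, searchsorted returns 990 and the
    # reported p-value is 1 - 990/1000 = 0.01; estimates beyond every null simulation are reported
    # as a range such as (0, 0.001) rather than exactly zero.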
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
            self.logger.debug("Estimate: %s, Naive observed estimate: %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
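    # For example (illustrative): `is_bootstrap_parameter_changed({"num_simulations": 100,
    # "sample_size_fraction": 1}, {"num_simulations": 200})` returns True, so the cached
    # bootstrap samples are regenerated instead of being reused.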
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
    Currently requires an explicit estimation method to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
    :param identifier_name: the identification strategy (e.g., "backdoor" or "iv") within the identified estimand whose expression should be used.
    :param method: an instance of a CausalEstimator subclass that is used to compute the estimate.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
    :param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
    :param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If not provided, the effect modifiers specified while creating the estimator are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
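# Example usage of estimate_effect (an illustrative sketch; `estimand` and `estimator` are
# assumed to already exist, e.g. `estimand` returned by an identification step that found a
# backdoor estimand, and `estimator` an already-constructed CausalEstimator subclass such as a
# propensity-score-based estimator fitted on the data):
#
#     estimate = estimate_effect(
#         treatment="v0",
#         outcome="y",
#         identified_estimand=estimand,
#         identifier_name="backdoor",
#         method=estimator,
#         control_value=0,
#         treatment_value=1,
#     )
#     print(estimate.value)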
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
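    # For example (illustrative): `estimate.get_confidence_intervals(confidence_level=0.9)`
    # delegates to the estimator attached via add_estimator() and, for the bootstrap route,
    # returns a (lower_bound, upper_bound) pair.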
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
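    # For example (illustrative, assuming "X0" was passed as an effect modifier when the
    # estimator was created): `estimate.estimate_conditional_effects(["X0"], num_quantiles=4)`
    # returns effects grouped by the quartile bins of X0.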
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 2044d216c322a4b32c6eadce5da7d83463f19c2f | 05bfa49dacf0061988c96c6f3e3756219df5422a | Right... that was confusing for me, I don't like the fit_estimator flag as that seems to be a functionality of the estimator instead of this function. I'll change the input parameter to be an object so that we can keep that functionality for this PR and modify this method in future PRs to add the `fit()` call | andresmor-ms | 284 |
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
    DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
            respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
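    # A minimal usage sketch for get_estimator_object (hedged: `bootstrapped_df`,
    # `identified_estimand` and `estimate` are assumed to exist in the caller's scope,
    # e.g. inside a refutation routine that re-runs estimation on resampled data):
    #
    #   new_estimator = CausalEstimator.get_estimator_object(bootstrapped_df, identified_estimand, estimate)
    #   new_effect = new_estimator.estimate_effect()
    #
    # The new estimator reuses the control/treatment values, target units, effect modifiers
    # and method parameters stored in the original estimate, but runs on the new data.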
    def _estimate_effect(self):
        """This method is to be overridden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
        :returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
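    # Note: the flags checked above (_significance_test, _confidence_intervals and
    # _effect_strength_eval) are the values passed to the constructor, so estimate_effect()
    # only runs the diagnostics that were requested when the estimator was created.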
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
            self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
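    # A hedged example of the custom discretization described in the docstring above
    # (assumes `df` is the estimator's data frame and "income" is a numeric effect modifier):
    #
    #   df["income_bucket"] = pd.qcut(df["income"], q=4, duplicates="drop")
    #   estimate.estimate_conditional_effects(effect_modifiers=["income_bucket"])
    #
    # Passing a pre-discretized column bypasses the automatic quantile binning done here;
    # note that a warning is logged if the column was not among the effect modifiers
    # supplied when the estimator was constructed.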
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
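    # Hedged usage sketch (assumes `estimator` is an estimator subclass that implements
    # _do for a binary treatment): the difference of two interventional means gives the
    # average treatment effect,
    #
    #   ate = estimator.do(1) - estimator.do(0)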
def construct_symbolic_estimator(self, estimand):
        raise NotImplementedError(
            ("Symbolic estimator string is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
        )
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
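    # The returned BootstrapEstimates namedtuple is cached by the callers
    # (_estimate_confidence_intervals_with_bootstrap and _estimate_std_error_with_bootstrap)
    # and is only regenerated when is_bootstrap_parameter_changed() detects new parameters.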
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
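    # Worked illustration of the interval construction above (hypothetical numbers, not
    # produced by this code): with estimate_value = 10, sorted bootstrap variations
    # [-3, -1, 0, 2, 4] and confidence_level = 0.95, upper_bound_index = int(0.05 * 5) = 0
    # and lower_bound_index = int(0.95 * 5) = 4, so the method returns
    # (10 - 4, 10 - (-3)) = (6, 13).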
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
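    # Worked illustration (hypothetical numbers): with 1000 permuted-outcome simulations
    # and an estimate larger than 990 of the null estimates, the reported p-value is
    # 1 - 990/1000 = 0.01; if the estimate exceeds every null estimate, the p-value is
    # reported as the range (0, 1/1000) rather than exactly zero.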
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
            self.logger.debug("Causal estimate: %s, naive observational estimate: %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
    Currently requires an explicit estimation method (an instantiated CausalEstimator) to be passed via the ``method`` argument. The corresponding method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
    :param identifier_name: name of the identification method whose estimand should be used (e.g., "backdoor" or "iv").
    :param method: an instantiated estimator (a CausalEstimator subclass) implementing the estimation method to be used.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
    :param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
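# A hedged end-to-end sketch of the functional API defined above. The estimator class and
# column names are illustrative assumptions; any CausalEstimator subclass constructed with
# the base-class arguments should work the same way.
#
#   from dowhy.causal_estimators.propensity_score_matching_estimator import (
#       PropensityScoreMatchingEstimator,
#   )
#
#   estimator = PropensityScoreMatchingEstimator(
#       data=df,
#       identified_estimand=identified_estimand,
#       treatment=["treatment"],
#       outcome=["outcome"],
#   )
#   estimate = estimate_effect(
#       treatment="treatment",
#       outcome="outcome",
#       identified_estimand=identified_estimand,
#       identifier_name="backdoor",
#       method=estimator,
#   )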
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
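    # Hedged example: estimate.interpret() with no arguments falls back to the estimator's
    # default interpreter (DEFAULT_INTERPRET_METHOD, i.e. "textual_effect_interpreter"),
    # while a specific interpreter can be requested by name:
    #
    #   estimate.interpret(method_name="textual_effect_interpreter")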
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 2044d216c322a4b32c6eadce5da7d83463f19c2f | 05bfa49dacf0061988c96c6f3e3756219df5422a | moved the code into causal_model (for compatibility and removed the param from this function :) | andresmor-ms | 285 |
py-why/dowhy | 693 | Functional api/estimate effect function | #### Estimate Effect function
* Refactors the estimate effect into a separate function to keep backwards compatibility
#### TODO (future PRs):
* Add `fit(...)` method to estimators - Move data related parameters from the constructor to the `fit(...)` method
* Refactor code to avoid `**kwargs` in `__init__(...)` constructors
| null | 2022-10-18 15:49:21+00:00 | 2022-10-25 17:02:02+00:00 | dowhy/causal_estimator.py | import logging
from collections import namedtuple
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.utils.api import parse_state
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
    DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
# Now saving the effect modifiers
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
            respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
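        A hedged usage sketch (names are placeholders; assumes ``estimate`` was
        produced by ``CausalModel.estimate_effect`` so that its ``params``
        dictionary is populated, and that ``resampled_df`` is a bootstrapped
        copy of the original data):
        >>> new_estimator = CausalEstimator.get_estimator_object(
        ...     resampled_df, identified_estimand, estimate
        ... )
        >>> refuted_effect = new_estimator.estimate_effect()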
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"],
)
return new_estimator
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
        :returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
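        A runnable sketch of the typical call path (``CausalModel.estimate_effect``
        constructs the concrete estimator and then invokes this method; the
        simulated dataset below is only for illustration):
        >>> import dowhy.datasets
        >>> from dowhy import CausalModel
        >>> sim = dowhy.datasets.linear_dataset(beta=10, num_common_causes=2, num_samples=1000, treatment_is_binary=True)
        >>> model = CausalModel(data=sim["df"], treatment=sim["treatment_name"], outcome=sim["outcome_name"], graph=sim["gml_graph"])
        >>> identified_estimand = model.identify_effect(proceed_when_unidentifiable=True)
        >>> estimate = model.estimate_effect(identified_estimand, method_name="backdoor.linear_regression")
        >>> estimate.value  # point estimate of the average treatment effect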
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warn(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
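        A hedged sketch (``estimator`` is a placeholder for an estimator, such as a
        regression-based one, that implements the ``_do`` method):
        >>> expected_outcome_treated = estimator.do(1)
        >>> expected_outcome_control = estimator.do(0)
        >>> naive_do_effect = expected_outcome_treated - expected_outcome_control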
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
            # Check if any parameter has changed since the previous bootstrap run
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
        # Now we take the (1 - p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
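        A hedged sketch (``estimator`` and ``estimate`` are placeholders for a fitted
        estimator and its returned estimate; the extra keyword argument is forwarded
        to the bootstrap routine):
        >>> lower, upper = estimator.estimate_confidence_intervals(
        ...     estimate.value, confidence_level=0.9, method="bootstrap", num_simulations=200
        ... )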
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
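        A hedged sketch (``estimator`` is a placeholder for a fitted estimator):
        >>> std_error = estimator.estimate_std_error(method="bootstrap", num_simulations=200)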
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
        # If the p-value is exactly 0 or 1, its resolution is limited by the number of simulations, so report a range
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
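        A hedged sketch (``estimator`` and ``estimate`` are placeholders; the returned
        p-value may be a float or a (low, high) range when it hits the resolution of
        the bootstrap):
        >>> signif = estimator.test_significance(estimate.value, method="bootstrap", num_null_simulations=200)
        >>> signif["p_value"]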
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug(estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
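        A hedged sketch (assumes the estimator was created with
        ``effect_modifiers=["age"]`` and supports conditional estimates; a numeric
        ``age`` column is discretized into quantile bins):
        >>> conditional_effects = estimate.estimate_conditional_effects(["age"], num_quantiles=4)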
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
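        A minimal sketch using the default textual interpreter (assumes ``estimate``
        was produced by ``estimate_effect``):
        >>> estimate.interpret(method_name="textual_effect_interpreter")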
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
            respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
        :returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
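        A hedged sketch of how a child estimator typically invokes this helper
        (names are placeholders; the base-class ``_estimate_effect_fn`` must be
        overridden for this call to succeed):
        >>> conditional_effects = estimator._estimate_conditional_effects(
        ...     estimator._estimate_effect_fn, ["age"], num_quantiles=4
        ... )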
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warn(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
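        A hedged sketch (``estimator`` is a placeholder for a fitted instance; the
        returned namedtuple exposes the raw bootstrap estimates):
        >>> boot = estimator._generate_bootstrap_estimates(num_bootstrap_simulations=100, sample_size_fraction=1)
        >>> boot.estimates.std()  # bootstrap standard error of the estimate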
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
            # Check if any parameter has changed since the previous bootstrap run
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
        # Now we take the (1 - p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
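# [Added worked example] With num_null_simulations=1000 and an estimate above the null
# median, an estimate_value that exceeds 990 of the sorted null estimates gives
# estimate_index=990 and p_value = 1 - 990/1000 = 0.01; an estimate outside the entire
# null range is reported as the interval (0, 1/1000) rather than exactly 0.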
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
This is a general procedure; individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug("%s %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
Currently requires an explicit method name to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param identifier_name: name of the identification method (e.g., "backdoor" or "iv") whose estimand is to be used.
:param method: an instance of the CausalEstimator subclass implementing the estimation method to be used.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
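# [Added usage sketch, not part of the original module] One way the functional API above
# might be called; the LinearRegressionEstimator import path, the data frame `df`, the
# column names and the pre-computed `identified_estimand` are assumptions for illustration:
#
#     from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
#
#     estimator = LinearRegressionEstimator(
#         df,
#         identified_estimand,
#         identified_estimand.treatment_variable,
#         identified_estimand.outcome_variable,
#     )
#     estimate = estimate_effect(
#         treatment="v0",
#         outcome="y",
#         identified_estimand=identified_estimand,
#         identifier_name="backdoor",
#         method=estimator,
#     )
#     print(estimate.value)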
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
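# [Added usage sketch] Conditional effects from an obtained CausalEstimate; the
# effect-modifier column name "W0" is an assumption for illustration:
#
#     cond_effects = estimate.estimate_conditional_effects(effect_modifiers=["W0"], num_quantiles=4)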
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 2044d216c322a4b32c6eadce5da7d83463f19c2f | 05bfa49dacf0061988c96c6f3e3756219df5422a | I moved all the code to initialize the estimator into the causal_model (for compatibility) and made the function accept the object as parameter, so that it works the way you suggest | andresmor-ms | 286 |
py-why/dowhy | 693 | Functional api/estimate effect function | #### Estimate Effect function
* Refactors the estimate effect into a separate function to keep backwards compatibility
#### TODO (future PRs):
* Add `fit(...)` method to estimators - Move data related parameters from the constructor to the `fit(...)` method
* Refactor code to avoid `**kwargs` in `__init__(...)` constructors
| null | 2022-10-18 15:49:21+00:00 | 2022-10-25 17:02:02+00:00 | dowhy/causal_estimator.py | import logging
from collections import namedtuple
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.utils.api import parse_state
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
# Now saving the effect modifiers
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"],
)
return new_estimator
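# [Added usage sketch] Re-creating an estimator of the same type on new data, as a
# refuter might do; `new_df`, `identified_estimand` and `estimate` are assumed to exist:
#
#     new_est = CausalEstimator.get_estimator_object(new_df, identified_estimand, estimate)
#     refuted_value = new_est.estimate_effect().value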
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
:returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
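# [Added usage sketch] Obtaining a point estimate from a constructed estimator;
# `estimator` is assumed to be an instance of a concrete subclass:
#
#     est = estimator.estimate_effect()
#     print(est.value)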
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warn(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
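# [Added illustration] The discretization above is equivalent to the following pandas
# sketch for a numeric effect modifier; the column name "age" is an assumption:
#
#     df["__categorical__age"] = pd.qcut(df["age"], 5, duplicates="drop")
#     df.groupby("__categorical__age").apply(estimate_effect_fn)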
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
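# [Added usage sketch] Interventional mean via the do-operator; the treatment value 1
# and the data frame `df` are assumptions for illustration:
#
#     y_do_1 = estimator.do(1, df)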
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
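# [Added usage sketch] Accessing the BootstrapEstimates namedtuple returned above;
# `estimator` is assumed to be a fitted instance:
#
#     boot = estimator._generate_bootstrap_estimates(num_bootstrap_simulations=100, sample_size_fraction=1)
#     print(boot.estimates.mean(), boot.params["num_simulations"])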
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter has changed from the previous bootstrap estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
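# [Added worked example] With confidence_level=0.95 and 100 bootstrap variations,
# upper_bound_index = int(0.05 * 100) = 5 and lower_bound_index = int(0.95 * 100) = 95,
# so the returned interval is (estimate - variation[95], estimate - variation[5]).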
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
This is a general procedure; individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug("%s %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
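A minimal construction sketch (illustrative only; ``df``, ``identified_estimand`` and the
column names "v0"/"y" are assumed to exist beforehand, and LinearRegressionEstimator is
just one concrete subclass)::
    from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
    # "v0" and "y" are placeholder column names in the assumed dataframe df
    estimator = LinearRegressionEstimator(
        df,
        identified_estimand,
        treatment=["v0"],
        outcome=["y"],
        test_significance="bootstrap",
        confidence_intervals="bootstrap",
    )
    estimate = estimator.estimate_effect()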
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
:returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Compute the sample size as a fraction of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug("Estimate: %s, naive observed estimate: %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
Requires the identification strategy (``identifier_name``) and an estimator instance (``method``) to be specified explicitly. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param identifier_name: name of the identification strategy used (e.g., "backdoor" or "iv")
:param method: an instance of the CausalEstimator subclass implementing the estimation method to be used
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
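Example (a hedged sketch; ``identified_estimand`` and an ``estimator`` instance of a
CausalEstimator subclass are assumed to have been created beforehand, and the column
names "v0"/"y" are illustrative only)::
    # estimator is assumed to be, e.g., a LinearRegressionEstimator built on the data
    estimate = estimate_effect(
        treatment="v0",
        outcome="y",
        identified_estimand=identified_estimand,
        identifier_name="backdoor",
        method=estimator,
        control_value=0,
        treatment_value=1,
        target_units="ate",
    )
    print(estimate.value)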
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
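Example (sketch; ``estimate`` denotes a CausalEstimate with an attached estimator; with
method="bootstrap" a (lower, upper) tuple is returned)::
    lower, upper = estimate.get_confidence_intervals(confidence_level=0.90, method="bootstrap")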
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
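Example (sketch; bootstrap-based standard error of an existing estimate)::
    std_err = estimate.get_standard_error(method="bootstrap")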
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
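Example (sketch; note that the returned p-value may be a range tuple at the resolution
limit of the bootstrap)::
    p_value = estimate.test_stat_significance(method="bootstrap")["p_value"]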
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
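Example (sketch; assumes "age" was supplied as an effect modifier when the estimator was
created, so that it is present in the fitted data)::
    conditional_effects = estimate.estimate_conditional_effects(effect_modifiers=["age"], num_quantiles=4)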
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 2044d216c322a4b32c6eadce5da7d83463f19c2f | 05bfa49dacf0061988c96c6f3e3756219df5422a | Removing all the kwargs will be done in future PRs; however, I did add the object as a parameter, which makes things a little better while keeping backwards compatibility in the causal_model class. | andresmor-ms | 287 |
py-why/dowhy | 693 | Functional api/estimate effect function | #### Estimate Effect function
* Refactors the estimate effect into a separate function to keep backwards compatibility
#### TODO (future PRs):
* Add `fit(...)` method to estimators - Move data related parameters from the constructor to the `fit(...)` method
* Refactor code to avoid `**kwargs` in `__init__(...)` constructors
| null | 2022-10-18 15:49:21+00:00 | 2022-10-25 17:02:02+00:00 | dowhy/causal_estimator.py | import logging
from collections import namedtuple
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.utils.api import parse_state
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
# Now saving the effect modifiers
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"],
)
return new_estimator
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
:returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warn(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
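# Illustrative sketch (kept as a comment so the class body is unchanged): the
# discretize-then-groupby pattern above is roughly equivalent to the following
# standalone pandas snippet. `df`, the "age" column and `effect_fn` are
# hypothetical names used only for illustration.
#
#     df["__categorical__age"] = pd.qcut(df["age"], q=5, duplicates="drop")
#     conditional_estimates = df.groupby("__categorical__age").apply(effect_fn)
#     df.pop("__categorical__age")
#
# Each entry of `conditional_estimates` is then the effect estimated within one
# quantile bin of the numeric effect modifier.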
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter has changed since the previous bootstrap estimates were computed
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1 - p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
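# Worked example (comment-only, assuming confidence_level=0.95 and 100 bootstrap
# estimates): the indices computed above are approximately
#     upper_bound_index = int((1 - 0.95) * 100) = 5
#     lower_bound_index = int(0.95 * 100) = 95
# i.e. the 5th and 95th elements of the sorted deviations from the point estimate.
# Subtracting the large positive deviation (index 95) yields the lower bound and
# subtracting the small or negative deviation (index 5) yields the upper bound,
# which corresponds to the "basic" bootstrap interval.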
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
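# Comment-only illustration of the permutation p-value above: with
# num_null_simulations=1000 and an observed estimate above the null median, if 970
# of the sorted null estimates lie below the observed value then searchsorted
# returns 970 and p_value = 1 - 970/1000 = 0.03. When no null estimate is as
# extreme as the observed one, the p-value is reported as a range such as
# (0, 0.001) instead of exactly zero, reflecting the finite number of simulations.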
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug(estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs:: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
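# Hedged usage sketch (comment only): refutation code can use this helper to re-run
# an existing analysis on modified data, roughly along these lines. `new_df` and
# `prior_estimate` are hypothetical variables assumed to exist in the caller.
#
#     estimator = CausalEstimator.get_estimator_object(new_df, identified_estimand, prior_estimate)
#     new_effect = estimator.estimate_effect()
#
# The returned object is a fresh instance of the same estimator class, reusing the
# treatment/control values, target units and effect modifiers of the prior estimate.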
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
:returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warn(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
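# Brief comment-only example: for an estimator that implements _do and was fit on a
# binary treatment, `estimator.do(1)` is an estimate of E[Y | do(T=1)], and
# `estimator.do(1) - estimator.do(0)` recovers the corresponding average treatment
# effect. `estimator` here is a hypothetical fitted instance.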
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter has changed since the previous bootstrap estimates were computed
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1 - p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug(estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
Estimation is performed by the estimator object passed as the method argument. In the string-based CausalModel API, method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment variable (or list of names for a multivariate treatment)
:param outcome: Name of the outcome variable (or list of names for a multivariate outcome)
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param identifier_name: Name of the identification method (e.g., "backdoor" or "iv") used to select the relevant estimand from identified_estimand.
:param method: An instantiated CausalEstimator object that implements the chosen estimation method.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
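# Usage sketch (hypothetical names, shown as comments only): per this PR, the
# CausalModel.estimate_effect(...) API is intended to stay backwards compatible
# and delegate to the estimate_effect() function above. A direct call would
# look roughly like:
#
#     estimate = estimate_effect(
#         treatment="v0",
#         outcome="y",
#         identified_estimand=identified_estimand,
#         identifier_name="backdoor",
#         method=some_estimator,  # any constructed CausalEstimator instance
#     )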
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
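# Sketch of the custom-discretization workaround mentioned in the docstring above
# (hypothetical column names, shown as comments):
#
#     df["age_group"] = pd.cut(df["age"], bins=[0, 30, 60, 120])
#     estimate.estimate_conditional_effects(effect_modifiers=["age_group"])
#
# Here df is assumed to be the same data frame the estimator was constructed with,
# so the new column is visible to the underlying estimator.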
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 2044d216c322a4b32c6eadce5da7d83463f19c2f | 05bfa49dacf0061988c96c6f3e3756219df5422a | treatment, outcome need to be added to docstring.
identifier_name to be added to docstring.
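One possible wording for the missing entries (a sketch only, not the final text):
:param treatment: Name of the treatment variable (or list of names).
:param outcome: Name of the outcome variable (or list of names).
:param identifier_name: Identification method (e.g., "backdoor" or "iv") used to select the estimand from identified_estimand.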
| amit-sharma | 288 |
py-why/dowhy | 693 | Functional api/estimate effect function | #### Estimate Effect function
* Refactors `estimate_effect` into a separate function while keeping backwards compatibility with the existing `CausalModel.estimate_effect` API
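A minimal sketch of how the two paths might line up (variable names and the dataset helper are taken from the standard DoWhy examples, not from this PR, so exact details may differ):

```python
import dowhy.datasets
from dowhy import CausalModel
from dowhy.causal_estimator import estimate_effect

# Simulated data and an identified estimand, as in the usual DoWhy workflow
data = dowhy.datasets.linear_dataset(beta=10, num_common_causes=3, num_samples=1000)
model = CausalModel(
    data=data["df"],
    treatment=data["treatment_name"],
    outcome=data["outcome_name"],
    graph=data["gml_graph"],
)
identified_estimand = model.identify_effect(proceed_when_unidentifiable=True)

# Backwards-compatible path (unchanged for users)
old_style = model.estimate_effect(identified_estimand, method_name="backdoor.linear_regression")

# Refactored functional path: pass an estimator instance plus an identifier name
new_style = estimate_effect(
    treatment=data["treatment_name"],
    outcome=data["outcome_name"],
    identified_estimand=identified_estimand,
    identifier_name="backdoor",
    method=old_style.estimator,  # reuse the estimator constructed by the call above
)
print(old_style.value, new_style.value)
```

Once the TODO items below land, the estimator construction step is expected to change (data moves from the constructor to `fit(...)`), so the call above should be read as a sketch of the current intermediate state.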
#### TODO (future PRs):
* Add `fit(...)` method to estimators - Move data related parameters from the constructor to the `fit(...)` method
* Refactor code to avoid `**kwargs` in `__init__(...)` constructors
| null | 2022-10-18 15:49:21+00:00 | 2022-10-25 17:02:02+00:00 | dowhy/causal_estimator.py | import logging
from collections import namedtuple
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.utils.api import parse_state
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for a estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
# Now saving the effect modifiers
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment effects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"],
)
return new_estimator
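# Sketch (hypothetical names, shown as comments): refutation methods typically use
# this helper to rebuild an estimator of the same type on modified data, e.g.
#
#     new_est = CausalEstimator.get_estimator_object(resampled_df, identified_estimand, estimate)
#     refuted_value = new_est.estimate_effect().value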
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
:returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Find the sample size the proportion with the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug("Estimate: %s, naive observational estimate: %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
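    # A minimal construction sketch for the parameters documented above. It assumes
    # `df` is a pandas DataFrame and `identified_estimand` came from an identification
    # step (e.g., CausalModel.identify_effect()); the concrete subclass shown here is
    # only an illustration, since any CausalEstimator subclass follows the same pattern.
    #
    #     from dowhy.causal_estimators.propensity_score_matching_estimator import (
    #         PropensityScoreMatchingEstimator,
    #     )
    #
    #     estimator = PropensityScoreMatchingEstimator(
    #         df,
    #         identified_estimand,
    #         identified_estimand.treatment_variable,
    #         identified_estimand.outcome_variable,
    #         control_value=0,
    #         treatment_value=1,
    #         confidence_intervals="bootstrap",
    #     )
    #     estimate = estimator.estimate_effect()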
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
            respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
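    # Hedged usage sketch for get_estimator_object, which is how refutation methods
    # typically re-create an estimator on modified data. `new_df`, `identified_estimand`
    # and `estimate` are assumed to exist in the calling context.
    #
    #     new_estimator = CausalEstimator.get_estimator_object(new_df, identified_estimand, estimate)
    #     new_effect = new_estimator.estimate_effect()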
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
        :returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
            self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
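    # Sketch of the custom discretization suggested in the docstring above. The column
    # names are hypothetical; `df` is the data the estimator was created with and
    # `estimate` is the CausalEstimate returned by estimate_effect().
    #
    #     df["income_bucket"] = pd.cut(df["income"], bins=[0, 30_000, 60_000, 10**9])
    #     conditional_effects = estimate.estimate_conditional_effects(
    #         effect_modifiers=["income_bucket"]
    #     )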
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
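    # Illustrative do-operator calls, assuming an estimator whose subclass implements
    # _do (otherwise NotImplementedError is raised as above):
    #
    #     outcome_under_treatment = estimator.do(1)
    #     outcome_under_control = estimator.do(0)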
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
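    # Self-contained numpy illustration of the percentile-of-variations logic above,
    # with toy numbers (for intuition only; not part of the dowhy API):
    #
    #     import numpy as np
    #     boot = np.array([0.9, 1.1, 1.0, 1.3, 0.8])  # bootstrapped estimates
    #     est, p = 1.0, 0.95
    #     variations = np.sort(boot - est)
    #     upper = est - variations[int((1 - p) * len(variations))]  # 1.2
    #     lower = est - variations[int(p * len(variations))]        # 0.7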
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
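    # Toy numpy illustration of the right-tail branch of the permutation p-value above
    # (for intuition only; not part of the dowhy API):
    #
    #     import numpy as np
    #     null = np.sort(np.array([-0.2, -0.1, 0.0, 0.1, 0.2]))  # estimates under permuted outcomes
    #     obs = 0.15
    #     p_value = 1 - np.searchsorted(null, obs, side="left") / len(null)  # 0.2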
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug(estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
    Requires an instantiated estimator object to be passed as the method parameter, along with the name of the identification method. For reference, estimation method names in DoWhy follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
    :param identifier_name: name of the identification method (e.g., "backdoor") whose estimand should be used.
    :param method: an instantiated causal estimator object (a CausalEstimator subclass) to be used for estimation.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
    :param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
    :param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect modifiers specified when creating the estimator are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
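# Hedged usage sketch of the functional estimate_effect API defined above. It assumes
# `identified_estimand` was produced by an identification step and `estimator` is an
# already constructed CausalEstimator subclass instance; the identifier name
# "backdoor" is shown only as a typical choice.
#
#     estimate = estimate_effect(
#         treatment="treatment",
#         outcome="outcome",
#         identified_estimand=identified_estimand,
#         identifier_name="backdoor",
#         method=estimator,
#         target_units="ate",
#     )
#     print(estimate.value)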
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
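    # Typical post-estimation queries on a CausalEstimate, assuming the underlying
    # estimator supports bootstrap-based inference:
    #
    #     ci = estimate.get_confidence_intervals(confidence_level=0.95, method="bootstrap")
    #     se = estimate.get_standard_error(method="bootstrap")
    #     p = estimate.test_stat_significance(method="bootstrap")["p_value"]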
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
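    # Illustrative call, using the default interpreter named earlier in this file:
    #
    #     estimate.interpret(method_name="textual_effect_interpreter")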
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 2044d216c322a4b32c6eadce5da7d83463f19c2f | 05bfa49dacf0061988c96c6f3e3756219df5422a | since we are now passing the method object, fit_estimator makes more sense for the next line, the estimate_effect method of the Estimator.
Here we can simply do the two if-else checks and then sequentially do "method.update_input". Just the "if fit_estimator" line can be removed. | amit-sharma | 289 |
py-why/dowhy | 693 | Functional api/estimate effect function | #### Estimate Effect function
* Refactors the estimate effect into a separate function to keep backwards compatibility
#### TODO (future PRs):
* Add `fit(...)` method to estimators - Move data related parameters from the constructor to the `fit(...)` method
* Refactor code to avoid `**kwargs` in `__init__(...)` constructors
| null | 2022-10-18 15:49:21+00:00 | 2022-10-25 17:02:02+00:00 | dowhy/causal_estimator.py | import logging
from collections import namedtuple
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.utils.api import parse_state
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
    DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
# Now saving the effect modifiers
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
            respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"],
)
return new_estimator
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
        :returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
            self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
            # Check if any parameter has changed since the previous bootstrap run
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
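    # Illustrative sketch of the percentile computation above (all numbers are
    # assumed, purely for illustration): with estimate_value = 2.0,
    # sorted_bootstrap_variations = [-0.5, -0.1, 0.2, 0.4] and
    # confidence_level = 0.95, we get upper_bound_index = int(0.05 * 4) = 0 and
    # lower_bound_index = int(0.95 * 4) = 3, so the returned interval is
    # (2.0 - 0.4, 2.0 - (-0.5)) = (1.6, 2.5).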
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
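    # Illustrative sketch of the p-value logic above (numbers are assumed): with
    # num_null_simulations = 1000 and an estimate_value that lies above the null
    # median and exceeds 990 of the sorted null estimates, np.searchsorted
    # returns 990 and the reported p_value is 1 - 990 / 1000 = 0.01.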
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
        This is a general procedure; individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
            self.logger.debug("Estimate: %s, naive observational estimate: %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
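    # Example (assumed values): with bootstrap_estimates_params =
    # {"num_simulations": 100, "sample_size_fraction": 1} and given_params =
    # {"num_simulations": 200}, this returns True, which makes the callers above
    # redo the bootstrap resampling with the new parameters.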
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
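    # A minimal usage sketch (the effect-modifier name "X0" is an assumption for
    # illustration and must have been specified as an effect modifier when the
    # estimator was created):
    #
    #   cond_effects = estimate.estimate_conditional_effects(
    #       effect_modifiers=["X0"], num_quantiles=4
    #   )
    #   # cond_effects holds one effect estimate per (discretized) value of X0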
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs:: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
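    # A minimal usage sketch (assumes the estimate was produced by an estimator,
    # so ``self.estimator`` is set):
    #
    #   estimate.interpret(method_name="textual_effect_interpreter")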
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
    DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for a estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
            respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
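    # A minimal usage sketch (``resampled_df`` is an assumed DataFrame, e.g. a
    # bootstrap sample drawn by a refuter; ``identified_estimand`` and
    # ``estimate`` come from an earlier identification/estimation run):
    #
    #   new_estimator = CausalEstimator.get_estimator_object(
    #       resampled_df, identified_estimand, estimate
    #   )
    #   new_estimate = new_estimator.estimate_effect()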
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
:returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors,statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
            self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
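    # A minimal usage sketch (only meaningful for estimator subclasses that
    # implement _do, e.g. regression-based estimators; ``estimator`` is an
    # assumed fitted instance of such a subclass):
    #
    #   effect_of_intervention = estimator.do(1) - estimator.do(0)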
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
            # Check if any parameter has changed since the previous bootstrap run
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
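    # A minimal usage sketch (``estimator`` is an assumed estimator instance and
    # ``est`` its CausalEstimate):
    #
    #   lower, upper = estimator.estimate_confidence_intervals(
    #       est.value, confidence_level=0.90, method="bootstrap", num_simulations=200
    #   )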
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
        This is a general procedure; individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
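    # A minimal usage sketch (``estimator`` is an assumed estimator instance and
    # ``est`` its CausalEstimate):
    #
    #   signif = estimator.test_significance(est.value, method="bootstrap")
    #   # signif is a dict of the form {"p_value": ...}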
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
            self.logger.debug("Estimate: %s, naive observational estimate: %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
    Currently requires an instantiated estimator to be passed as the ``method`` argument. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
    :param identifier_name: the identification strategy (e.g., "backdoor" or "iv") whose estimand should be used.
    :param method: an instance of the CausalEstimator subclass implementing the estimation method to be used.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
    :param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
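# A minimal usage sketch of the functional API above. The dataset, the column
# names, and the choice of estimator are assumptions for illustration; the
# identified estimand is expected to come from a separate identification step.
#
#   from dowhy.causal_estimators.propensity_score_matching_estimator import (
#       PropensityScoreMatchingEstimator,
#   )
#
#   estimator = PropensityScoreMatchingEstimator(
#       data, identified_estimand, ["treatment"], ["outcome"]
#   )
#   estimate = estimate_effect(
#       treatment="treatment",
#       outcome="outcome",
#       identified_estimand=identified_estimand,
#       identifier_name="backdoor",
#       method=estimator,
#   )
#   print(estimate.value)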
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
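A sketch of conditional-effect estimation on an existing estimate; the effect-modifier column name `"X0"` is hypothetical and must have been passed as an effect modifier when the estimator was fit.

```python
# Numeric effect modifiers are discretized into quantile bins before grouping.
conditional_effects = estimate.estimate_conditional_effects(
    effect_modifiers=["X0"],  # hypothetical column name
    num_quantiles=4,
)
print(conditional_effects)  # one effect per (discretized) modifier value
```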
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
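A sketch of invoking an interpreter on the estimate; `textual_effect_interpreter` is the default named in this module, and other interpreter names depend on what is registered in `dowhy.interpreters`.

```python
# Uses the estimator's default interpreter when no method name is given.
estimate.interpret()
estimate.interpret(method_name="textual_effect_interpreter")
```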
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 2044d216c322a4b32c6eadce5da7d83463f19c2f | 05bfa49dacf0061988c96c6f3e3756219df5422a | data should be one of the arguments for estimate_effect. Actually it should be the first argument. Just like sklearn, the `estimate_effect` method of an Estimator object (or later, the fit method) should be passed the data. | amit-sharma | 290 |
py-why/dowhy | 693 | Functional api/estimate effect function | #### Estimate Effect function
* Refactors the estimate effect into a separate function to keep backwards compatibility
#### TODO (future PRs):
* Add `fit(...)` method to estimators - Move data related parameters from the constructor to the `fit(...)` method
* Refactor code to avoid `**kwargs` in `__init__(...)` constructors
| null | 2022-10-18 15:49:21+00:00 | 2022-10-25 17:02:02+00:00 | dowhy/causal_estimator.py | import logging
from collections import namedtuple
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.utils.api import parse_state
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
# Now saving the effect modifiers
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
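A sketch of constructing a concrete estimator directly with these constructor arguments (normally `CausalModel.estimate_effect` does this for you); `df` and `identified_estimand` are assumed to exist already, and the column names are hypothetical.

```python
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator

# Sketch only: treatment and outcome are passed as lists of column names.
estimator = LinearRegressionEstimator(
    df,                      # assumed pandas DataFrame
    identified_estimand,     # assumed IdentifiedEstimand from identify_effect()
    ["v0"],                  # hypothetical treatment column
    ["y"],                   # hypothetical outcome column
    control_value=0,
    treatment_value=1,
    test_significance="bootstrap",  # p-value via the bootstrap procedure below
    confidence_intervals=True,
    target_units="ate",
)
estimate = estimator.estimate_effect()
```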
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"],
)
return new_estimator
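A sketch of how a refuter-style caller might rebuild the same kind of estimator on new (here, resampled) data using the settings stored on a previous estimate; `df`, `identified_estimand`, and `estimate` are assumed to exist from an earlier run.

```python
from dowhy.causal_estimator import CausalEstimator

new_df = df.sample(frac=1.0, replace=True)  # bootstrap-style resample of the original data
new_estimator = CausalEstimator.get_estimator_object(new_df, identified_estimand, estimate)
new_estimate = new_estimator.estimate_effect()
print(new_estimate.value)
```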
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
:returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warn(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
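A standalone illustration of the quantile discretization applied above to numeric effect modifiers (mirroring the `pd.qcut` call); the data here is synthetic.

```python
import numpy as np
import pandas as pd

modifier = pd.Series(np.random.normal(size=1000), name="income")
# Five quantile bins, matching NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS by default.
bins = pd.qcut(modifier, 5, duplicates="drop")
print(bins.value_counts().sort_index())
```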
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
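A sketch of using the do-operator on a fitted estimator to contrast two interventions; not every child estimator implements `_do`, so this is illustrative only and assumes `estimator` supports it.

```python
# Expected outcome under do(T=1) and do(T=0); their difference is an ATE-style contrast.
y_do_treat = estimator.do(1)
y_do_control = estimator.do(0)
print(y_do_treat - y_do_control)
```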
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Compute the sample size as the given fraction of the population (full dataset) size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
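A standalone sketch of the resampling loop above, reduced to estimating the bootstrap standard error of a sample mean; the data is synthetic and no dowhy objects are involved.

```python
import numpy as np
from sklearn.utils import resample

sample = np.random.normal(loc=2.0, scale=1.0, size=500)
boot_means = np.array(
    [resample(sample, n_samples=len(sample)).mean() for _ in range(100)]
)
print(boot_means.std())  # bootstrap standard error of the mean
```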
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
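A toy illustration of the pivot-style interval computed above: sort the deviations of the bootstrap estimates from the point estimate and subtract the (p)th and (1-p)th deviations; the numbers are synthetic.

```python
import numpy as np

estimate_value = 2.0
bootstrap_estimates = np.random.normal(loc=2.0, scale=0.2, size=1000)
variations = np.sort(bootstrap_estimates - estimate_value)
confidence_level = 0.95
upper_idx = int((1 - confidence_level) * len(variations))
lower_idx = int(confidence_level * len(variations))
lower_bound = estimate_value - variations[lower_idx]
upper_bound = estimate_value - variations[upper_idx]
print(lower_bound, upper_bound)
```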
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
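A standalone sketch of the permutation-style null distribution built above: permute the outcome, recompute the estimate, and locate the observed value within the null estimates. The data is synthetic and the p-value here is a simplified two-sided version rather than the conservative range reported by the method.

```python
import numpy as np

rng = np.random.default_rng(0)
treatment = rng.integers(0, 2, size=500)
outcome = 2.0 * treatment + rng.normal(size=500)
observed = outcome[treatment == 1].mean() - outcome[treatment == 0].mean()

null_estimates = []
for _ in range(1000):
    permuted = rng.permutation(outcome)  # breaks any treatment-outcome association
    null_estimates.append(permuted[treatment == 1].mean() - permuted[treatment == 0].mean())
null_estimates = np.array(null_estimates)

# Simplified two-sided p-value based on the null distribution.
p_value = min(1.0, 2 * min((null_estimates >= observed).mean(), (null_estimates <= observed).mean()))
print(p_value)
```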
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug(estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
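A toy illustration of the "fraction-effect" measure: the adjusted causal estimate divided by the naive difference in group means; the numbers are made up.

```python
naive_obs_estimate = 3.5   # unadjusted difference of means between treated and untreated
causal_estimate = 2.0      # estimate after adjustment (e.g., backdoor)
fraction_effect_explained = causal_estimate / naive_obs_estimate
print(fraction_effect_explained)  # ~0.57 of the observed difference is attributed to treatment
```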
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
:returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
            self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
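    # Illustrative sketch (added for exposition; not part of the original class) of the custom
    # discretization mentioned in the docstring above: a numeric effect modifier is binned by
    # hand and the new categorical column is passed as the effect modifier, so the estimator's
    # quantile-based discretization is skipped. The column name "age" and the bin edges are
    # hypothetical, and `data` is assumed to be the same frame the estimator was fitted on
    # (the estimator will warn that the new column was not among the original effect modifiers).
    @staticmethod
    def _example_custom_discretization(data, causal_estimate):
        """Conditional effects over hand-made age bins (illustrative only)."""
        data["age_group"] = pd.cut(data["age"], bins=[0, 30, 60, 120], labels=["young", "middle", "senior"])
        return causal_estimate.estimate_conditional_effects(effect_modifiers=["age_group"])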
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
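    # Small illustrative sketch (added for exposition; not part of the original class): for
    # estimators that implement _do, the difference between two interventional expectations
    # reproduces the effect of moving the treatment from the control value to the treatment value.
    def _example_effect_via_do(self):
        """Effect computed from two calls to the do-operator (illustrative only)."""
        return self.do(self._treatment_value) - self.do(self._control_value)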
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
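    # Standalone numeric sketch (added for exposition; not part of the original class) of the
    # interval rule used above: bootstrap estimates are turned into variations around the point
    # estimate, and the bounds are obtained by subtracting the confidence_level-th and
    # (1 - confidence_level)-th order statistics of those variations from the estimate.
    @staticmethod
    def _example_bootstrap_interval(bootstrap_estimates, estimate_value, confidence_level=0.95):
        """(lower, upper) bounds following the same variations-based rule (illustrative only)."""
        variations = np.sort(np.asarray(bootstrap_estimates) - estimate_value)
        upper_index = int((1 - confidence_level) * len(variations))
        lower_index = int(confidence_level * len(variations))
        return estimate_value - variations[lower_index], estimate_value - variations[upper_index]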
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
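    # Standalone sketch (added for exposition; not part of the original class) of the two-sided
    # permutation-style p-value computed above: the observed estimate is ranked against the
    # estimates obtained under a shuffled (null) outcome and the rank is converted into a tail
    # probability.
    @staticmethod
    def _example_permutation_p_value(null_estimates, estimate_value):
        """p-value of estimate_value against sorted null estimates (illustrative only)."""
        sorted_null = np.sort(np.asarray(null_estimates))
        median_null = sorted_null[len(sorted_null) // 2]
        if estimate_value > median_null:
            rank = np.searchsorted(sorted_null, estimate_value, side="left")
            return 1 - rank / len(sorted_null)
        rank = np.searchsorted(sorted_null, estimate_value, side="right")
        return rank / len(sorted_null)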
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
            self.logger.debug("Estimate: %s, Naive estimate: %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
# Imports and module-level logger needed by the functional API below; in a full refactor they
# would sit at the top of the module (the IdentifiedEstimand import location is an assumption
# and may need adjusting to the dowhy version in use).
from typing import Dict, List, Optional, Union
from dowhy.causal_identifier import IdentifiedEstimand
logger = logging.getLogger(__name__)
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
    In the CausalModel API, the estimation method is specified by name, following the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". In this functional API, an already-constructed estimator object is passed instead (see the ``method`` parameter). The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
    :param identifier_name: the identification strategy (e.g., "backdoor" or "iv") whose estimand should be used for estimation
    :param method: an instance of a CausalEstimator subclass that implements the chosen estimation method
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
    :param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
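# Illustrative end-to-end sketch (added for exposition; not part of the original module) of the
# functional API defined above. The dataset helper, CausalModel-based identification and the
# propensity-score stratification estimator come from dowhy's existing API, and the identifier
# name "backdoor" matches the keys used in IdentifiedEstimand.estimands; treat this as a sketch
# under those assumptions rather than the canonical entry point.
def _example_functional_estimate_effect():
    """Identify and estimate an effect with the functional estimate_effect (illustrative only)."""
    import dowhy.datasets
    from dowhy import CausalModel
    from dowhy.causal_estimators.propensity_score_stratification_estimator import (
        PropensityScoreStratificationEstimator,
    )
    sim = dowhy.datasets.linear_dataset(beta=10, num_common_causes=3, num_samples=1000, treatment_is_binary=True)
    treatment, outcome = parse_state(sim["treatment_name"]), parse_state(sim["outcome_name"])
    model = CausalModel(data=sim["df"], treatment=treatment, outcome=outcome, graph=sim["gml_graph"])
    identified_estimand = model.identify_effect(proceed_when_unidentifiable=True)
    estimator = PropensityScoreStratificationEstimator(sim["df"], identified_estimand, treatment, outcome)
    return estimate_effect(
        treatment=treatment,
        outcome=outcome,
        identified_estimand=identified_estimand,
        identifier_name="backdoor",
        method=estimator,
    )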
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
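# Small illustrative sketch (added for exposition; not part of the original module): printing a
# CausalEstimate uses the __str__ method above, and interpret() dispatches to the textual
# interpreter that CausalEstimator uses as its default.
def _example_report_estimate(estimate):
    """Print a readable report for an estimate and run the textual interpreter (illustrative only)."""
    print(estimate)
    estimate.interpret(method_name="textual_effect_interpreter")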
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 2044d216c322a4b32c6eadce5da7d83463f19c2f | 05bfa49dacf0061988c96c6f3e3756219df5422a | let's follow the sklearn protocol and provide "data" to the estimate_effect method. For backwards compatibility, the data parameter can be optional. Inside the estimate_effect method, we can check that if data parameter is None, then use self.data | amit-sharma | 291 |
py-why/dowhy | 693 | Functional api/estimate effect function | #### Estimate Effect function
* Refactors the estimate effect into a separate function to keep backwards compatibility
#### TODO (future PRs):
* Add `fit(...)` method to estimators - Move data related parameters from the constructor to the `fit(...)` method
* Refactor code to avoid `**kwargs` in `__init__(...)` constructors
| null | 2022-10-18 15:49:21+00:00 | 2022-10-25 17:02:02+00:00 | dowhy/causal_estimator.py | import logging
from collections import namedtuple
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.utils.api import parse_state
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for a estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
# Now saving the effect modifiers
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
            respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"],
)
return new_estimator
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
        :returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
            self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
This is a general procedure; individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug(estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
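# Illustrative sketch (not part of the library code) of the "fraction-effect" measure
# computed above: it is the ratio of the causal estimate to the naive observed
# difference in group means. Variable names below are assumptions for illustration.
#
#   naive = data.loc[data[treatment] == 1, outcome].mean() - data.loc[data[treatment] == 0, outcome].mean()
#   fraction_effect_explained = causal_estimate_value / naive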
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs:: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
:returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
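# Hedged sketch (hypothetical subclass, not shipped with dowhy) of how the template
# method above is typically used: a child estimator only overrides _estimate_effect(),
# and the base-class estimate_effect() then adds significance tests, confidence
# intervals and effect-strength evaluation on top of the returned CausalEstimate.
#
#   class NaiveDifferenceEstimator(CausalEstimator):
#       def _estimate_effect(self):
#           # assumes a single binary treatment column
#           t = self._treatment_name[0]
#           treated = self._data[self._data[t] == 1]
#           control = self._data[self._data[t] == 0]
#           value = treated[self._outcome_name].mean() - control[self._outcome_name].mean()
#           return CausalEstimate(value, self._target_estimand, None,
#                                 control_value=self._control_value,
#                                 treatment_value=self._treatment_value)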
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warn(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
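# Illustrative sketch of the discretization performed above (column names are assumed,
# not part of the library): a numeric effect modifier is binned into quantiles with
# pd.qcut and the effect is then estimated separately per bin via groupby().apply().
#
#   data["__categorical__age"] = pd.qcut(data["age"], q=5, duplicates="drop")
#   conditional_effects = data.groupby("__categorical__age").apply(estimate_effect_fn)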
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
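# Conceptual sketch of the bootstrap loop above (names such as `data`, `n_sims` and
# `estimate_fn` are illustrative assumptions): each iteration re-estimates the effect
# on a resampled-with-replacement copy of the data.
#
#   boot = np.zeros(n_sims)
#   for i in range(n_sims):
#       boot[i] = estimate_fn(resample(data, n_samples=len(data)))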
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter has changed since the previous bootstrap estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
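# Worked sketch of the interval construction above (assumed inputs): with
# confidence_level=0.95, the sorted variations (bootstrap estimate minus point
# estimate) are subtracted back from the point estimate at the two tail indices.
#
#   variations = np.sort(boot_estimates - estimate_value)
#   lower = estimate_value - variations[int(0.95 * len(variations))]
#   upper = estimate_value - variations[int(0.05 * len(variations))]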
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
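# Usage sketch (hedged; `estimator` stands for any already-constructed CausalEstimator
# subclass and `est` for the CausalEstimate it produced): forcing the bootstrap path.
#
#   ci = estimator.estimate_confidence_intervals(
#       estimate_value=est.value, confidence_level=0.95, method="bootstrap",
#       num_simulations=200, sample_size_fraction=1,
#   )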
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
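# Usage sketch (hedged; same illustrative `estimator` as above): standard error via the
# bootstrap fallback, reusing cached bootstrap samples when the parameters match.
#
#   se = estimator.estimate_std_error(method="bootstrap", num_simulations=200)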
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
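# Conceptual sketch of the permutation test above (illustrative names; the outcome
# column is assumed to be "y"): the outcome is shuffled to simulate the no-effect null,
# the estimator is re-run on each shuffled dataset, and the p-value is the conservative
# tail fraction of null estimates beyond the observed value (upper-tail case shown).
#
#   null = np.sort([estimate_fn(data.assign(y=np.random.permutation(data["y"])))
#                   for _ in range(n_null_sims)])
#   p_value = 1 - np.searchsorted(null, observed_value, side="left") / len(null)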
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
This is a general procedure; individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug(estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
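# Tiny usage sketch (illustrative values): resampling is skipped only when every
# user-supplied parameter matches the cached bootstrap parameters.
#
#   cached = {"num_simulations": 100, "sample_size_fraction": 1}
#   CausalEstimator.is_bootstrap_parameter_changed(cached, {"num_simulations": 100})  # False
#   CausalEstimator.is_bootstrap_parameter_changed(cached, {"num_simulations": 500})  # True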
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
Currently requires an explicit method name to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param identifier_name: name of the identification method (e.g., "backdoor" or "iv") used to select the relevant estimand.
:param method: an instance of a CausalEstimator subclass that implements the estimation method to be used.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
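# Hedged usage sketch for the functional API above (not a verbatim dowhy example; the
# estimator class and its constructor arguments follow the current constructor-based
# signature and may change once fit() is introduced):
#
#   from dowhy.causal_estimators.propensity_score_matching_estimator import (
#       PropensityScoreMatchingEstimator,
#   )
#
#   estimator = PropensityScoreMatchingEstimator(
#       data, identified_estimand, ["treatment"], ["outcome"],
#       control_value=0, treatment_value=1, target_units="ate",
#   )
#   estimate = estimate_effect(
#       treatment="treatment", outcome="outcome",
#       identified_estimand=identified_estimand, identifier_name="backdoor",
#       method=estimator, control_value=0, treatment_value=1, target_units="ate",
#   )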
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
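# Usage sketch (hedged; assumes "age" was passed as an effect modifier when the
# estimator was created and that the estimator supports conditional effects):
#
#   cond_effects = estimate.estimate_conditional_effects(effect_modifiers=["age"], num_quantiles=4)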
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs:: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
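# Usage sketch (hedged): the default interpreter is "textual_effect_interpreter", so a
# plain call usually suffices; a specific interpreter can also be named explicitly.
#
#   estimate.interpret()
#   estimate.interpret(method_name="textual_effect_interpreter")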
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 2044d216c322a4b32c6eadce5da7d83463f19c2f | 05bfa49dacf0061988c96c6f3e3756219df5422a | actually, to be fully consistent, let's provide X and y. Here y is the outcome variable, provided as `data[outcome_name]` and X would be data.drop(columns=["outcome_name"]) | amit-sharma | 292 |
py-why/dowhy | 693 | Functional api/estimate effect function | #### Estimate Effect function
* Refactors the estimate effect into a separate function to keep backwards compatibility
#### TODO (future PRs):
* Add a `fit(...)` method to estimators - move data-related parameters from the constructor to the `fit(...)` method (a rough sketch follows below)
* Refactor code to avoid `**kwargs` in `__init__(...)` constructors
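A rough, hypothetical sketch of the planned `fit(...)` refactor (the estimator class, argument names, and column names below are assumptions for illustration, not the final API):

```python
# Hypothetical sketch only: data-related arguments move out of the constructor.
estimator = SomeEstimator(identified_estimand, confidence_intervals="bootstrap")
estimator.fit(data=df, treatment_name="v0", outcome_name="y", effect_modifier_names=["X0"])
estimate = estimator.estimate_effect(control_value=0, treatment_value=1)
```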
| null | 2022-10-18 15:49:21+00:00 | 2022-10-25 17:02:02+00:00 | dowhy/causal_estimator.py | import logging
from collections import namedtuple
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.utils.api import parse_state
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
# Now saving the effect modifiers
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"],
)
return new_estimator
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
:returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors,statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
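# The naive estimate above is just a difference of group means. Equivalent
# pandas sketch (illustrative only; "t" and "y" are placeholder column names):
# >>> naive_ate = df.loc[df["t"] == 1, "y"].mean() - df.loc[df["t"] == 0, "y"].mean()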
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warn(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
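# Compact sketch of the discretize-and-group pattern used above (illustrative;
# "em" is a numeric effect modifier column and estimate_fn a per-group estimator):
# >>> df["__categorical__em"] = pd.qcut(df["em"], 5, duplicates="drop")
# >>> conditional_effects = df.groupby("__categorical__em").apply(estimate_fn)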
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Compute the sample size as a fraction of the full dataset (population) size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter has changed since the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
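# The interval construction above in compact form (illustrative sketch only):
# >>> variations = np.sort(np.array(bootstrap_estimates) - estimate_value)
# >>> n = len(variations)
# >>> lower = estimate_value - variations[int(confidence_level * n)]
# >>> upper = estimate_value - variations[int((1 - confidence_level) * n)]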
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
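# Sketch of the p-value computation above for an estimate that lies above the
# null median (illustrative only):
# >>> null = np.sort(null_estimates)
# >>> idx = np.searchsorted(null, estimate_value, side="left")
# >>> p_value = 1 - idx / len(null)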
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
This is a general procedure; individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug(estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
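# Further illustrative calls on the estimate object defined above (sketch; the
# effect-modifier name "X0" is a placeholder):
# >>> estimate.get_standard_error()                      # bootstrap by default
# >>> estimate.estimate_conditional_effects(["X0"], num_quantiles=5)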
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
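# Illustrative constructor call for a concrete estimator subclass (sketch; the
# subclass name, data frame, and column names below are assumptions):
# >>> est = SomeEstimator(data=df, identified_estimand=estimand,
# ...                     treatment=["v0"], outcome=["y"],
# ...                     confidence_intervals="bootstrap", num_simulations=200)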
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
:returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors,statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warn(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Compute the sample size as a fraction of the full dataset (population) size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
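# Minimal form of the bootstrap loop above (illustrative sketch; estimate_on is
# a hypothetical helper that refits the estimator and returns an effect value):
# >>> results = np.zeros(num_simulations)
# >>> for i in range(num_simulations):
# ...     results[i] = estimate_on(resample(df, n_samples=len(df)))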
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
            # Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
        # Now we take the (1 - p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
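        # Subtracting a large positive variation shrinks the estimate (lower bound), while subtracting the
        # (typically negative) variation at the (1 - p)th index enlarges it (upper bound)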
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
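                # Permuting the outcome breaks any treatment-outcome association, which simulates the null hypothesis of zero effect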
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
            self.logger.debug("Estimate value: %s, naive observed estimate: %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
    Currently requires an explicit estimator object to be passed in as the "method" argument. Estimation methods follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
    :param identifier_name: name of the identification strategy (e.g., "backdoor" or "iv") whose estimand should be used from identified_estimand.
    :param method: an instance of a CausalEstimator subclass implementing the desired estimation method.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
    :param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
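
    Example (an illustrative sketch only; assumes ``df`` is a pandas DataFrame, ``estimand`` is an
    IdentifiedEstimand obtained from an identification step, and that a propensity-score estimator
    is appropriate for the data)::

        estimator = PropensityScoreMatchingEstimator(df, estimand, ["treatment"], ["outcome"])
        estimate = estimate_effect(["treatment"], ["outcome"], estimand, "backdoor", estimator)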
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
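    # Delegate the actual estimation to the estimator object; significance testing and confidence
    # intervals are handled inside its estimate_effect() based on how the estimator was configured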
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
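            # Make each optional keyword parameter accessible as an attribute (e.g., __str__ looks up cate_estimates this way)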
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 2044d216c322a4b32c6eadce5da7d83463f19c2f | 05bfa49dacf0061988c96c6f3e3756219df5422a | Right now, this `estimate_effect` should also take fit_estimator param. But the ideal solution later is:
```
if fit_estimate: method.fit()
method.effect()
``` | amit-sharma | 293 |
py-why/dowhy | 693 | Functional api/estimate effect function | #### Estimate Effect function
* Refactors the estimate effect into a separate function to keep backwards compatibility
#### TODO (future PRs):
* Add `fit(...)` method to estimators - Move data related parameters from the constructor to the `fit(...)` method
* Refactor code to avoid `**kwargs` in `__init__(...)` constructors
| null | 2022-10-18 15:49:21+00:00 | 2022-10-25 17:02:02+00:00 | dowhy/causal_estimator.py | import logging
from collections import namedtuple
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.utils.api import parse_state
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
    DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
# Now saving the effect modifiers
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
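                # One-hot encode categorical effect modifiers; drop_first avoids redundant (perfectly collinear) dummy columns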
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
            respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
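
        Example (a sketch; assumes ``estimate`` was returned by a previous estimation and ``new_df`` is a
        resampled data frame with the same columns)::

            new_estimator = CausalEstimator.get_estimator_object(new_df, identified_estimand, estimate)
            new_estimate = new_estimator.estimate_effect()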
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"],
)
return new_estimator
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
        :returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
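
        Example (a rough sketch; assumes a concrete child estimator such as LinearRegressionEstimator
        and an already-identified estimand)::

            estimator = LinearRegressionEstimator(
                df, estimand, ["treatment"], ["outcome"], test_significance=True
            )
            estimate = estimator.estimate_effect()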
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
            self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
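            # Draw a bootstrap sample of the data with replacement (sklearn's resample replaces by default)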
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
            # Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
        # Now we take the (1 - p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
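        # Subtracting a large positive variation shrinks the estimate (lower bound), while subtracting the
        # (typically negative) variation at the (1 - p)th index enlarges it (upper bound)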
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
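                # Permuting the outcome breaks any treatment-outcome association, which simulates the null hypothesis of zero effect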
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
            self.logger.debug("Estimate value: %s, naive observed estimate: %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
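# A hedged end-to-end sketch of how these pieces fit together (the estimator subclass,
# dataframe and column names below are illustrative assumptions, not part of this file):
#
#     estimator = SomeEstimatorSubclass(
#         data=df,
#         identified_estimand=identified_estimand,
#         treatment=["treatment"],
#         outcome=["outcome"],
#         test_significance="bootstrap",
#         confidence_intervals="bootstrap",
#     )
#     estimate = estimator.estimate_effect()   # returns a CausalEstimate (defined below)
#     estimate.get_confidence_intervals()
#     estimate.test_stat_significance()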
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
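    # Sketch of the custom discretization mentioned in the docstring above (the dataframe
    # and the "income" column are assumptions for illustration):
    #
    #     df["income_bin"] = pd.qcut(df["income"], q=4, labels=False)
    #     # pass the pre-binned column (ideally also listed as an effect modifier when the
    #     # estimator is created) instead of letting this method discretize it:
    #     estimate.estimate_conditional_effects(effect_modifiers=["income_bin"])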
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
    DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
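    # A hedged construction sketch (PropensityScoreMatchingEstimator is one existing
    # subclass; the dataframe and column names are illustrative assumptions):
    #
    #     from dowhy.causal_estimators.propensity_score_matching_estimator import (
    #         PropensityScoreMatchingEstimator,
    #     )
    #     estimator = PropensityScoreMatchingEstimator(
    #         data=df,
    #         identified_estimand=identified_estimand,
    #         treatment=["treatment"],
    #         outcome=["outcome"],
    #         target_units="att",
    #     )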
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
            respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
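    # Illustrative use of get_estimator_object, e.g. from a refuter that re-runs the
    # estimator on modified data (new_df, identified_estimand and estimate are assumed to exist):
    #
    #     new_estimator = CausalEstimator.get_estimator_object(new_df, identified_estimand, estimate)
    #     new_effect = new_estimator.estimate_effect()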
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
        :returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
            self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
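    # Minimal standalone illustration of the qcut + groupby pattern used above
    # (toy data; unrelated to any particular estimator):
    #
    #     toy = pd.DataFrame({"em": np.random.rand(100), "y": np.random.rand(100)})
    #     toy["__categorical__em"] = pd.qcut(toy["em"], 5, duplicates="drop")
    #     per_bin = toy.groupby("__categorical__em").apply(lambda g: g["y"].mean())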
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
            # Check whether any bootstrap parameter has changed since the previous run
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
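    # Tiny worked example of the interval arithmetic above (the numbers are assumptions):
    # with estimate_value=10.0, bootstrap estimates [9.0, 9.5, 10.0, 10.5, 11.5] and
    # confidence_level=0.95, the sorted variations are [-1.0, -0.5, 0.0, 0.5, 1.5];
    # upper_bound_index = int(0.05 * 5) = 0 and lower_bound_index = int(0.95 * 5) = 4,
    # giving lower_bound = 10.0 - 1.5 = 8.5 and upper_bound = 10.0 - (-1.0) = 11.0.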
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
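    # The null distribution above is built by permuting the outcome column, which breaks any
    # treatment-outcome association. A condensed sketch of the same idea (estimate_on is a
    # hypothetical helper standing in for re-fitting the estimator on the permuted data):
    #
    #     null_estimates = [estimate_on(self._data.assign(dummy_outcome=np.random.permutation(self._outcome)))
    #                       for _ in range(num_null_simulations)]
    #     # the p-value is the fraction of null estimates at least as extreme as the
    #     # observed estimate (two-sided, as implemented above).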
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
        This is a general procedure that individual child estimators can override with their own methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
            self.logger.debug("Estimate: %s, naive observed estimate: %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
    The estimation method is specified via the method argument. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
    :param identifier_name: name of the identification strategy (e.g., "backdoor" or "iv") whose estimand from identified_estimand should be used.
    :param method: an instance of a CausalEstimator subclass that implements the chosen estimation method.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
    :param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
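# A hedged usage sketch of this functional API (the estimator subclass, dataframe and
# column names are illustrative assumptions; identified_estimand comes from an earlier
# identification step such as CausalModel.identify_effect):
#
#     from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
#
#     estimator = LinearRegressionEstimator(
#         data=df,
#         identified_estimand=identified_estimand,
#         treatment=["v0"],
#         outcome=["y"],
#     )
#     estimate = estimate_effect(
#         treatment="v0",
#         outcome="y",
#         identified_estimand=identified_estimand,
#         identifier_name="backdoor",
#         method=estimator,
#     )
#     print(estimate.value)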
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 2044d216c322a4b32c6eadce5da7d83463f19c2f | 05bfa49dacf0061988c96c6f3e3756219df5422a | what should be desired functionality for these parameters? test_significance, evaluate_effect_strength, confidence_intervals, effect_modifiers, etc.?
Ideally, these should be passed to the estimator. That makes sense if the user does not provide any `method`: in that case, we can add these parameters when initializing the Estimator class.
But if the user has provided the estimator object, then it is tricky. One option is to make these fit-related params optional params of the estimate_effect method of an estimator. The other is to simply update them as we do with the `update_input` method.
My thinking is that test_significance, evaluate_effect_strength, and confidence_intervals do not belong in the constructor for an estimator. This would involve a refactor of all estimator classes, so it is a big step. What do you think? | amit-sharma | 294 |
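A rough sketch of the constructor/fit split being discussed (a hypothetical API for illustration, not what this PR implements; the estimator class and column names are assumptions):

    estimator = SomeEstimator(identified_estimand, treatment=["t"], outcome=["y"], **method_params)
    estimator.fit(df, effect_modifiers=["em"])                      # data-related inputs move here
    estimate = estimator.estimate_effect(control_value=0, treatment_value=1, target_units="ate")
    estimate.test_stat_significance()                               # significance/CI become on-demand calls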
py-why/dowhy | 693 | Functional api/estimate effect function | #### Estimate Effect function
* Refactors the estimate effect logic into a separate `estimate_effect` function while keeping backwards compatibility
#### TODO (future PRs):
* Add a `fit(...)` method to estimators - move data-related parameters from the constructor to the `fit(...)` method
* Refactor code to avoid `**kwargs` in `__init__(...)` constructors
| null | 2022-10-18 15:49:21+00:00 | 2022-10-25 17:02:02+00:00 | dowhy/causal_estimator.py | import logging
from collections import namedtuple
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.utils.api import parse_state
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
    DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
# Now saving the effect modifiers
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
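    # Illustrative sketch (not part of the original module): a concrete child class is
    # typically instantiated with the arguments documented above. `MyEstimator` is a
    # hypothetical subclass used only for illustration.
    #
    #   estimator = MyEstimator(
    #       data=df,
    #       identified_estimand=identified_estimand,
    #       treatment=["v0"],
    #       outcome=["y"],
    #       control_value=0,
    #       treatment_value=1,
    #       confidence_intervals="bootstrap",
    #   )
    #   estimate = estimator.estimate_effect()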
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
            respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"],
)
return new_estimator
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
        :returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
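    # Illustrative sketch (assumes `estimator` is an already constructed child estimator):
    # the returned CausalEstimate exposes the point estimate and the helpers defined
    # further below in this file.
    #
    #   estimate = estimator.estimate_effect()
    #   estimate.value                        # point estimate
    #   estimate.get_confidence_intervals()   # bootstrap CI unless the estimator overrides it
    #   estimate.test_stat_significance()     # {"p_value": ...}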
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warn(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
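    # Illustrative sketch of the custom discretization suggested in the docstring above
    # (the dataframe and column names are hypothetical): bin a numeric effect modifier
    # yourself and pass the new column's name instead of the raw one.
    #
    #   df["income_bucket"] = pd.qcut(df["income"], q=4, duplicates="drop")
    #   estimate.estimate_conditional_effects(effect_modifiers=["income_bucket"])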
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
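    # Minimal standalone sketch of the bootstrap loop above, assuming a generic
    # `estimate_fn(df) -> float` in place of re-instantiating an estimator:
    #
    #   import numpy as np
    #   from sklearn.utils import resample
    #
    #   def bootstrap_estimates(df, estimate_fn, num_simulations=100, fraction=1.0):
    #       n = int(fraction * len(df))
    #       return np.array([estimate_fn(resample(df, n_samples=n)) for _ in range(num_simulations)])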
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
            # Regenerate if any parameter has changed since the previous bootstrap run
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
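    # Worked sketch of the interval computation above (a pivot-style bootstrap CI),
    # assuming `boot` is an array of bootstrap estimates and `est` the point estimate:
    #
    #   variations = np.sort(boot - est)
    #   upper_idx = int((1 - confidence_level) * len(variations))
    #   lower_idx = int(confidence_level * len(variations))
    #   lower, upper = est - variations[lower_idx], est - variations[upper_idx]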
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
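    # Illustrative one-liner for the computation above: the bootstrap standard error is
    # simply the standard deviation of the cached bootstrap estimates, e.g.
    #
    #   std_error = np.std(self._bootstrap_estimates.estimates)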
def _estimate_std_error(self, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
            # If the p-value is exactly 0 or 1, report a range bounded by the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
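    # Minimal standalone sketch of the permutation test above, assuming a generic
    # `estimate_fn(df) -> float` and an outcome column named "y":
    #
    #   def permutation_p_value(df, estimate_fn, observed, num_simulations=1000):
    #       null = np.sort([
    #           estimate_fn(df.assign(y=np.random.permutation(df["y"])))
    #           for _ in range(num_simulations)
    #       ])
    #       idx = np.searchsorted(null, observed)
    #       # two-sided and conservative, mirroring the comparison against the null median above
    #       return 1 - idx / num_simulations if observed > np.median(null) else idx / num_simulations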
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
        This is a general procedure; individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug(estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
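    # Example of the check above (hypothetical values): only parameters that are both
    # present in the new call and different from the cached ones trigger resampling.
    #
    #   cached = {"num_simulations": 100, "sample_size_fraction": 1}
    #   CausalEstimator.is_bootstrap_parameter_changed(cached, {"num_simulations": 200})   # True
    #   CausalEstimator.is_bootstrap_parameter_changed(cached, {"num_simulations": None})  # False, reuse cache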
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
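    # Illustrative usage (the method name is the module default, DEFAULT_INTERPRET_METHOD):
    #
    #   estimate.interpret(method_name="textual_effect_interpreter")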
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
    DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
            respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
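    # Illustrative note on the kwargs guard above: if no method_params were recorded on
    # the estimate, an empty dict is unpacked instead of None, e.g.
    #
    #   params = estimate.params["method_params"]
    #   new_kwargs = params if params is not None else {}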
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
        :returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warn(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
            # Regenerate if any parameter has changed since the previous bootstrap run
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
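    # Worked example of the p-value arithmetic above (illustrative numbers): with
    # num_null_simulations=1000 and the observed estimate above the null median, a searchsorted
    # index of 990 gives p_value = 1 - 990/1000 ≈ 0.01. If the estimate exceeds every
    # permuted-outcome estimate, the raw p-value of 0 is widened to the range (0, 1/1000).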
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
            self.logger.debug("Estimate: %s, naive observational estimate: %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
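    # Worked example for the "fraction-effect" measure above (illustrative numbers): if the causal
    # estimate is 2.0 and the naive difference in outcome means between treated and untreated units
    # is 4.0, the reported fraction-effect is 2.0 / 4.0 = 0.5.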
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
    Currently requires an explicit estimation method to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
    :param identifier_name: name of the identification method (e.g., "backdoor" or "iv") whose estimand should be used.
    :param method: an instance of a CausalEstimator subclass that implements the chosen estimation method.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
    :param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
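# Hedged usage sketch for estimate_effect (the column names and the chosen estimator are
# illustrative assumptions, not requirements of this function): assuming `data` is a pandas
# DataFrame, `identified_estimand` comes from an identification step, and a backdoor estimator
# such as PropensityScoreMatchingEstimator has already been constructed,
#
#   estimator = PropensityScoreMatchingEstimator(
#       data,
#       identified_estimand,
#       identified_estimand.treatment_variable,
#       identified_estimand.outcome_variable,
#   )
#   estimate = estimate_effect(
#       treatment="v0",
#       outcome="y",
#       identified_estimand=identified_estimand,
#       identifier_name="backdoor",
#       method=estimator,
#   )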
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
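    # Hedged usage sketch for a returned CausalEstimate (values and column names are illustrative):
    # once an estimator has attached itself via add_estimator, the estimate can be queried directly,
    # e.g.
    #
    #   estimate.get_confidence_intervals(confidence_level=0.95, method="bootstrap")
    #   estimate.test_stat_significance()                   # returns {"p_value": ...}
    #   estimate.estimate_conditional_effects(["gender"])   # per-group effects, column name assumed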
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 2044d216c322a4b32c6eadce5da7d83463f19c2f | 05bfa49dacf0061988c96c6f3e3756219df5422a | I'll add this in a future PR when I start to refactor all the estimator objects | andresmor-ms | 295 |
py-why/dowhy | 693 | Functional api/estimate effect function | #### Estimate Effect function
* Refactors the estimate effect into a separate function to keep backwards compatibility
#### TODO (future PRs):
* Add `fit(...)` method to estimators - Move data related parameters from the constructor to the `fit(...)` method
* Refactor code to avoid `**kwargs` in `__init__(...)` constructors
| null | 2022-10-18 15:49:21+00:00 | 2022-10-25 17:02:02+00:00 | dowhy/causal_estimator.py | import logging
from collections import namedtuple
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.utils.api import parse_state
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for a estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
# Now saving the effect modifiers
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
        respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"],
)
return new_estimator
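    # Hedged sketch of how a refuter might reuse this helper (variable names are illustrative):
    # given a prior CausalEstimate `estimate` and resampled or placebo data `new_df`, a clone of
    # the original estimator can be built and re-run with
    #
    #   new_estimator = CausalEstimator.get_estimator_object(new_df, identified_estimand, estimate)
    #   refuted_value = new_estimator.estimate_effect().value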
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
        :returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
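    # Minimal sketch of what a child estimator overrides (illustrative only; the real estimators in
    # dowhy.causal_estimators implement considerably more):
    #
    #   class MyEstimator(CausalEstimator):
    #       def _estimate_effect(self):
    #           effect = ...  # compute the point estimate from self._data
    #           return CausalEstimate(effect, self._target_estimand, None,
    #                                 control_value=self._control_value,
    #                                 treatment_value=self._treatment_value)
    #
    # estimate_effect() then layers significance testing, confidence intervals and effect strength
    # on top of that point estimate, depending on the flags passed at construction.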
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
            self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
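    # Illustrative walk-through of the discretization above (column name assumed): a numeric effect
    # modifier such as "income" is cut into num_quantiles bins via pd.qcut under a temporary
    # "__categorical__income" column, effects are computed per group with
    # groupby().apply(estimate_effect_fn), and the temporary column is removed before the
    # (multi-index) result is returned.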
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
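    # Hedged usage sketch for the do-operator (supported only by estimators that implement _do):
    # estimator.do(1) returns the expected outcome under an intervention setting the treatment to 1,
    # so a simple average effect could be read off as
    #
    #   ate = estimator.do(1) - estimator.do(0)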
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
            # Check if any parameter has changed since the previous bootstrap estimates were computed
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
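    # Worked example of the percentile bookkeeping above (illustrative numbers): with
    # confidence_level=0.95 and 100 bootstrap variations sorted ascending, the indices land near
    # the 5th and 95th entries, so the returned interval is roughly
    # (estimate - variation[95], estimate - variation[5]), i.e. the basic (pivot) bootstrap interval.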
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
            self.logger.debug("Estimate: %s, naive observational estimate: %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
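    # Illustrative example of the caching check above: with cached params
    # {"num_simulations": 100, "sample_size_fraction": 1}, a call passing num_simulations=200
    # returns True (fresh resampling is needed), while a call passing num_simulations=None leaves
    # the cached bootstrap estimates in place.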
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
    DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
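    # Illustrative sketch (editorial addition): the dummy-encoding of effect modifiers applied in
    # __init__ can be reproduced with plain pandas. The frame below is a toy example with
    # hypothetical column names, not data used by the estimator.
    #
    #   import pandas as pd
    #   em = pd.DataFrame({"gender": ["m", "f", "f"], "age_group": ["young", "old", "old"]})
    #   pd.get_dummies(em, drop_first=True)  # yields columns such as gender_m, age_group_young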
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
            respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
        :returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
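    # Illustrative sketch of the naive (unadjusted) estimate above on toy data; the column names
    # are hypothetical and chosen only for this example.
    #
    #   import pandas as pd
    #   df = pd.DataFrame({"t": [1, 1, 0, 0], "y": [3.0, 5.0, 1.0, 2.0]})
    #   naive = df.loc[df["t"] == 1, "y"].mean() - df.loc[df["t"] == 0, "y"].mean()  # 2.5
    #   # a simple difference of group means, with no adjustment for confounders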
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warn(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
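    # Illustrative sketch of the discretize-then-groupby pattern used above (toy data and
    # hypothetical column names; the real method groups by the fitted effect modifiers).
    #
    #   import numpy as np, pandas as pd
    #   df = pd.DataFrame({"y": np.arange(10.0), "age": np.arange(10)})
    #   df["__categorical__age"] = pd.qcut(df["age"], 2, duplicates="drop")
    #   df.groupby("__categorical__age").apply(lambda g: g["y"].mean())
    #   # -> one aggregate value per quantile bin of the discretized effect modifier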
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
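    # Illustrative sketch (hypothetical `estimator` instance): the do-operator returns the
    # expected outcome under an intervention that fixes the treatment to a given value.
    #
    #   y1 = estimator.do(1)  # E[Y | do(T=1)]
    #   y0 = estimator.do(0)  # E[Y | do(T=0)]
    #   effect = y1 - y0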
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
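    # Illustrative sketch of the resampling loop above, using a plain mean as the statistic
    # (toy data; the real loop refits a full estimator on every resample).
    #
    #   import numpy as np, pandas as pd
    #   from sklearn.utils import resample
    #   df = pd.DataFrame({"y": np.random.normal(size=200)})
    #   boot = np.array([resample(df, n_samples=len(df))["y"].mean() for _ in range(100)])
    #   # `boot` plays the role of BootstrapEstimates.estimates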
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
        # Now we take the (1 - p)-th and the p-th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
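    # Illustrative numeric sketch of the interval computed above, assuming a point estimate `est`
    # and an array `boot` of bootstrap estimates (hypothetical names).
    #
    #   variations = np.sort(boot - est)
    #   lower = est - variations[int(confidence_level * len(variations))]
    #   upper = est - variations[int((1 - confidence_level) * len(variations))]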
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
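    # Illustrative sketch of how the permutation p-value above is read off the sorted null
    # distribution (hypothetical arrays; the actual test is two-sided).
    #
    #   null = np.sort(null_estimates)  # estimates obtained under permuted outcomes
    #   idx = np.searchsorted(null, estimate_value, side="left")
    #   p_upper_tail = 1 - idx / len(null)  # applies when the estimate exceeds the null median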
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
        This is a general procedure; individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug(estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
    Currently requires an explicit method name to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
    :param identifier_name: name of the identification method (e.g., "backdoor" or "iv") whose estimand should be used.
    :param method: an instantiated estimator object (a subclass of CausalEstimator) that carries out the estimation.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
    :param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
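# Illustrative sketch of the typical end-to-end call path (editorial addition; the object names
# below are hypothetical). The string method names follow the conventions documented in
# estimate_effect() above.
#
#   from dowhy import CausalModel
#   model = CausalModel(data=df, treatment="v0", outcome="y", graph=causal_graph)
#   estimand = model.identify_effect()
#   estimate = model.estimate_effect(estimand, method_name="backdoor.propensity_score_matching")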
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 2044d216c322a4b32c6eadce5da7d83463f19c2f | 05bfa49dacf0061988c96c6f3e3756219df5422a | Agree, I'll do this when I refactor all the estimator objects in a future PR, that's why I decided to leave it as it currently is | andresmor-ms | 296 |
py-why/dowhy | 693 | Functional api/estimate effect function | #### Estimate Effect function
* Refactors the estimate effect into a separate function to keep backwards compatibility
#### TODO (future PRs):
* Add `fit(...)` method to estimators - Move data related parameters from the constructor to the `fit(...)` method
* Refactor code to avoid `**kwargs` in `__init__(...)` constructors
| null | 2022-10-18 15:49:21+00:00 | 2022-10-25 17:02:02+00:00 | dowhy/causal_estimator.py | import logging
from collections import namedtuple
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.utils.api import parse_state
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
    DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
# Now saving the effect modifiers
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
            respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"],
)
return new_estimator
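    # Illustrative sketch (hypothetical names): refutation code can re-create an estimator of the
    # same class on perturbed data by reading the class and settings off an existing estimate.
    #
    #   new_est = CausalEstimator.get_estimator_object(perturbed_df, identified_estimand, estimate)
    #   new_effect = new_est.estimate_effect()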
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
        :returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
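# Illustrative sketch (toy data, not part of the library) of the discretization step above:
# a numeric effect modifier is binned into quantiles with pd.qcut and the effect function is
# then applied group-wise.
#
#   df = pd.DataFrame({"age": np.random.uniform(20, 60, 100), "y": np.random.randn(100)})
#   df["__categorical__age"] = pd.qcut(df["age"], 5, duplicates="drop")
#   per_group_effect = df.groupby("__categorical__age").apply(lambda g: g["y"].mean())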
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check whether any parameter has changed since the previous bootstrap estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
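# Worked sketch (toy numbers, not part of the library) of the interval computation above:
# with confidence_level=0.95 and 100 sorted variations (bootstrap_estimate - estimate),
# upper_bound_index = int(0.05 * 100) = 5 and lower_bound_index = int(0.95 * 100) = 95, so the
# interval is [estimate - variations[95], estimate - variations[5]].
#
#   variations = np.sort(np.random.randn(100) * 0.1)  # stand-in for sorted_bootstrap_variations
#   estimate = 2.0
#   ci = (estimate - variations[95], estimate - variations[5])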
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
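# Worked sketch (toy numbers, not part of the library) of the two-sided test above: with 1000
# permutation estimates under the null, an estimate larger than the null median gets
# p = 1 - searchsorted(sorted_null, estimate) / 1000; if the estimate exceeds every null value,
# the reported p-value becomes the range (0, 1/1000).
#
#   sorted_null = np.sort(np.random.randn(1000))
#   idx = np.searchsorted(sorted_null, 3.5, side="left")
#   p_value = 1 - idx / 1000  # likely 0.0 here, hence reported as the tuple (0, 0.001)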
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
This is a general procedure that individual child estimators can override with different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug("Estimate: %s, Naive estimate: %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs:: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
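# Illustrative sketch (not part of the library): this helper is useful when an existing estimate
# has to be reproduced on new data, e.g. a resampled dataset. The variable names below are
# hypothetical and assume `estimate` was produced by an estimator of this class.
#
#   resampled_df = df.sample(frac=1, replace=True)
#   new_estimator = CausalEstimator.get_estimator_object(resampled_df, identified_estimand, estimate)
#   new_estimate = new_estimator.estimate_effect()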
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
:returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check whether any parameter has changed since the previous bootstrap estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
This is a general procedure that individual child estimators can override with different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug("Estimate: %s, Naive estimate: %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
    Currently requires an explicitly constructed estimator object to be passed in, together with the name of the identification method used. Estimation method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
    :param identifier_name: name of the identification method used (e.g., "backdoor" or "iv"); it is set on the identified_estimand before estimation.
    :param method: an instance of a CausalEstimator subclass implementing the chosen estimation method.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
    :param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
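    Example (an illustrative sketch, not part of the original docstring; it assumes a
    pandas DataFrame `data`, an `identified_estimand` returned by CausalModel.identify_effect,
    and column names "treatment"/"outcome", and builds a PropensityScoreMatchingEstimator
    by hand to pass as `method`)::
        from dowhy.causal_estimators.propensity_score_matching_estimator import (
            PropensityScoreMatchingEstimator,
        )
        # Tell the estimand which identification strategy the estimator will use
        identified_estimand.set_identifier_method("backdoor")
        estimator = PropensityScoreMatchingEstimator(
            data, identified_estimand, ["treatment"], ["outcome"]
        )
        # The functional form takes the already-constructed estimator as `method`
        estimate = estimate_effect(
            treatment=["treatment"],
            outcome=["outcome"],
            identified_estimand=identified_estimand,
            identifier_name="backdoor",
            method=estimator,
        )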
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
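        Example (an illustrative sketch; it assumes `estimate` is a CausalEstimate whose
        estimator was created with confidence intervals enabled)::
            # 95% bootstrap confidence interval around the stored estimate value
            ci = estimate.get_confidence_intervals(confidence_level=0.95, method="bootstrap")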
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
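        Example (an illustrative sketch; it assumes `estimate` is a CausalEstimate whose
        estimator supports the bootstrap significance test)::
            # Non-parametric test based on bootstrap resampling of the data
            signif = estimate.test_stat_significance(method="bootstrap")
            print(signif["p_value"])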
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
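        Example (an illustrative sketch; the effect modifier name "X0" is an assumption
        about the columns available in the data used by the estimator)::
            # Conditional effects over quintiles of a numeric effect modifier
            conditional_estimates = estimate.estimate_conditional_effects(
                effect_modifiers=["X0"], num_quantiles=5
            )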
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 2044d216c322a4b32c6eadce5da7d83463f19c2f | 05bfa49dacf0061988c96c6f3e3756219df5422a | I think we should move those params into the estimator object, so that the user can provide them when initializing it. When I start refactoring the estimators, we can find the best place for each parameter. | andresmor-ms | 297
py-why/dowhy | 693 | Functional api/estimate effect function | #### Estimate Effect function
* Refactors the estimate effect into a separate function to keep backwards compatibility
#### TODO (future PRs):
* Add `fit(...)` method to estimators - Move data related parameters from the constructor to the `fit(...)` method
* Refactor code to avoid `**kwargs` in `__init__(...)` constructors
| null | 2022-10-18 15:49:21+00:00 | 2022-10-25 17:02:02+00:00 | dowhy/causal_model.py | """ Module containing the main model class for the dowhy package.
"""
import logging
from itertools import combinations
from sympy import init_printing
import dowhy.causal_estimators as causal_estimators
import dowhy.causal_refuters as causal_refuters
import dowhy.graph_learners as graph_learners
import dowhy.utils.cli_helpers as cli
from dowhy.causal_estimator import CausalEstimate
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier import AutoIdentifier, BackdoorAdjustment, IDIdentifier
from dowhy.causal_identifier.identify_effect import EstimandType
from dowhy.causal_refuters.graph_refuter import GraphRefuter
from dowhy.utils.api import parse_state
init_printing() # To display symbolic math symbols
class CausalModel:
"""Main class for storing the causal model state."""
def __init__(
self,
data,
treatment,
outcome,
graph=None,
common_causes=None,
instruments=None,
effect_modifiers=None,
estimand_type="nonparametric-ate",
proceed_when_unidentifiable=False,
missing_nodes_as_confounders=False,
identify_vars=False,
**kwargs,
):
"""Initialize data and create a causal graph instance.
Assigns treatment and outcome variables.
Also checks and finds the common causes and instruments for treatment
and outcome.
At least one of graph, common_causes or instruments must be provided. If
none of these variables are provided, then learn_graph() can be used later.
:param data: a pandas dataframe containing treatment, outcome and other
variables.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param graph: path to DOT file containing a DAG or a string containing
a DAG specification in DOT format
        :param common_causes: names of common causes of treatment and outcome. Only used when graph is None.
:param instruments: names of instrumental variables for the effect of
treatment on outcome. Only used when graph is None.
:param effect_modifiers: names of variables that can modify the treatment effect. If not provided, then the causal graph is used to find the effect modifiers. Estimators will return multiple different estimates based on each value of effect_modifiers.
:param estimand_type: the type of estimand requested (currently only "nonparametric-ate" is supported). In the future, may support other specific parametric forms of identification.
:param proceed_when_unidentifiable: does the identification proceed by ignoring potential unobserved confounders. Binary flag.
:param missing_nodes_as_confounders: Binary flag indicating whether variables in the dataframe that are not included in the causal graph, should be automatically included as confounder nodes.
        :param identify_vars: Variable deciding whether to compute common causes, instruments and effect modifiers while initializing the class. identify_vars should be set to False when the user is providing common_causes, instruments or effect modifiers on their own (otherwise the identify_vars code can override the user-provided values). It also has no effect if no graph is given.
:returns: an instance of CausalModel class
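        Example (an illustrative sketch; it assumes the simulated-data helper
        dowhy.datasets.linear_dataset shipped with the package, as used in the tutorials)::
            import dowhy.datasets
            data = dowhy.datasets.linear_dataset(
                beta=10, num_common_causes=3, num_samples=1000, treatment_is_binary=True
            )
            model = CausalModel(
                data=data["df"],
                treatment=data["treatment_name"],
                outcome=data["outcome_name"],
                graph=data["gml_graph"],
            )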
"""
self._data = data
self._treatment = parse_state(treatment)
self._outcome = parse_state(outcome)
self._effect_modifiers = parse_state(effect_modifiers)
self._estimand_type = estimand_type
self._proceed_when_unidentifiable = proceed_when_unidentifiable
self._missing_nodes_as_confounders = missing_nodes_as_confounders
self.logger = logging.getLogger(__name__)
if graph is None:
self.logger.warning("Causal Graph not provided. DoWhy will construct a graph based on data inputs.")
self._common_causes = parse_state(common_causes)
self._instruments = parse_state(instruments)
if common_causes is not None and instruments is not None:
self._graph = CausalGraph(
self._treatment,
self._outcome,
common_cause_names=self._common_causes,
instrument_names=self._instruments,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
elif common_causes is not None:
self._graph = CausalGraph(
self._treatment,
self._outcome,
common_cause_names=self._common_causes,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
elif instruments is not None:
self._graph = CausalGraph(
self._treatment,
self._outcome,
instrument_names=self._instruments,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
else:
self.logger.warning(
"Relevant variables to build causal graph not provided. You may want to use the learn_graph() function to construct the causal graph."
)
self._graph = CausalGraph(
self._treatment,
self._outcome,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
else:
self.init_graph(graph=graph, identify_vars=identify_vars)
self._other_variables = kwargs
self.summary()
def init_graph(self, graph, identify_vars):
"""
Initialize self._graph using graph provided by the user.
"""
# Create causal graph object
self._graph = CausalGraph(
self._treatment,
self._outcome,
graph,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
missing_nodes_as_confounders=self._missing_nodes_as_confounders,
)
if identify_vars:
self._common_causes = self._graph.get_common_causes(self._treatment, self._outcome)
self._instruments = self._graph.get_instruments(self._treatment, self._outcome)
# Sometimes, effect modifiers from the graph may not match those provided by the user.
# (Because some effect modifiers may also be common causes)
# In such cases, the user-provided modifiers are used.
# If no effect modifiers are provided, then the ones from the graph are used.
if self._effect_modifiers is None or not self._effect_modifiers:
self._effect_modifiers = self._graph.get_effect_modifiers(self._treatment, self._outcome)
def get_common_causes(self):
self._common_causes = self._graph.get_common_causes(self._treatment, self._outcome)
return self._common_causes
def get_instruments(self):
self._instruments = self._graph.get_instruments(self._treatment, self._outcome)
return self._instruments
def get_effect_modifiers(self):
self._effect_modifiers = self._graph.get_effect_modifiers(self._treatment, self._outcome)
return self._effect_modifiers
def learn_graph(self, method_name="cdt.causality.graph.LiNGAM", *args, **kwargs):
"""
Learn causal graph from the data. This function takes the method name as input and initializes the
causal graph object using the learnt graph.
:param self: instance of the CausalModel class (or its subclass)
:param method_name: Exact method name of the object to be imported from the concerned library.
:returns: an instance of the CausalGraph class initialized with the learned graph.
"""
# Import causal discovery class
str_arr = method_name.split(".", maxsplit=1)
library_name = str_arr[0]
causal_discovery_class = graph_learners.get_discovery_class_object(library_name)
model = causal_discovery_class(self._data, method_name, *args, **kwargs)
graph = model.learn_graph()
# Initialize causal graph object
self.init_graph(graph=graph)
return self._graph
def identify_effect(
self, estimand_type=None, method_name="default", proceed_when_unidentifiable=None, optimize_backdoor=False
):
"""Identify the causal effect to be estimated, using properties of the causal graph.
:param method_name: Method name for identification algorithm. ("id-algorithm" or "default")
:param proceed_when_unidentifiable: Binary flag indicating whether identification should proceed in the presence of (potential) unobserved confounders.
:returns: a probability expression (estimand) for the causal effect if identified, else NULL
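        Example (an illustrative sketch; `model` is assumed to be a CausalModel instance)::
            # Identify the estimand, proceeding even if unobserved confounders may exist
            identified_estimand = model.identify_effect(proceed_when_unidentifiable=True)
            print(identified_estimand)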
"""
if proceed_when_unidentifiable is None:
proceed_when_unidentifiable = self._proceed_when_unidentifiable
if estimand_type is None:
estimand_type = self._estimand_type
estimand_type = EstimandType(estimand_type)
if method_name == "id-algorithm":
identifier = IDIdentifier()
else:
identifier = AutoIdentifier(
estimand_type=estimand_type,
backdoor_adjustment=BackdoorAdjustment(method_name),
proceed_when_unidentifiable=proceed_when_unidentifiable,
optimize_backdoor=optimize_backdoor,
)
identified_estimand = identifier.identify_effect(
graph=self._graph, treatment_name=self._treatment, outcome_name=self._outcome
)
self.identifier = identifier
return identified_estimand
def estimate_effect(
self,
identified_estimand,
method_name=None,
control_value=0,
treatment_value=1,
test_significance=None,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units="ate",
effect_modifiers=None,
fit_estimator=True,
method_params=None,
):
"""Estimate the identified causal effect.
Currently requires an explicit method name to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". Following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param method_name: name of the estimation method to be used.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
        :param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
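        Example (an illustrative sketch; `identified_estimand` is assumed to be the output
        of CausalModel.identify_effect)::
            # Propensity score matching on the backdoor estimand
            estimate = model.estimate_effect(
                identified_estimand,
                method_name="backdoor.propensity_score_matching",
                target_units="ate",
            )
            print(estimate.value)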
"""
if effect_modifiers is None:
if self._effect_modifiers is None or len(self._effect_modifiers) == 0:
effect_modifiers = self.get_effect_modifiers()
else:
effect_modifiers = self._effect_modifiers
if fit_estimator:
if method_name is None:
# TODO add propensity score as default backdoor method, iv as default iv method, add an informational message to show which method has been selected.
pass
else:
# TODO add dowhy as a prefix to all dowhy estimators
num_components = len(method_name.split("."))
str_arr = method_name.split(".", maxsplit=1)
identifier_name = str_arr[0]
estimator_name = str_arr[1]
identified_estimand.set_identifier_method(identifier_name)
                # This is done as all dowhy estimators have two parts and external ones have three or more parts
if num_components > 2:
estimator_package = estimator_name.split(".")[0]
if estimator_package == "dowhy": # For updated dowhy methods
estimator_method = estimator_name.split(".", maxsplit=1)[
1
] # discard dowhy from the full package name
causal_estimator_class = causal_estimators.get_class_object(estimator_method + "_estimator")
else:
third_party_estimator_package = estimator_package
causal_estimator_class = causal_estimators.get_class_object(
third_party_estimator_package, estimator_name
)
if method_params is None:
method_params = {}
# Define the third-party estimation method to be used
method_params[third_party_estimator_package + "_methodname"] = estimator_name
else: # For older dowhy methods
self.logger.info(estimator_name)
# Process the dowhy estimators
causal_estimator_class = causal_estimators.get_class_object(estimator_name + "_estimator")
if identified_estimand.no_directed_path:
self.logger.warning("No directed path from {0} to {1}.".format(self._treatment, self._outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
self.logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
else:
if method_params is not None and (num_components <= 2 or estimator_package == "dowhy"):
extra_args = method_params.get("init_params", {})
else:
extra_args = {}
if method_params is None:
method_params = {}
self.causal_estimator = causal_estimator_class(
self._data,
identified_estimand,
self._treatment,
self._outcome, # names of treatment and outcome
control_value=control_value,
treatment_value=treatment_value,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
**method_params,
**extra_args,
)
else:
# Estimator had been computed in a previous call
assert self.causal_estimator is not None
causal_estimator_class = self.causal_estimator.__class__
self.causal_estimator.update_input(treatment_value, control_value, target_units)
estimate = self.causal_estimator.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
def do(self, x, identified_estimand, method_name=None, fit_estimator=True, method_params=None):
"""Do operator for estimating values of the outcome after intervening on treatment.
:param x: interventional value of the treatment variable
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param method_name: any of the estimation method to be used. See docs
for estimate_effect method for a list of supported estimation methods.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to compute the do-operation on new
data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method.
:returns: an instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if method_name is None:
pass
else:
str_arr = method_name.split(".", maxsplit=1)
identifier_name = str_arr[0]
estimator_name = str_arr[1]
identified_estimand.set_identifier_method(identifier_name)
causal_estimator_class = causal_estimators.get_class_object(estimator_name + "_estimator")
# Check if estimator's target estimand is identified
if identified_estimand.estimands[identifier_name] is None:
self.logger.warning("No valid identified estimand for using instrumental variables method")
estimate = CausalEstimate(None, None, None, None, None)
else:
if fit_estimator:
# Note that while the name of the variable is the same,
# "self.causal_estimator", this estimator takes in less
# parameters than the same from the
# estimate_effect code. It is not advisable to use the
# estimator from this function to call estimate_effect
# with fit_estimator=False.
self.causal_estimator = causal_estimator_class(
self._data,
identified_estimand,
self._treatment,
self._outcome,
test_significance=False,
**method_params,
)
else:
# Estimator had been computed in a previous call
assert self.causal_estimator is not None
try:
estimate = self.causal_estimator.do(x)
except NotImplementedError:
self.logger.error("Do Operation not implemented or not supported for this estimator.")
raise NotImplementedError
return estimate
def refute_estimate(self, estimand, estimate, method_name=None, show_progress_bar=False, **kwargs):
"""Refute an estimated causal effect.
If method_name is provided, uses the provided method. In the future, we may support automatic selection of suitable refutation tests. Following refutation methods are supported.
* Adding a randomly-generated confounder: "random_common_cause"
* Adding a confounder that is associated with both treatment and outcome: "add_unobserved_common_cause"
        * Replacing the treatment with a placebo (random) variable: "placebo_treatment_refuter"
* Removing a random subset of the data: "data_subset_refuter"
:param estimand: target estimand, an instance of the IdentifiedEstimand class (typically, the output of identify_effect)
:param estimate: estimate to be refuted, an instance of the CausalEstimate class (typically, the output of estimate_effect)
:param method_name: name of the refutation method
:param show_progress_bar: Boolean flag on whether to show a progress bar
:param kwargs: (optional) additional arguments that are passed directly to the refutation method. Can specify a random seed here to ensure reproducible results ('random_seed' parameter). For method-specific parameters, consult the documentation for the specific method. All refutation methods are in the causal_refuters subpackage.
:returns: an instance of the RefuteResult class
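        Example (an illustrative sketch; `identified_estimand` and `estimate` are assumed to
        be the outputs of identify_effect and estimate_effect)::
            # Replace the treatment with a placebo and check that the effect vanishes
            refutation = model.refute_estimate(
                identified_estimand,
                estimate,
                method_name="placebo_treatment_refuter",
            )
            print(refutation)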
"""
if estimate is None or estimate.value is None:
self.logger.error("Aborting refutation! No estimate is provided.")
raise ValueError("Aborting refutation! No valid estimate is provided.")
if method_name is None:
pass
else:
refuter_class = causal_refuters.get_class_object(method_name)
refuter = refuter_class(self._data, identified_estimand=estimand, estimate=estimate, **kwargs)
res = refuter.refute_estimate(show_progress_bar)
return res
def view_model(self, layout="dot", size=(8, 6), file_name="causal_model"):
"""View the causal DAG.
:param layout: string specifying the layout of the graph.
:param size: tuple (x, y) specifying the width and height of the figure in inches.
:param file_name: string specifying the file name for the saved causal graph png.
:returns: a visualization of the graph
"""
self._graph.view_graph(layout, size, file_name)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal model.
:param method_name: method used for interpreting the model. If None,
then default interpreter is chosen that describes the model summary and shows the associated causal graph.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
self.summary(print_to_stdout=True)
self.view_model()
return
method_name_arr = parse_state(method_name)
import dowhy.interpreters as interpreters
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def summary(self, print_to_stdout=False):
"""Print a text summary of the model.
        :returns: a string containing the summary
"""
summary_text = "Model to find the causal effect of treatment {0} on outcome {1}".format(
self._treatment, self._outcome
)
self.logger.info(summary_text)
if print_to_stdout:
print(summary_text)
return summary_text
def refute_graph(self, k=1, independence_test=None, independence_constraints=None):
"""
        Check if the dependencies in the input graph match the dataset -
( X ⫫ Y ) | Z
where X and Y are considered as singleton sets currently
Z can have multiple variables
:param k: number of covariates in set Z
        :param independence_test: dictionary containing methods to test conditional independence in data
        :param independence_constraints: list of implications to be tested, input by the user in the format
        [(x,y,(z1,z2)),
        (x,y, (z3,))
        ]
        :returns: an instance of GraphRefuter class
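        Example (an illustrative sketch; the test names below are assumptions about the
        conditional-independence tests available in the installed version)::
            refutation = model.refute_graph(
                k=1,
                independence_test={
                    "test_for_continuous": "partial_correlation",
                    "test_for_discrete": "conditional_mutual_information",
                },
            )
            print(refutation)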
"""
if independence_test is not None:
test_for_continuous = independence_test["test_for_continuous"]
test_for_discrete = independence_test["test_for_discrete"]
refuter = GraphRefuter(
data=self._data, method_name_continuous=test_for_continuous, method_name_discrete=test_for_discrete
)
else:
refuter = GraphRefuter(data=self._data)
if independence_constraints is None:
all_nodes = list(self._graph.get_all_nodes(include_unobserved=False))
num_nodes = len(all_nodes)
array_indices = list(range(0, num_nodes))
all_possible_combinations = list(
combinations(array_indices, 2)
) # Generating sets of indices of size 2 for different x and y
conditional_independences = []
            self.logger.info("The following conditional independences are true for the input graph")
for combination in all_possible_combinations: # Iterate over the unique 2-sized sets [x,y]
i = combination[0]
j = combination[1]
a = all_nodes[i]
b = all_nodes[j]
if i < j:
temp_arr = all_nodes[:i] + all_nodes[i + 1 : j] + all_nodes[j + 1 :]
else:
temp_arr = all_nodes[:j] + all_nodes[j + 1 : i] + all_nodes[i + 1 :]
k_sized_lists = list(combinations(temp_arr, k))
for k_list in k_sized_lists:
if self._graph.check_dseparation([str(a)], [str(b)], k_list) == True:
self.logger.info(" %s and %s are CI given %s ", a, b, k_list)
conditional_independences.append([a, b, k_list])
independence_constraints = conditional_independences
res = refuter.refute_model(independence_constraints=independence_constraints)
self.logger.info(refuter._refutation_passed)
return res
| """ Module containing the main model class for the dowhy package.
"""
import logging
from itertools import combinations
from sympy import init_printing
import dowhy.causal_estimators as causal_estimators
import dowhy.causal_refuters as causal_refuters
import dowhy.graph_learners as graph_learners
import dowhy.utils.cli_helpers as cli
from dowhy.causal_estimator import CausalEstimate, estimate_effect
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier import AutoIdentifier, BackdoorAdjustment, IDIdentifier
from dowhy.causal_identifier.identify_effect import EstimandType
from dowhy.causal_refuters.graph_refuter import GraphRefuter
from dowhy.utils.api import parse_state
init_printing() # To display symbolic math symbols
class CausalModel:
"""Main class for storing the causal model state."""
def __init__(
self,
data,
treatment,
outcome,
graph=None,
common_causes=None,
instruments=None,
effect_modifiers=None,
estimand_type="nonparametric-ate",
proceed_when_unidentifiable=False,
missing_nodes_as_confounders=False,
identify_vars=False,
**kwargs,
):
"""Initialize data and create a causal graph instance.
Assigns treatment and outcome variables.
Also checks and finds the common causes and instruments for treatment
and outcome.
At least one of graph, common_causes or instruments must be provided. If
none of these variables are provided, then learn_graph() can be used later.
:param data: a pandas dataframe containing treatment, outcome and other
variables.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param graph: path to DOT file containing a DAG or a string containing
a DAG specification in DOT format
        :param common_causes: names of common causes of treatment and outcome. Only used when graph is None.
:param instruments: names of instrumental variables for the effect of
treatment on outcome. Only used when graph is None.
:param effect_modifiers: names of variables that can modify the treatment effect. If not provided, then the causal graph is used to find the effect modifiers. Estimators will return multiple different estimates based on each value of effect_modifiers.
:param estimand_type: the type of estimand requested (currently only "nonparametric-ate" is supported). In the future, may support other specific parametric forms of identification.
:param proceed_when_unidentifiable: does the identification proceed by ignoring potential unobserved confounders. Binary flag.
:param missing_nodes_as_confounders: Binary flag indicating whether variables in the dataframe that are not included in the causal graph, should be automatically included as confounder nodes.
        :param identify_vars: Variable deciding whether to compute common causes, instruments and effect modifiers while initializing the class. identify_vars should be set to False when the user is providing common_causes, instruments or effect modifiers on their own (otherwise the identify_vars code can override the user-provided values). It also has no effect if no graph is given.
:returns: an instance of CausalModel class
"""
self._data = data
self._treatment = parse_state(treatment)
self._outcome = parse_state(outcome)
self._effect_modifiers = parse_state(effect_modifiers)
self._estimand_type = estimand_type
self._proceed_when_unidentifiable = proceed_when_unidentifiable
self._missing_nodes_as_confounders = missing_nodes_as_confounders
self.logger = logging.getLogger(__name__)
if graph is None:
self.logger.warning("Causal Graph not provided. DoWhy will construct a graph based on data inputs.")
self._common_causes = parse_state(common_causes)
self._instruments = parse_state(instruments)
if common_causes is not None and instruments is not None:
self._graph = CausalGraph(
self._treatment,
self._outcome,
common_cause_names=self._common_causes,
instrument_names=self._instruments,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
elif common_causes is not None:
self._graph = CausalGraph(
self._treatment,
self._outcome,
common_cause_names=self._common_causes,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
elif instruments is not None:
self._graph = CausalGraph(
self._treatment,
self._outcome,
instrument_names=self._instruments,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
else:
self.logger.warning(
"Relevant variables to build causal graph not provided. You may want to use the learn_graph() function to construct the causal graph."
)
self._graph = CausalGraph(
self._treatment,
self._outcome,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
else:
self.init_graph(graph=graph, identify_vars=identify_vars)
self._other_variables = kwargs
self.summary()
def init_graph(self, graph, identify_vars):
"""
Initialize self._graph using graph provided by the user.
"""
# Create causal graph object
self._graph = CausalGraph(
self._treatment,
self._outcome,
graph,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
missing_nodes_as_confounders=self._missing_nodes_as_confounders,
)
if identify_vars:
self._common_causes = self._graph.get_common_causes(self._treatment, self._outcome)
self._instruments = self._graph.get_instruments(self._treatment, self._outcome)
# Sometimes, effect modifiers from the graph may not match those provided by the user.
# (Because some effect modifiers may also be common causes)
# In such cases, the user-provided modifiers are used.
# If no effect modifiers are provided, then the ones from the graph are used.
if self._effect_modifiers is None or not self._effect_modifiers:
self._effect_modifiers = self._graph.get_effect_modifiers(self._treatment, self._outcome)
def get_common_causes(self):
self._common_causes = self._graph.get_common_causes(self._treatment, self._outcome)
return self._common_causes
def get_instruments(self):
self._instruments = self._graph.get_instruments(self._treatment, self._outcome)
return self._instruments
def get_effect_modifiers(self):
self._effect_modifiers = self._graph.get_effect_modifiers(self._treatment, self._outcome)
return self._effect_modifiers
def learn_graph(self, method_name="cdt.causality.graph.LiNGAM", *args, **kwargs):
"""
Learn causal graph from the data. This function takes the method name as input and initializes the
causal graph object using the learnt graph.
:param self: instance of the CausalModel class (or its subclass)
:param method_name: Exact method name of the object to be imported from the concerned library.
:returns: an instance of the CausalGraph class initialized with the learned graph.
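        Example (an illustrative sketch; it requires the optional ``cdt`` dependency used by
        this method and a CausalModel constructed without a graph)::
            learned_graph = model.learn_graph(method_name="cdt.causality.graph.LiNGAM")
            model.view_model()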
"""
# Import causal discovery class
str_arr = method_name.split(".", maxsplit=1)
library_name = str_arr[0]
causal_discovery_class = graph_learners.get_discovery_class_object(library_name)
model = causal_discovery_class(self._data, method_name, *args, **kwargs)
graph = model.learn_graph()
# Initialize causal graph object
self.init_graph(graph=graph)
return self._graph
def identify_effect(
self, estimand_type=None, method_name="default", proceed_when_unidentifiable=None, optimize_backdoor=False
):
"""Identify the causal effect to be estimated, using properties of the causal graph.
:param method_name: Method name for identification algorithm. ("id-algorithm" or "default")
:param proceed_when_unidentifiable: Binary flag indicating whether identification should proceed in the presence of (potential) unobserved confounders.
:returns: a probability expression (estimand) for the causal effect if identified, else NULL
"""
if proceed_when_unidentifiable is None:
proceed_when_unidentifiable = self._proceed_when_unidentifiable
if estimand_type is None:
estimand_type = self._estimand_type
estimand_type = EstimandType(estimand_type)
if method_name == "id-algorithm":
identifier = IDIdentifier()
else:
identifier = AutoIdentifier(
estimand_type=estimand_type,
backdoor_adjustment=BackdoorAdjustment(method_name),
proceed_when_unidentifiable=proceed_when_unidentifiable,
optimize_backdoor=optimize_backdoor,
)
identified_estimand = identifier.identify_effect(
graph=self._graph, treatment_name=self._treatment, outcome_name=self._outcome
)
self.identifier = identifier
return identified_estimand
def estimate_effect(
self,
identified_estimand,
method_name=None,
control_value=0,
treatment_value=1,
test_significance=None,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units="ate",
effect_modifiers=None,
fit_estimator=True,
method_params=None,
):
"""Estimate the identified causal effect.
Currently requires an explicit method name to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". Following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param method_name: name of the estimation method to be used.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
        :param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
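        Example (an illustrative sketch of calling an EconML estimator through this method;
        the particular init_params shown are assumptions that depend on the installed
        EconML version)::
            from sklearn.ensemble import GradientBoostingRegressor
            from sklearn.linear_model import LassoCV
            estimate = model.estimate_effect(
                identified_estimand,
                method_name="backdoor.econml.dml.DML",
                method_params={
                    "init_params": {
                        "model_y": GradientBoostingRegressor(),
                        "model_t": GradientBoostingRegressor(),
                        "model_final": LassoCV(fit_intercept=False),
                    },
                    "fit_params": {},
                },
            )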
"""
if effect_modifiers is None or len(effect_modifiers) == 0:
effect_modifiers = self._graph.get_effect_modifiers(self._treatment, self._outcome)
if method_name is None:
# TODO add propensity score as default backdoor method, iv as default iv method, add an informational message to show which method has been selected.
pass
else:
# TODO add dowhy as a prefix to all dowhy estimators
num_components = len(method_name.split("."))
str_arr = method_name.split(".", maxsplit=1)
identifier_name = str_arr[0]
estimator_name = str_arr[1]
            # This is done as all dowhy estimators have two parts and external ones have three or more parts
if num_components > 2:
estimator_package = estimator_name.split(".")[0]
if estimator_package == "dowhy": # For updated dowhy methods
estimator_method = estimator_name.split(".", maxsplit=1)[
1
] # discard dowhy from the full package name
causal_estimator_class = causal_estimators.get_class_object(estimator_method + "_estimator")
else:
third_party_estimator_package = estimator_package
causal_estimator_class = causal_estimators.get_class_object(
third_party_estimator_package, estimator_name
)
if method_params is None:
method_params = {}
# Define the third-party estimation method to be used
method_params[third_party_estimator_package + "_methodname"] = estimator_name
else: # For older dowhy methods
self.logger.info(estimator_name)
# Process the dowhy estimators
causal_estimator_class = causal_estimators.get_class_object(estimator_name + "_estimator")
if method_params is not None and (num_components <= 2 or estimator_package == "dowhy"):
extra_args = method_params.get("init_params", {})
else:
extra_args = {}
if method_params is None:
method_params = {}
identified_estimand.set_identifier_method(identifier_name)
causal_estimator = causal_estimator_class(
self._data,
identified_estimand,
self._treatment,
self._outcome, # names of treatment and outcome
control_value=control_value,
treatment_value=treatment_value,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
**method_params,
**extra_args,
)
return estimate_effect(
self._treatment,
self._outcome,
identified_estimand,
identifier_name,
causal_estimator,
control_value,
treatment_value,
test_significance,
evaluate_effect_strength,
confidence_intervals,
target_units,
effect_modifiers,
fit_estimator,
method_params,
)
def do(self, x, identified_estimand, method_name=None, fit_estimator=True, method_params=None):
"""Do operator for estimating values of the outcome after intervening on treatment.
:param x: interventional value of the treatment variable
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param method_name: any of the estimation method to be used. See docs
for estimate_effect method for a list of supported estimation methods.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to compute the do-operation on new
data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method.
:returns: an instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
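        Example (an illustrative sketch; it assumes a binary treatment and an estimator,
        such as backdoor.linear_regression, that implements the do-operation)::
            outcome_under_treatment = model.do(
                x=1,
                identified_estimand=identified_estimand,
                method_name="backdoor.linear_regression",
                method_params={},
            )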
"""
if method_name is None:
pass
else:
str_arr = method_name.split(".", maxsplit=1)
identifier_name = str_arr[0]
estimator_name = str_arr[1]
identified_estimand.set_identifier_method(identifier_name)
causal_estimator_class = causal_estimators.get_class_object(estimator_name + "_estimator")
# Check if estimator's target estimand is identified
if identified_estimand.estimands[identifier_name] is None:
self.logger.warning("No valid identified estimand for using instrumental variables method")
estimate = CausalEstimate(None, None, None, None, None)
else:
if fit_estimator:
# Note that while the name of the variable is the same,
# "self.causal_estimator", this estimator takes in less
# parameters than the same from the
# estimate_effect code. It is not advisable to use the
# estimator from this function to call estimate_effect
# with fit_estimator=False.
self.causal_estimator = causal_estimator_class(
self._data,
identified_estimand,
self._treatment,
self._outcome,
test_significance=False,
**method_params,
)
else:
# Estimator had been computed in a previous call
assert self.causal_estimator is not None
try:
estimate = self.causal_estimator.do(x)
except NotImplementedError:
self.logger.error("Do Operation not implemented or not supported for this estimator.")
raise NotImplementedError
return estimate
def refute_estimate(self, estimand, estimate, method_name=None, show_progress_bar=False, **kwargs):
"""Refute an estimated causal effect.
If method_name is provided, uses the provided method. In the future, we may support automatic selection of suitable refutation tests. Following refutation methods are supported.
* Adding a randomly-generated confounder: "random_common_cause"
* Adding a confounder that is associated with both treatment and outcome: "add_unobserved_common_cause"
        * Replacing the treatment with a placebo (random) variable: "placebo_treatment_refuter"
* Removing a random subset of the data: "data_subset_refuter"
:param estimand: target estimand, an instance of the IdentifiedEstimand class (typically, the output of identify_effect)
:param estimate: estimate to be refuted, an instance of the CausalEstimate class (typically, the output of estimate_effect)
:param method_name: name of the refutation method
:param show_progress_bar: Boolean flag on whether to show a progress bar
:param kwargs: (optional) additional arguments that are passed directly to the refutation method. Can specify a random seed here to ensure reproducible results ('random_seed' parameter). For method-specific parameters, consult the documentation for the specific method. All refutation methods are in the causal_refuters subpackage.
:returns: an instance of the RefuteResult class
"""
if estimate is None or estimate.value is None:
self.logger.error("Aborting refutation! No estimate is provided.")
raise ValueError("Aborting refutation! No valid estimate is provided.")
if method_name is None:
pass
else:
refuter_class = causal_refuters.get_class_object(method_name)
refuter = refuter_class(self._data, identified_estimand=estimand, estimate=estimate, **kwargs)
res = refuter.refute_estimate(show_progress_bar)
return res
def view_model(self, layout="dot", size=(8, 6), file_name="causal_model"):
"""View the causal DAG.
:param layout: string specifying the layout of the graph.
:param size: tuple (x, y) specifying the width and height of the figure in inches.
:param file_name: string specifying the file name for the saved causal graph png.
:returns: a visualization of the graph
"""
self._graph.view_graph(layout, size, file_name)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal model.
:param method_name: method used for interpreting the model. If None,
then default interpreter is chosen that describes the model summary and shows the associated causal graph.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
self.summary(print_to_stdout=True)
self.view_model()
return
method_name_arr = parse_state(method_name)
import dowhy.interpreters as interpreters
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def summary(self, print_to_stdout=False):
"""Print a text summary of the model.
        :returns: a string containing the summary
"""
summary_text = "Model to find the causal effect of treatment {0} on outcome {1}".format(
self._treatment, self._outcome
)
self.logger.info(summary_text)
if print_to_stdout:
print(summary_text)
return summary_text
def refute_graph(self, k=1, independence_test=None, independence_constraints=None):
"""
        Check if the dependencies in the input graph match the dataset -
( X ⫫ Y ) | Z
where X and Y are considered as singleton sets currently
Z can have multiple variables
:param k: number of covariates in set Z
        :param independence_test: dictionary containing methods to test conditional independence in data
        :param independence_constraints: list of implications to be tested, input by the user in the format
[(x,y,(z1,z2)),
(x,y, (z3,))
]
        :returns: an instance of GraphRefuter class
"""
if independence_test is not None:
test_for_continuous = independence_test["test_for_continuous"]
test_for_discrete = independence_test["test_for_discrete"]
refuter = GraphRefuter(
data=self._data, method_name_continuous=test_for_continuous, method_name_discrete=test_for_discrete
)
else:
refuter = GraphRefuter(data=self._data)
if independence_constraints is None:
all_nodes = list(self._graph.get_all_nodes(include_unobserved=False))
num_nodes = len(all_nodes)
array_indices = list(range(0, num_nodes))
all_possible_combinations = list(
combinations(array_indices, 2)
) # Generating sets of indices of size 2 for different x and y
conditional_independences = []
self.logger.info("The followed conditional independences are true for the input graph")
for combination in all_possible_combinations: # Iterate over the unique 2-sized sets [x,y]
i = combination[0]
j = combination[1]
a = all_nodes[i]
b = all_nodes[j]
if i < j:
temp_arr = all_nodes[:i] + all_nodes[i + 1 : j] + all_nodes[j + 1 :]
else:
temp_arr = all_nodes[:j] + all_nodes[j + 1 : i] + all_nodes[i + 1 :]
k_sized_lists = list(combinations(temp_arr, k))
for k_list in k_sized_lists:
if self._graph.check_dseparation([str(a)], [str(b)], k_list) == True:
self.logger.info(" %s and %s are CI given %s ", a, b, k_list)
conditional_independences.append([a, b, k_list])
independence_constraints = conditional_independences
res = refuter.refute_model(independence_constraints=independence_constraints)
self.logger.info(refuter._refutation_passed)
return res
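    # --- Illustrative usage sketch (editor's addition, not part of the original
    # module). The conditional-independence test names below are assumptions about
    # what GraphRefuter accepts; consult its documentation for the exact names.
    #
    #     graph_refutation = model.refute_graph(
    #         k=1,
    #         independence_test={
    #             "test_for_continuous": "partial_correlation",
    #             "test_for_discrete": "conditional_mutual_information",
    #         },
    #     )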
| andresmor-ms | 2044d216c322a4b32c6eadce5da7d83463f19c2f | 05bfa49dacf0061988c96c6f3e3756219df5422a | there is a repetition of parameters, e.g., "test_signficance", "treatment_value" in both estimator and estimate_effect method. We need to decide where each parameter goes. I suggest that `target_units`, `control_value`, `treatment_value` belong to the `effect` method of an Estimator. One can call effect multiple times setting different values of these without changing the fitted estimator.
`` | amit-sharma | 298 |
py-why/dowhy | 693 | Functional api/estimate effect function | #### Estimate Effect function
* Refactors the estimate effect into a separate function to keep backwards compatibility
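An illustrative sketch of the refactored call (editor's addition, not an authoritative API reference): the argument order mirrors the call that `CausalModel.estimate_effect` makes into the new function in this PR; `causal_estimator` stands for an already constructed estimator instance, and every other name is a placeholder for the corresponding `estimate_effect` argument.

```python
from dowhy.causal_estimator import estimate_effect

estimate = estimate_effect(
    treatment, outcome, identified_estimand, identifier_name, causal_estimator,
    control_value, treatment_value, test_significance, evaluate_effect_strength,
    confidence_intervals, target_units, effect_modifiers, fit_estimator, method_params,
)
```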
#### TODO (future PRs):
* Add `fit(...)` method to estimators - Move data related parameters from the constructor to the `fit(...)` method
* Refactor code to avoid `**kwargs` in `__init__(...)` constructors
| null | 2022-10-18 15:49:21+00:00 | 2022-10-25 17:02:02+00:00 | dowhy/causal_model.py | """ Module containing the main model class for the dowhy package.
"""
import logging
from itertools import combinations
from sympy import init_printing
import dowhy.causal_estimators as causal_estimators
import dowhy.causal_refuters as causal_refuters
import dowhy.graph_learners as graph_learners
import dowhy.utils.cli_helpers as cli
from dowhy.causal_estimator import CausalEstimate
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier import AutoIdentifier, BackdoorAdjustment, IDIdentifier
from dowhy.causal_identifier.identify_effect import EstimandType
from dowhy.causal_refuters.graph_refuter import GraphRefuter
from dowhy.utils.api import parse_state
init_printing() # To display symbolic math symbols
class CausalModel:
"""Main class for storing the causal model state."""
def __init__(
self,
data,
treatment,
outcome,
graph=None,
common_causes=None,
instruments=None,
effect_modifiers=None,
estimand_type="nonparametric-ate",
proceed_when_unidentifiable=False,
missing_nodes_as_confounders=False,
identify_vars=False,
**kwargs,
):
"""Initialize data and create a causal graph instance.
Assigns treatment and outcome variables.
Also checks and finds the common causes and instruments for treatment
and outcome.
At least one of graph, common_causes or instruments must be provided. If
none of these variables are provided, then learn_graph() can be used later.
:param data: a pandas dataframe containing treatment, outcome and other
variables.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param graph: path to DOT file containing a DAG or a string containing
a DAG specification in DOT format
        :param common_causes: names of common causes of treatment and outcome. Only used when graph is None.
:param instruments: names of instrumental variables for the effect of
treatment on outcome. Only used when graph is None.
:param effect_modifiers: names of variables that can modify the treatment effect. If not provided, then the causal graph is used to find the effect modifiers. Estimators will return multiple different estimates based on each value of effect_modifiers.
:param estimand_type: the type of estimand requested (currently only "nonparametric-ate" is supported). In the future, may support other specific parametric forms of identification.
        :param proceed_when_unidentifiable: Binary flag indicating whether identification should proceed by ignoring potential unobserved confounders.
:param missing_nodes_as_confounders: Binary flag indicating whether variables in the dataframe that are not included in the causal graph, should be automatically included as confounder nodes.
        :param identify_vars: Variable deciding whether to compute common causes, instruments and effect modifiers while initializing the class. identify_vars should be set to False when the user is providing common_causes, instruments or effect modifiers on their own (otherwise the identify_vars code can override the user-provided values). Also, it has no effect if no graph is given.
:returns: an instance of CausalModel class
"""
self._data = data
self._treatment = parse_state(treatment)
self._outcome = parse_state(outcome)
self._effect_modifiers = parse_state(effect_modifiers)
self._estimand_type = estimand_type
self._proceed_when_unidentifiable = proceed_when_unidentifiable
self._missing_nodes_as_confounders = missing_nodes_as_confounders
self.logger = logging.getLogger(__name__)
if graph is None:
self.logger.warning("Causal Graph not provided. DoWhy will construct a graph based on data inputs.")
self._common_causes = parse_state(common_causes)
self._instruments = parse_state(instruments)
if common_causes is not None and instruments is not None:
self._graph = CausalGraph(
self._treatment,
self._outcome,
common_cause_names=self._common_causes,
instrument_names=self._instruments,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
elif common_causes is not None:
self._graph = CausalGraph(
self._treatment,
self._outcome,
common_cause_names=self._common_causes,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
elif instruments is not None:
self._graph = CausalGraph(
self._treatment,
self._outcome,
instrument_names=self._instruments,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
else:
self.logger.warning(
"Relevant variables to build causal graph not provided. You may want to use the learn_graph() function to construct the causal graph."
)
self._graph = CausalGraph(
self._treatment,
self._outcome,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
else:
self.init_graph(graph=graph, identify_vars=identify_vars)
self._other_variables = kwargs
self.summary()
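    # --- Illustrative construction sketch (editor's addition, not part of the
    # original module), using dowhy's synthetic linear dataset helper:
    #
    #     import dowhy.datasets
    #     data = dowhy.datasets.linear_dataset(
    #         beta=10, num_common_causes=5, num_samples=1000, treatment_is_binary=True
    #     )
    #     model = CausalModel(
    #         data=data["df"],
    #         treatment=data["treatment_name"],
    #         outcome=data["outcome_name"],
    #         graph=data["gml_graph"],
    #     )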
def init_graph(self, graph, identify_vars):
"""
Initialize self._graph using graph provided by the user.
"""
# Create causal graph object
self._graph = CausalGraph(
self._treatment,
self._outcome,
graph,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
missing_nodes_as_confounders=self._missing_nodes_as_confounders,
)
if identify_vars:
self._common_causes = self._graph.get_common_causes(self._treatment, self._outcome)
self._instruments = self._graph.get_instruments(self._treatment, self._outcome)
# Sometimes, effect modifiers from the graph may not match those provided by the user.
# (Because some effect modifiers may also be common causes)
# In such cases, the user-provided modifiers are used.
# If no effect modifiers are provided, then the ones from the graph are used.
if self._effect_modifiers is None or not self._effect_modifiers:
self._effect_modifiers = self._graph.get_effect_modifiers(self._treatment, self._outcome)
def get_common_causes(self):
self._common_causes = self._graph.get_common_causes(self._treatment, self._outcome)
return self._common_causes
def get_instruments(self):
self._instruments = self._graph.get_instruments(self._treatment, self._outcome)
return self._instruments
def get_effect_modifiers(self):
self._effect_modifiers = self._graph.get_effect_modifiers(self._treatment, self._outcome)
return self._effect_modifiers
def learn_graph(self, method_name="cdt.causality.graph.LiNGAM", *args, **kwargs):
"""
Learn causal graph from the data. This function takes the method name as input and initializes the
causal graph object using the learnt graph.
:param self: instance of the CausalModel class (or its subclass)
:param method_name: Exact method name of the object to be imported from the concerned library.
:returns: an instance of the CausalGraph class initialized with the learned graph.
"""
# Import causal discovery class
str_arr = method_name.split(".", maxsplit=1)
library_name = str_arr[0]
causal_discovery_class = graph_learners.get_discovery_class_object(library_name)
model = causal_discovery_class(self._data, method_name, *args, **kwargs)
graph = model.learn_graph()
# Initialize causal graph object
self.init_graph(graph=graph)
return self._graph
def identify_effect(
self, estimand_type=None, method_name="default", proceed_when_unidentifiable=None, optimize_backdoor=False
):
"""Identify the causal effect to be estimated, using properties of the causal graph.
:param method_name: Method name for identification algorithm. ("id-algorithm" or "default")
:param proceed_when_unidentifiable: Binary flag indicating whether identification should proceed in the presence of (potential) unobserved confounders.
:returns: a probability expression (estimand) for the causal effect if identified, else NULL
"""
if proceed_when_unidentifiable is None:
proceed_when_unidentifiable = self._proceed_when_unidentifiable
if estimand_type is None:
estimand_type = self._estimand_type
estimand_type = EstimandType(estimand_type)
if method_name == "id-algorithm":
identifier = IDIdentifier()
else:
identifier = AutoIdentifier(
estimand_type=estimand_type,
backdoor_adjustment=BackdoorAdjustment(method_name),
proceed_when_unidentifiable=proceed_when_unidentifiable,
optimize_backdoor=optimize_backdoor,
)
identified_estimand = identifier.identify_effect(
graph=self._graph, treatment_name=self._treatment, outcome_name=self._outcome
)
self.identifier = identifier
return identified_estimand
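    # --- Illustrative usage sketch (editor's addition, not part of the original
    # module):
    #
    #     identified_estimand = model.identify_effect(proceed_when_unidentifiable=True)
    #     print(identified_estimand)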
def estimate_effect(
self,
identified_estimand,
method_name=None,
control_value=0,
treatment_value=1,
test_significance=None,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units="ate",
effect_modifiers=None,
fit_estimator=True,
method_params=None,
):
"""Estimate the identified causal effect.
Currently requires an explicit method name to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". Following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param method_name: name of the estimation method to be used.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
        :param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if effect_modifiers is None:
if self._effect_modifiers is None or len(self._effect_modifiers) == 0:
effect_modifiers = self.get_effect_modifiers()
else:
effect_modifiers = self._effect_modifiers
if fit_estimator:
if method_name is None:
# TODO add propensity score as default backdoor method, iv as default iv method, add an informational message to show which method has been selected.
pass
else:
# TODO add dowhy as a prefix to all dowhy estimators
num_components = len(method_name.split("."))
str_arr = method_name.split(".", maxsplit=1)
identifier_name = str_arr[0]
estimator_name = str_arr[1]
identified_estimand.set_identifier_method(identifier_name)
# This is done as all dowhy estimators have two parts and external ones have two or more parts
if num_components > 2:
estimator_package = estimator_name.split(".")[0]
if estimator_package == "dowhy": # For updated dowhy methods
estimator_method = estimator_name.split(".", maxsplit=1)[
1
] # discard dowhy from the full package name
causal_estimator_class = causal_estimators.get_class_object(estimator_method + "_estimator")
else:
third_party_estimator_package = estimator_package
causal_estimator_class = causal_estimators.get_class_object(
third_party_estimator_package, estimator_name
)
if method_params is None:
method_params = {}
# Define the third-party estimation method to be used
method_params[third_party_estimator_package + "_methodname"] = estimator_name
else: # For older dowhy methods
self.logger.info(estimator_name)
# Process the dowhy estimators
causal_estimator_class = causal_estimators.get_class_object(estimator_name + "_estimator")
if identified_estimand.no_directed_path:
self.logger.warning("No directed path from {0} to {1}.".format(self._treatment, self._outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
self.logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
else:
if method_params is not None and (num_components <= 2 or estimator_package == "dowhy"):
extra_args = method_params.get("init_params", {})
else:
extra_args = {}
if method_params is None:
method_params = {}
self.causal_estimator = causal_estimator_class(
self._data,
identified_estimand,
self._treatment,
self._outcome, # names of treatment and outcome
control_value=control_value,
treatment_value=treatment_value,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
**method_params,
**extra_args,
)
else:
# Estimator had been computed in a previous call
assert self.causal_estimator is not None
causal_estimator_class = self.causal_estimator.__class__
self.causal_estimator.update_input(treatment_value, control_value, target_units)
estimate = self.causal_estimator.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
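    # --- Illustrative usage sketch (editor's addition, not part of the original
    # module). Estimating the ATE with propensity score matching on the backdoor
    # estimand identified earlier (assumes a binary treatment):
    #
    #     estimate = model.estimate_effect(
    #         identified_estimand,
    #         method_name="backdoor.propensity_score_matching",
    #         target_units="ate",
    #     )
    #     print(estimate.value)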
def do(self, x, identified_estimand, method_name=None, fit_estimator=True, method_params=None):
"""Do operator for estimating values of the outcome after intervening on treatment.
:param x: interventional value of the treatment variable
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param method_name: any of the estimation method to be used. See docs
for estimate_effect method for a list of supported estimation methods.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to compute the do-operation on new
data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method.
:returns: an instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if method_name is None:
pass
else:
str_arr = method_name.split(".", maxsplit=1)
identifier_name = str_arr[0]
estimator_name = str_arr[1]
identified_estimand.set_identifier_method(identifier_name)
causal_estimator_class = causal_estimators.get_class_object(estimator_name + "_estimator")
# Check if estimator's target estimand is identified
if identified_estimand.estimands[identifier_name] is None:
self.logger.warning("No valid identified estimand for using instrumental variables method")
estimate = CausalEstimate(None, None, None, None, None)
else:
if fit_estimator:
# Note that while the name of the variable is the same,
# "self.causal_estimator", this estimator takes in less
# parameters than the same from the
# estimate_effect code. It is not advisable to use the
# estimator from this function to call estimate_effect
# with fit_estimator=False.
self.causal_estimator = causal_estimator_class(
self._data,
identified_estimand,
self._treatment,
self._outcome,
test_significance=False,
**method_params,
)
else:
# Estimator had been computed in a previous call
assert self.causal_estimator is not None
try:
estimate = self.causal_estimator.do(x)
except NotImplementedError:
self.logger.error("Do Operation not implemented or not supported for this estimator.")
raise NotImplementedError
return estimate
def refute_estimate(self, estimand, estimate, method_name=None, show_progress_bar=False, **kwargs):
"""Refute an estimated causal effect.
        If method_name is provided, uses the provided method. In the future, we may support automatic selection of suitable refutation tests. The following refutation methods are supported.
* Adding a randomly-generated confounder: "random_common_cause"
* Adding a confounder that is associated with both treatment and outcome: "add_unobserved_common_cause"
            * Replacing the treatment with a placebo (random) variable: "placebo_treatment_refuter"
* Removing a random subset of the data: "data_subset_refuter"
:param estimand: target estimand, an instance of the IdentifiedEstimand class (typically, the output of identify_effect)
:param estimate: estimate to be refuted, an instance of the CausalEstimate class (typically, the output of estimate_effect)
:param method_name: name of the refutation method
:param show_progress_bar: Boolean flag on whether to show a progress bar
:param kwargs: (optional) additional arguments that are passed directly to the refutation method. Can specify a random seed here to ensure reproducible results ('random_seed' parameter). For method-specific parameters, consult the documentation for the specific method. All refutation methods are in the causal_refuters subpackage.
:returns: an instance of the RefuteResult class
"""
if estimate is None or estimate.value is None:
self.logger.error("Aborting refutation! No estimate is provided.")
raise ValueError("Aborting refutation! No valid estimate is provided.")
if method_name is None:
pass
else:
refuter_class = causal_refuters.get_class_object(method_name)
refuter = refuter_class(self._data, identified_estimand=estimand, estimate=estimate, **kwargs)
res = refuter.refute_estimate(show_progress_bar)
return res
def view_model(self, layout="dot", size=(8, 6), file_name="causal_model"):
"""View the causal DAG.
:param layout: string specifying the layout of the graph.
:param size: tuple (x, y) specifying the width and height of the figure in inches.
:param file_name: string specifying the file name for the saved causal graph png.
:returns: a visualization of the graph
"""
self._graph.view_graph(layout, size, file_name)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal model.
:param method_name: method used for interpreting the model. If None,
then default interpreter is chosen that describes the model summary and shows the associated causal graph.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
self.summary(print_to_stdout=True)
self.view_model()
return
method_name_arr = parse_state(method_name)
import dowhy.interpreters as interpreters
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def summary(self, print_to_stdout=False):
"""Print a text summary of the model.
        :returns: a string containing the summary
"""
summary_text = "Model to find the causal effect of treatment {0} on outcome {1}".format(
self._treatment, self._outcome
)
self.logger.info(summary_text)
if print_to_stdout:
print(summary_text)
return summary_text
def refute_graph(self, k=1, independence_test=None, independence_constraints=None):
"""
        Check if the dependencies in the input graph match the dataset -
( X ⫫ Y ) | Z
where X and Y are considered as singleton sets currently
Z can have multiple variables
:param k: number of covariates in set Z
        :param independence_test: dictionary containing methods to test conditional independence in data
        :param independence_constraints: list of implications to be tested, input by the user in the format
[(x,y,(z1,z2)),
(x,y, (z3,))
]
        :returns: an instance of GraphRefuter class
"""
if independence_test is not None:
test_for_continuous = independence_test["test_for_continuous"]
test_for_discrete = independence_test["test_for_discrete"]
refuter = GraphRefuter(
data=self._data, method_name_continuous=test_for_continuous, method_name_discrete=test_for_discrete
)
else:
refuter = GraphRefuter(data=self._data)
if independence_constraints is None:
all_nodes = list(self._graph.get_all_nodes(include_unobserved=False))
num_nodes = len(all_nodes)
array_indices = list(range(0, num_nodes))
all_possible_combinations = list(
combinations(array_indices, 2)
) # Generating sets of indices of size 2 for different x and y
conditional_independences = []
self.logger.info("The followed conditional independences are true for the input graph")
for combination in all_possible_combinations: # Iterate over the unique 2-sized sets [x,y]
i = combination[0]
j = combination[1]
a = all_nodes[i]
b = all_nodes[j]
if i < j:
temp_arr = all_nodes[:i] + all_nodes[i + 1 : j] + all_nodes[j + 1 :]
else:
temp_arr = all_nodes[:j] + all_nodes[j + 1 : i] + all_nodes[i + 1 :]
k_sized_lists = list(combinations(temp_arr, k))
for k_list in k_sized_lists:
if self._graph.check_dseparation([str(a)], [str(b)], k_list) == True:
self.logger.info(" %s and %s are CI given %s ", a, b, k_list)
conditional_independences.append([a, b, k_list])
independence_constraints = conditional_independences
res = refuter.refute_model(independence_constraints=independence_constraints)
self.logger.info(refuter._refutation_passed)
return res
| """ Module containing the main model class for the dowhy package.
"""
import logging
from itertools import combinations
from sympy import init_printing
import dowhy.causal_estimators as causal_estimators
import dowhy.causal_refuters as causal_refuters
import dowhy.graph_learners as graph_learners
import dowhy.utils.cli_helpers as cli
from dowhy.causal_estimator import CausalEstimate, estimate_effect
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier import AutoIdentifier, BackdoorAdjustment, IDIdentifier
from dowhy.causal_identifier.identify_effect import EstimandType
from dowhy.causal_refuters.graph_refuter import GraphRefuter
from dowhy.utils.api import parse_state
init_printing() # To display symbolic math symbols
class CausalModel:
"""Main class for storing the causal model state."""
def __init__(
self,
data,
treatment,
outcome,
graph=None,
common_causes=None,
instruments=None,
effect_modifiers=None,
estimand_type="nonparametric-ate",
proceed_when_unidentifiable=False,
missing_nodes_as_confounders=False,
identify_vars=False,
**kwargs,
):
"""Initialize data and create a causal graph instance.
Assigns treatment and outcome variables.
Also checks and finds the common causes and instruments for treatment
and outcome.
At least one of graph, common_causes or instruments must be provided. If
none of these variables are provided, then learn_graph() can be used later.
:param data: a pandas dataframe containing treatment, outcome and other
variables.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param graph: path to DOT file containing a DAG or a string containing
a DAG specification in DOT format
        :param common_causes: names of common causes of treatment and outcome. Only used when graph is None.
:param instruments: names of instrumental variables for the effect of
treatment on outcome. Only used when graph is None.
:param effect_modifiers: names of variables that can modify the treatment effect. If not provided, then the causal graph is used to find the effect modifiers. Estimators will return multiple different estimates based on each value of effect_modifiers.
:param estimand_type: the type of estimand requested (currently only "nonparametric-ate" is supported). In the future, may support other specific parametric forms of identification.
        :param proceed_when_unidentifiable: Binary flag indicating whether identification should proceed by ignoring potential unobserved confounders.
:param missing_nodes_as_confounders: Binary flag indicating whether variables in the dataframe that are not included in the causal graph, should be automatically included as confounder nodes.
        :param identify_vars: Variable deciding whether to compute common causes, instruments and effect modifiers while initializing the class. identify_vars should be set to False when the user is providing common_causes, instruments or effect modifiers on their own (otherwise the identify_vars code can override the user-provided values). Also, it has no effect if no graph is given.
:returns: an instance of CausalModel class
"""
self._data = data
self._treatment = parse_state(treatment)
self._outcome = parse_state(outcome)
self._effect_modifiers = parse_state(effect_modifiers)
self._estimand_type = estimand_type
self._proceed_when_unidentifiable = proceed_when_unidentifiable
self._missing_nodes_as_confounders = missing_nodes_as_confounders
self.logger = logging.getLogger(__name__)
if graph is None:
self.logger.warning("Causal Graph not provided. DoWhy will construct a graph based on data inputs.")
self._common_causes = parse_state(common_causes)
self._instruments = parse_state(instruments)
if common_causes is not None and instruments is not None:
self._graph = CausalGraph(
self._treatment,
self._outcome,
common_cause_names=self._common_causes,
instrument_names=self._instruments,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
elif common_causes is not None:
self._graph = CausalGraph(
self._treatment,
self._outcome,
common_cause_names=self._common_causes,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
elif instruments is not None:
self._graph = CausalGraph(
self._treatment,
self._outcome,
instrument_names=self._instruments,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
else:
self.logger.warning(
"Relevant variables to build causal graph not provided. You may want to use the learn_graph() function to construct the causal graph."
)
self._graph = CausalGraph(
self._treatment,
self._outcome,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
else:
self.init_graph(graph=graph, identify_vars=identify_vars)
self._other_variables = kwargs
self.summary()
def init_graph(self, graph, identify_vars):
"""
Initialize self._graph using graph provided by the user.
"""
# Create causal graph object
self._graph = CausalGraph(
self._treatment,
self._outcome,
graph,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
missing_nodes_as_confounders=self._missing_nodes_as_confounders,
)
if identify_vars:
self._common_causes = self._graph.get_common_causes(self._treatment, self._outcome)
self._instruments = self._graph.get_instruments(self._treatment, self._outcome)
# Sometimes, effect modifiers from the graph may not match those provided by the user.
# (Because some effect modifiers may also be common causes)
# In such cases, the user-provided modifiers are used.
# If no effect modifiers are provided, then the ones from the graph are used.
if self._effect_modifiers is None or not self._effect_modifiers:
self._effect_modifiers = self._graph.get_effect_modifiers(self._treatment, self._outcome)
def get_common_causes(self):
self._common_causes = self._graph.get_common_causes(self._treatment, self._outcome)
return self._common_causes
def get_instruments(self):
self._instruments = self._graph.get_instruments(self._treatment, self._outcome)
return self._instruments
def get_effect_modifiers(self):
self._effect_modifiers = self._graph.get_effect_modifiers(self._treatment, self._outcome)
return self._effect_modifiers
def learn_graph(self, method_name="cdt.causality.graph.LiNGAM", *args, **kwargs):
"""
Learn causal graph from the data. This function takes the method name as input and initializes the
causal graph object using the learnt graph.
:param self: instance of the CausalModel class (or its subclass)
:param method_name: Exact method name of the object to be imported from the concerned library.
:returns: an instance of the CausalGraph class initialized with the learned graph.
"""
# Import causal discovery class
str_arr = method_name.split(".", maxsplit=1)
library_name = str_arr[0]
causal_discovery_class = graph_learners.get_discovery_class_object(library_name)
model = causal_discovery_class(self._data, method_name, *args, **kwargs)
graph = model.learn_graph()
# Initialize causal graph object
self.init_graph(graph=graph)
return self._graph
def identify_effect(
self, estimand_type=None, method_name="default", proceed_when_unidentifiable=None, optimize_backdoor=False
):
"""Identify the causal effect to be estimated, using properties of the causal graph.
:param method_name: Method name for identification algorithm. ("id-algorithm" or "default")
:param proceed_when_unidentifiable: Binary flag indicating whether identification should proceed in the presence of (potential) unobserved confounders.
:returns: a probability expression (estimand) for the causal effect if identified, else NULL
"""
if proceed_when_unidentifiable is None:
proceed_when_unidentifiable = self._proceed_when_unidentifiable
if estimand_type is None:
estimand_type = self._estimand_type
estimand_type = EstimandType(estimand_type)
if method_name == "id-algorithm":
identifier = IDIdentifier()
else:
identifier = AutoIdentifier(
estimand_type=estimand_type,
backdoor_adjustment=BackdoorAdjustment(method_name),
proceed_when_unidentifiable=proceed_when_unidentifiable,
optimize_backdoor=optimize_backdoor,
)
identified_estimand = identifier.identify_effect(
graph=self._graph, treatment_name=self._treatment, outcome_name=self._outcome
)
self.identifier = identifier
return identified_estimand
def estimate_effect(
self,
identified_estimand,
method_name=None,
control_value=0,
treatment_value=1,
test_significance=None,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units="ate",
effect_modifiers=None,
fit_estimator=True,
method_params=None,
):
"""Estimate the identified causal effect.
Currently requires an explicit method name to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". Following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param method_name: name of the estimation method to be used.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
        :param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if effect_modifiers is None or len(effect_modifiers) == 0:
effect_modifiers = self._graph.get_effect_modifiers(self._treatment, self._outcome)
if method_name is None:
# TODO add propensity score as default backdoor method, iv as default iv method, add an informational message to show which method has been selected.
pass
else:
# TODO add dowhy as a prefix to all dowhy estimators
num_components = len(method_name.split("."))
str_arr = method_name.split(".", maxsplit=1)
identifier_name = str_arr[0]
estimator_name = str_arr[1]
# This is done as all dowhy estimators have two parts and external ones have two or more parts
if num_components > 2:
estimator_package = estimator_name.split(".")[0]
if estimator_package == "dowhy": # For updated dowhy methods
estimator_method = estimator_name.split(".", maxsplit=1)[
1
] # discard dowhy from the full package name
causal_estimator_class = causal_estimators.get_class_object(estimator_method + "_estimator")
else:
third_party_estimator_package = estimator_package
causal_estimator_class = causal_estimators.get_class_object(
third_party_estimator_package, estimator_name
)
if method_params is None:
method_params = {}
# Define the third-party estimation method to be used
method_params[third_party_estimator_package + "_methodname"] = estimator_name
else: # For older dowhy methods
self.logger.info(estimator_name)
# Process the dowhy estimators
causal_estimator_class = causal_estimators.get_class_object(estimator_name + "_estimator")
if method_params is not None and (num_components <= 2 or estimator_package == "dowhy"):
extra_args = method_params.get("init_params", {})
else:
extra_args = {}
if method_params is None:
method_params = {}
identified_estimand.set_identifier_method(identifier_name)
causal_estimator = causal_estimator_class(
self._data,
identified_estimand,
self._treatment,
self._outcome, # names of treatment and outcome
control_value=control_value,
treatment_value=treatment_value,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
**method_params,
**extra_args,
)
return estimate_effect(
self._treatment,
self._outcome,
identified_estimand,
identifier_name,
causal_estimator,
control_value,
treatment_value,
test_significance,
evaluate_effect_strength,
confidence_intervals,
target_units,
effect_modifiers,
fit_estimator,
method_params,
)
def do(self, x, identified_estimand, method_name=None, fit_estimator=True, method_params=None):
"""Do operator for estimating values of the outcome after intervening on treatment.
:param x: interventional value of the treatment variable
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param method_name: any of the estimation method to be used. See docs
for estimate_effect method for a list of supported estimation methods.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to compute the do-operation on new
data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method.
:returns: an instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if method_name is None:
pass
else:
str_arr = method_name.split(".", maxsplit=1)
identifier_name = str_arr[0]
estimator_name = str_arr[1]
identified_estimand.set_identifier_method(identifier_name)
causal_estimator_class = causal_estimators.get_class_object(estimator_name + "_estimator")
# Check if estimator's target estimand is identified
if identified_estimand.estimands[identifier_name] is None:
self.logger.warning("No valid identified estimand for using instrumental variables method")
estimate = CausalEstimate(None, None, None, None, None)
else:
if fit_estimator:
# Note that while the name of the variable is the same,
# "self.causal_estimator", this estimator takes in less
# parameters than the same from the
# estimate_effect code. It is not advisable to use the
# estimator from this function to call estimate_effect
# with fit_estimator=False.
self.causal_estimator = causal_estimator_class(
self._data,
identified_estimand,
self._treatment,
self._outcome,
test_significance=False,
**method_params,
)
else:
# Estimator had been computed in a previous call
assert self.causal_estimator is not None
try:
estimate = self.causal_estimator.do(x)
except NotImplementedError:
self.logger.error("Do Operation not implemented or not supported for this estimator.")
raise NotImplementedError
return estimate
def refute_estimate(self, estimand, estimate, method_name=None, show_progress_bar=False, **kwargs):
"""Refute an estimated causal effect.
        If method_name is provided, uses the provided method. In the future, we may support automatic selection of suitable refutation tests. The following refutation methods are supported.
* Adding a randomly-generated confounder: "random_common_cause"
* Adding a confounder that is associated with both treatment and outcome: "add_unobserved_common_cause"
            * Replacing the treatment with a placebo (random) variable: "placebo_treatment_refuter"
* Removing a random subset of the data: "data_subset_refuter"
:param estimand: target estimand, an instance of the IdentifiedEstimand class (typically, the output of identify_effect)
:param estimate: estimate to be refuted, an instance of the CausalEstimate class (typically, the output of estimate_effect)
:param method_name: name of the refutation method
:param show_progress_bar: Boolean flag on whether to show a progress bar
:param kwargs: (optional) additional arguments that are passed directly to the refutation method. Can specify a random seed here to ensure reproducible results ('random_seed' parameter). For method-specific parameters, consult the documentation for the specific method. All refutation methods are in the causal_refuters subpackage.
:returns: an instance of the RefuteResult class
"""
if estimate is None or estimate.value is None:
self.logger.error("Aborting refutation! No estimate is provided.")
raise ValueError("Aborting refutation! No valid estimate is provided.")
if method_name is None:
pass
else:
refuter_class = causal_refuters.get_class_object(method_name)
refuter = refuter_class(self._data, identified_estimand=estimand, estimate=estimate, **kwargs)
res = refuter.refute_estimate(show_progress_bar)
return res
def view_model(self, layout="dot", size=(8, 6), file_name="causal_model"):
"""View the causal DAG.
:param layout: string specifying the layout of the graph.
:param size: tuple (x, y) specifying the width and height of the figure in inches.
:param file_name: string specifying the file name for the saved causal graph png.
:returns: a visualization of the graph
"""
self._graph.view_graph(layout, size, file_name)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal model.
:param method_name: method used for interpreting the model. If None,
then default interpreter is chosen that describes the model summary and shows the associated causal graph.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
self.summary(print_to_stdout=True)
self.view_model()
return
method_name_arr = parse_state(method_name)
import dowhy.interpreters as interpreters
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def summary(self, print_to_stdout=False):
"""Print a text summary of the model.
        :returns: a string containing the summary
"""
summary_text = "Model to find the causal effect of treatment {0} on outcome {1}".format(
self._treatment, self._outcome
)
self.logger.info(summary_text)
if print_to_stdout:
print(summary_text)
return summary_text
def refute_graph(self, k=1, independence_test=None, independence_constraints=None):
"""
        Check if the dependencies in the input graph match the dataset -
( X ⫫ Y ) | Z
where X and Y are considered as singleton sets currently
Z can have multiple variables
:param k: number of covariates in set Z
        :param independence_test: dictionary containing methods to test conditional independence in data
        :param independence_constraints: list of implications to be tested, input by the user in the format
[(x,y,(z1,z2)),
(x,y, (z3,))
]
        :returns: an instance of GraphRefuter class
"""
if independence_test is not None:
test_for_continuous = independence_test["test_for_continuous"]
test_for_discrete = independence_test["test_for_discrete"]
refuter = GraphRefuter(
data=self._data, method_name_continuous=test_for_continuous, method_name_discrete=test_for_discrete
)
else:
refuter = GraphRefuter(data=self._data)
if independence_constraints is None:
all_nodes = list(self._graph.get_all_nodes(include_unobserved=False))
num_nodes = len(all_nodes)
array_indices = list(range(0, num_nodes))
all_possible_combinations = list(
combinations(array_indices, 2)
) # Generating sets of indices of size 2 for different x and y
conditional_independences = []
self.logger.info("The followed conditional independences are true for the input graph")
for combination in all_possible_combinations: # Iterate over the unique 2-sized sets [x,y]
i = combination[0]
j = combination[1]
a = all_nodes[i]
b = all_nodes[j]
if i < j:
temp_arr = all_nodes[:i] + all_nodes[i + 1 : j] + all_nodes[j + 1 :]
else:
temp_arr = all_nodes[:j] + all_nodes[j + 1 : i] + all_nodes[i + 1 :]
k_sized_lists = list(combinations(temp_arr, k))
for k_list in k_sized_lists:
if self._graph.check_dseparation([str(a)], [str(b)], k_list) == True:
self.logger.info(" %s and %s are CI given %s ", a, b, k_list)
conditional_independences.append([a, b, k_list])
independence_constraints = conditional_independences
res = refuter.refute_model(independence_constraints=independence_constraints)
self.logger.info(refuter._refutation_passed)
return res
| andresmor-ms | 2044d216c322a4b32c6eadce5da7d83463f19c2f | 05bfa49dacf0061988c96c6f3e3756219df5422a | Got it, will take it into account for the PR where i refactor the estimator objects | andresmor-ms | 299 |
py-why/dowhy | 672 | Functional api/refute estimate | * Refactor refuters into functions
* Rename functional_api notebook for clarity
* Add return types to identify_estimate
* Update `__init__.py` for imports
* Add joblib for bootstrap refuter
* Create `refute_estimate` function
* Add types for refuter parameters & return types | null | 2022-10-04 16:18:49+00:00 | 2022-10-07 04:30:22+00:00 | dowhy/causal_identifier/auto_identifier.py | import itertools
import logging
from enum import Enum
from typing import Dict, List, Optional, Union
import sympy as sp
import sympy.stats as spstats
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.efficient_backdoor import EfficientBackdoor
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class EstimandType(Enum):
# Average total effect
NONPARAMETRIC_ATE = "nonparametric-ate"
# Natural direct effect
NONPARAMETRIC_NDE = "nonparametric-nde"
# Natural indirect effect
NONPARAMETRIC_NIE = "nonparametric-nie"
# Controlled direct effect
NONPARAMETRIC_CDE = "nonparametric-cde"
class BackdoorAdjustment(Enum):
# Backdoor method names
BACKDOOR_DEFAULT = "default"
BACKDOOR_EXHAUSTIVE = "exhaustive-search"
BACKDOOR_MIN = "minimal-adjustment"
BACKDOOR_MAX = "maximal-adjustment"
BACKDOOR_EFFICIENT = "efficient-adjustment"
BACKDOOR_MIN_EFFICIENT = "efficient-minimal-adjustment"
BACKDOOR_MINCOST_EFFICIENT = "efficient-mincost-adjustment"
MAX_BACKDOOR_ITERATIONS = 100000
METHOD_NAMES = {
BackdoorAdjustment.BACKDOOR_DEFAULT,
BackdoorAdjustment.BACKDOOR_EXHAUSTIVE,
BackdoorAdjustment.BACKDOOR_MIN,
BackdoorAdjustment.BACKDOOR_MAX,
BackdoorAdjustment.BACKDOOR_EFFICIENT,
BackdoorAdjustment.BACKDOOR_MIN_EFFICIENT,
BackdoorAdjustment.BACKDOOR_MINCOST_EFFICIENT,
}
EFFICIENT_METHODS = {
BackdoorAdjustment.BACKDOOR_EFFICIENT,
BackdoorAdjustment.BACKDOOR_MIN_EFFICIENT,
BackdoorAdjustment.BACKDOOR_MINCOST_EFFICIENT,
}
DEFAULT_BACKDOOR_METHOD = BackdoorAdjustment.BACKDOOR_DEFAULT
class AutoIdentifier:
"""Class that implements different identification methods.
Currently supports backdoor and instrumental variable identification methods. The identification is based on the causal graph provided.
This class is for backwards compatibility with CausalModel
Will be deprecated in the future in favor of function call auto_identify_effect()
"""
def __init__(
self,
estimand_type: EstimandType,
backdoor_adjustment: BackdoorAdjustment = BackdoorAdjustment.BACKDOOR_DEFAULT,
proceed_when_unidentifiable: bool = False,
optimize_backdoor: bool = False,
costs: Optional[List] = None,
):
self.estimand_type = estimand_type
self.backdoor_adjustment = backdoor_adjustment
self._proceed_when_unidentifiable = proceed_when_unidentifiable
self.optimize_backdoor = optimize_backdoor
self.costs = costs
self.logger = logging.getLogger(__name__)
def identify_effect(
self,
graph: CausalGraph,
treatment_name: Union[str, List[str]],
outcome_name: Union[str, List[str]],
conditional_node_names: List[str] = None,
**kwargs,
):
estimand = auto_identify_effect(
graph,
treatment_name,
outcome_name,
self.estimand_type,
conditional_node_names,
self.backdoor_adjustment,
self._proceed_when_unidentifiable,
self.optimize_backdoor,
self.costs,
**kwargs,
)
estimand.identifier = self
return estimand
def identify_backdoor(
self,
graph: CausalGraph,
treatment_name: List[str],
outcome_name: str,
include_unobserved: bool = False,
dseparation_algo: str = "default",
direct_effect: bool = False,
):
return identify_backdoor(
graph,
treatment_name,
outcome_name,
self.backdoor_adjustment,
include_unobserved,
dseparation_algo,
direct_effect,
)
def auto_identify_effect(
graph: CausalGraph,
treatment_name: Union[str, List[str]],
outcome_name: Union[str, List[str]],
estimand_type: EstimandType,
conditional_node_names: List[str] = None,
backdoor_adjustment: BackdoorAdjustment = BackdoorAdjustment.BACKDOOR_DEFAULT,
proceed_when_unidentifiable: bool = False,
optimize_backdoor: bool = False,
costs: Optional[List] = None,
**kwargs,
):
"""Main method that returns an identified estimand (if one exists).
If estimand_type is non-parametric ATE, then uses backdoor, instrumental variable and frontdoor identification methods, to check if an identified estimand exists, based on the causal graph.
:param optimize_backdoor: if True, uses an optimised algorithm to compute the backdoor sets
:param costs: non-negative costs associated with variables in the graph. Only used
for estimand_type='non-parametric-ate' and backdoor_adjustment='efficient-mincost-adjustment'. If
no costs are provided by the user, and backdoor_adjustment='efficient-mincost-adjustment', costs
are assumed to be equal to one for all variables in the graph.
:param conditional_node_names: variables that are used to determine treatment. If none are
provided, it is assumed that the intervention is static.
:returns: target estimand, an instance of the IdentifiedEstimand class
"""
treatment_name = parse_state(treatment_name)
outcome_name = parse_state(outcome_name)
# First, check if there is a directed path from action to outcome
if not graph.has_directed_path(treatment_name, outcome_name):
logger.warn("No directed path from treatment to outcome. Causal Effect is zero.")
return IdentifiedEstimand(
None,
treatment_variable=treatment_name,
outcome_variable=outcome_name,
no_directed_path=True,
)
if estimand_type == EstimandType.NONPARAMETRIC_ATE:
return identify_ate_effect(
graph,
treatment_name,
outcome_name,
backdoor_adjustment,
optimize_backdoor,
estimand_type,
costs,
conditional_node_names,
proceed_when_unidentifiable,
)
elif estimand_type == EstimandType.NONPARAMETRIC_NDE:
return identify_nde_effect(
graph, treatment_name, outcome_name, backdoor_adjustment, estimand_type, proceed_when_unidentifiable
)
elif estimand_type == EstimandType.NONPARAMETRIC_NIE:
return identify_nie_effect(
graph, treatment_name, outcome_name, backdoor_adjustment, estimand_type, proceed_when_unidentifiable
)
elif estimand_type == EstimandType.NONPARAMETRIC_CDE:
return identify_cde_effect(
graph, treatment_name, outcome_name, backdoor_adjustment, estimand_type, proceed_when_unidentifiable
)
else:
raise ValueError(
"Estimand type is not supported. Use either {0}, {1}, or {2}.".format(
EstimandType.NONPARAMETRIC_ATE,
EstimandType.NONPARAMETRIC_CDE,
EstimandType.NONPARAMETRIC_NDE,
EstimandType.NONPARAMETRIC_NIE,
)
)
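# --- Illustrative usage sketch (editor's addition, not part of the original
# module). `graph` is assumed to be a dowhy.causal_graph.CausalGraph built
# elsewhere; the treatment and outcome names must match nodes in that graph.
#
#     estimand = auto_identify_effect(
#         graph,
#         treatment_name="v0",
#         outcome_name="y",
#         estimand_type=EstimandType.NONPARAMETRIC_ATE,
#     )
#     print(estimand)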
def identify_ate_effect(
graph: CausalGraph,
treatment_name: List[str],
outcome_name: str,
backdoor_adjustment: BackdoorAdjustment,
optimize_backdoor: bool,
estimand_type: EstimandType,
costs: List,
conditional_node_names: List[str] = None,
proceed_when_unidentifiable: bool = False,
):
estimands_dict = {}
mediation_first_stage_confounders = None
mediation_second_stage_confounders = None
### 1. BACKDOOR IDENTIFICATION
# Pick algorithm to compute backdoor sets according to method chosen
if backdoor_adjustment not in EFFICIENT_METHODS:
# First, checking if there are any valid backdoor adjustment sets
if optimize_backdoor == False:
backdoor_sets = identify_backdoor(graph, treatment_name, outcome_name, backdoor_adjustment)
else:
from dowhy.causal_identifier.backdoor import Backdoor
path = Backdoor(graph._graph, treatment_name, outcome_name)
backdoor_sets = path.get_backdoor_vars()
elif backdoor_adjustment in EFFICIENT_METHODS:
backdoor_sets = identify_efficient_backdoor(
graph, backdoor_adjustment, costs, conditional_node_names=conditional_node_names
)
estimands_dict, backdoor_variables_dict = build_backdoor_estimands_dict(
graph, treatment_name, outcome_name, backdoor_sets, estimands_dict
)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = get_default_backdoor_set_id(graph, treatment_name, outcome_name, backdoor_variables_dict)
if len(backdoor_variables_dict) > 0:
estimands_dict["backdoor"] = estimands_dict.get(str(default_backdoor_id), None)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(str(default_backdoor_id), None)
else:
estimands_dict["backdoor"] = None
### 2. INSTRUMENTAL VARIABLE IDENTIFICATION
# Now checking if there is also a valid iv estimand
instrument_names = graph.get_instruments(treatment_name, outcome_name)
logger.info("Instrumental variables for treatment and outcome:" + str(instrument_names))
if len(instrument_names) > 0:
iv_estimand_expr = construct_iv_estimand(
treatment_name,
outcome_name,
instrument_names,
)
logger.debug("Identified expression = " + str(iv_estimand_expr))
estimands_dict["iv"] = iv_estimand_expr
else:
estimands_dict["iv"] = None
### 3. FRONTDOOR IDENTIFICATION
# Now checking if there is a valid frontdoor variable
frontdoor_variables_names = identify_frontdoor(graph, treatment_name, outcome_name)
logger.info("Frontdoor variables for treatment and outcome:" + str(frontdoor_variables_names))
if len(frontdoor_variables_names) > 0:
frontdoor_estimand_expr = construct_frontdoor_estimand(
treatment_name,
outcome_name,
frontdoor_variables_names,
)
logger.debug("Identified expression = " + str(frontdoor_estimand_expr))
estimands_dict["frontdoor"] = frontdoor_estimand_expr
mediation_first_stage_confounders = identify_mediation_first_stage_confounders(
graph, treatment_name, outcome_name, frontdoor_variables_names, backdoor_adjustment
)
mediation_second_stage_confounders = identify_mediation_second_stage_confounders(
graph, treatment_name, frontdoor_variables_names, outcome_name, backdoor_adjustment
)
else:
estimands_dict["frontdoor"] = None
# Finally returning the estimand object
estimand = IdentifiedEstimand(
None,
treatment_variable=treatment_name,
outcome_variable=outcome_name,
estimand_type=estimand_type,
estimands=estimands_dict,
backdoor_variables=backdoor_variables_dict,
instrumental_variables=instrument_names,
frontdoor_variables=frontdoor_variables_names,
mediation_first_stage_confounders=mediation_first_stage_confounders,
mediation_second_stage_confounders=mediation_second_stage_confounders,
default_backdoor_id=default_backdoor_id,
)
return estimand
def identify_cde_effect(
graph: CausalGraph,
treatment_name: List[str],
outcome_name: str,
backdoor_adjustment: BackdoorAdjustment,
estimand_type: EstimandType,
proceed_when_unidentifiable: bool = False,
):
"""Identify controlled direct effect. For a definition, see Vanderwheele (2011).
Controlled direct and mediated effects: definition, identification and bounds.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4193506/
    Using do-calculus rules, identification yields an adjustment set.
It is based on the principle that under a graph where the direct edge from treatment
to outcome is removed, conditioning on the adjustment set should d-separate
treatment and outcome.
"""
estimands_dict = {}
# Pick algorithm to compute backdoor sets according to method chosen
backdoor_sets = identify_backdoor(graph, treatment_name, outcome_name, backdoor_adjustment, direct_effect=True)
estimands_dict, backdoor_variables_dict = build_backdoor_estimands_dict(
graph, treatment_name, outcome_name, backdoor_sets, estimands_dict
)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = get_default_backdoor_set_id(graph, treatment_name, outcome_name, backdoor_variables_dict)
if len(backdoor_variables_dict) > 0:
estimands_dict["backdoor"] = estimands_dict.get(str(default_backdoor_id), None)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(str(default_backdoor_id), None)
else:
estimands_dict["backdoor"] = None
# Finally returning the estimand object
estimand = IdentifiedEstimand(
None,
treatment_variable=treatment_name,
outcome_variable=outcome_name,
estimand_type=estimand_type,
estimands=estimands_dict,
backdoor_variables=backdoor_variables_dict,
instrumental_variables=None,
frontdoor_variables=None,
mediation_first_stage_confounders=None,
mediation_second_stage_confounders=None,
default_backdoor_id=default_backdoor_id,
)
return estimand
def identify_nie_effect(
graph: CausalGraph,
treatment_name: List[str],
outcome_name: str,
backdoor_adjustment: BackdoorAdjustment,
estimand_type: EstimandType,
proceed_when_unidentifiable: bool = False,
):
estimands_dict = {}
### 1. FIRST DOING BACKDOOR IDENTIFICATION
# First, checking if there are any valid backdoor adjustment sets
backdoor_sets = identify_backdoor(graph, treatment_name, outcome_name, backdoor_adjustment)
estimands_dict, backdoor_variables_dict = build_backdoor_estimands_dict(
graph, treatment_name, outcome_name, backdoor_sets, estimands_dict
)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = get_default_backdoor_set_id(graph, treatment_name, outcome_name, backdoor_variables_dict)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(str(default_backdoor_id), None)
### 2. SECOND, CHECKING FOR MEDIATORS
# Now checking if there are valid mediator variables
estimands_dict = {} # Need to reinitialize this dictionary to avoid including the backdoor sets
mediation_first_stage_confounders = None
mediation_second_stage_confounders = None
mediators_names = identify_mediation(graph, treatment_name, outcome_name)
logger.info("Mediators for treatment and outcome:" + str(mediators_names))
if len(mediators_names) > 0:
mediation_estimand_expr = construct_mediation_estimand(
estimand_type,
treatment_name,
outcome_name,
mediators_names,
)
logger.debug("Identified expression = " + str(mediation_estimand_expr))
estimands_dict["mediation"] = mediation_estimand_expr
mediation_first_stage_confounders = identify_mediation_first_stage_confounders(
graph, treatment_name, outcome_name, mediators_names, backdoor_adjustment
)
mediation_second_stage_confounders = identify_mediation_second_stage_confounders(
graph, treatment_name, mediators_names, outcome_name, backdoor_adjustment
)
else:
estimands_dict["mediation"] = None
# Finally returning the estimand object
estimand = IdentifiedEstimand(
None,
treatment_variable=treatment_name,
outcome_variable=outcome_name,
estimand_type=estimand_type,
estimands=estimands_dict,
backdoor_variables=backdoor_variables_dict,
instrumental_variables=None,
frontdoor_variables=None,
mediator_variables=mediators_names,
mediation_first_stage_confounders=mediation_first_stage_confounders,
mediation_second_stage_confounders=mediation_second_stage_confounders,
default_backdoor_id=None,
)
return estimand
def identify_nde_effect(
graph: CausalGraph,
treatment_name: List[str],
outcome_name: str,
backdoor_adjustment: BackdoorAdjustment,
estimand_type: EstimandType,
proceed_when_unidentifiable: bool = False,
):
estimands_dict = {}
### 1. FIRST DOING BACKDOOR IDENTIFICATION
# First, checking if there are any valid backdoor adjustment sets
backdoor_sets = identify_backdoor(graph, treatment_name, outcome_name, backdoor_adjustment)
estimands_dict, backdoor_variables_dict = build_backdoor_estimands_dict(
graph, treatment_name, outcome_name, backdoor_sets, estimands_dict
)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = get_default_backdoor_set_id(graph, treatment_name, outcome_name, backdoor_variables_dict)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(str(default_backdoor_id), None)
### 2. SECOND, CHECKING FOR MEDIATORS
# Now checking if there are valid mediator variables
estimands_dict = {}
mediation_first_stage_confounders = None
mediation_second_stage_confounders = None
mediators_names = identify_mediation(graph, treatment_name, outcome_name)
logger.info("Mediators for treatment and outcome:" + str(mediators_names))
if len(mediators_names) > 0:
mediation_estimand_expr = construct_mediation_estimand(
estimand_type,
treatment_name,
outcome_name,
mediators_names,
)
logger.debug("Identified expression = " + str(mediation_estimand_expr))
estimands_dict["mediation"] = mediation_estimand_expr
mediation_first_stage_confounders = identify_mediation_first_stage_confounders(
graph, treatment_name, outcome_name, mediators_names, backdoor_adjustment
)
mediation_second_stage_confounders = identify_mediation_second_stage_confounders(
graph, treatment_name, mediators_names, outcome_name, backdoor_adjustment
)
else:
estimands_dict["mediation"] = None
# Finally returning the estimand object
estimand = IdentifiedEstimand(
None,
treatment_variable=treatment_name,
outcome_variable=outcome_name,
estimand_type=estimand_type,
estimands=estimands_dict,
backdoor_variables=backdoor_variables_dict,
instrumental_variables=None,
frontdoor_variables=None,
mediator_variables=mediators_names,
mediation_first_stage_confounders=mediation_first_stage_confounders,
mediation_second_stage_confounders=mediation_second_stage_confounders,
default_backdoor_id=None,
)
return estimand
def identify_backdoor(
graph: CausalGraph,
treatment_name: List[str],
outcome_name: str,
backdoor_adjustment: BackdoorAdjustment,
include_unobserved: bool = False,
dseparation_algo: str = "default",
direct_effect: bool = False,
):
backdoor_sets = []
backdoor_paths = None
bdoor_graph = None
if dseparation_algo == "naive":
backdoor_paths = graph.get_backdoor_paths(treatment_name, outcome_name)
elif dseparation_algo == "default":
bdoor_graph = graph.do_surgery(
treatment_name,
target_node_names=outcome_name,
remove_outgoing_edges=True,
remove_only_direct_edges_to_target=direct_effect,
)
else:
raise ValueError(f"d-separation algorithm {dseparation_algo} is not supported")
backdoor_adjustment = (
backdoor_adjustment if backdoor_adjustment != BackdoorAdjustment.BACKDOOR_DEFAULT else DEFAULT_BACKDOOR_METHOD
)
# First, checking if empty set is a valid backdoor set
empty_set = set()
check = graph.check_valid_backdoor_set(
treatment_name,
outcome_name,
empty_set,
backdoor_paths=backdoor_paths,
new_graph=bdoor_graph,
dseparation_algo=dseparation_algo,
)
if check["is_dseparated"]:
backdoor_sets.append({"backdoor_set": empty_set})
# If the method is `minimal-adjustment`, return the empty set right away.
if backdoor_adjustment == BackdoorAdjustment.BACKDOOR_MIN:
return backdoor_sets
# Second, checking for all other sets of variables. If include_unobserved is false, then only observed variables are eligible.
eligible_variables = (
graph.get_all_nodes(include_unobserved=include_unobserved) - set(treatment_name) - set(outcome_name)
)
if direct_effect:
# only remove descendants of Y
# also allow any causes of Y that are not caused by T (for lower variance)
eligible_variables -= graph.get_descendants(outcome_name)
else:
# remove descendants of T (mediators) and descendants of Y
eligible_variables -= graph.get_descendants(treatment_name)
    # If var is d-separated from both treatment and outcome, it cannot
    # be part of the backdoor set
filt_eligible_variables = set()
for var in eligible_variables:
dsep_treat_var = graph.check_dseparation(treatment_name, parse_state(var), set())
dsep_outcome_var = graph.check_dseparation(outcome_name, parse_state(var), set())
if not dsep_outcome_var or not dsep_treat_var:
filt_eligible_variables.add(var)
if backdoor_adjustment in METHOD_NAMES:
backdoor_sets, found_valid_adjustment_set = find_valid_adjustment_sets(
graph,
treatment_name,
outcome_name,
backdoor_paths,
bdoor_graph,
dseparation_algo,
backdoor_sets,
filt_eligible_variables,
backdoor_adjustment=backdoor_adjustment,
max_iterations=MAX_BACKDOOR_ITERATIONS,
)
if backdoor_adjustment == BackdoorAdjustment.BACKDOOR_DEFAULT and found_valid_adjustment_set:
# repeat the above search with BACKDOOR_MIN
backdoor_sets, _ = find_valid_adjustment_sets(
graph,
treatment_name,
outcome_name,
backdoor_paths,
bdoor_graph,
dseparation_algo,
backdoor_sets,
filt_eligible_variables,
backdoor_adjustment=BackdoorAdjustment.BACKDOOR_MIN,
max_iterations=MAX_BACKDOOR_ITERATIONS,
)
else:
raise ValueError(
f"Identifier method {backdoor_adjustment} not supported. Try one of the following: {METHOD_NAMES}"
)
return backdoor_sets
def identify_efficient_backdoor(
graph: CausalGraph,
backdoor_adjustment: BackdoorAdjustment,
costs: List,
conditional_node_names: List[str] = None,
):
"""Method implementing algorithms to compute efficient backdoor sets, as
described in Rotnitzky and Smucler (2020), Smucler, Sapienza and Rotnitzky (2021)
and Smucler and Rotnitzky (2022).
For backdoor_adjustment='efficient-adjustment', computes an optimal backdoor set,
that is, a backdoor set comprised of observable variables that yields non-parametric
estimators of the interventional mean with the smallest asymptotic variance
among those that are based on observable backdoor sets. This optimal backdoor
set always exists when no variables are latent, and the algorithm is guaranteed to compute
it in this case. Under a non-parametric graphical model with latent variables,
such a backdoor set can fail to exist. When certain sufficient conditions under which it is
known that such a backdoor set exists are not satisfied, an error is raised.
For backdoor_adjustment='efficient-minimal-adjustment', computes an optimal minimal backdoor set,
that is, a minimal backdoor set comprised of observable variables that yields non-parametric
estimators of the interventional mean with the smallest asymptotic variance
among those that are based on observable minimal backdoor sets.
For backdoor_adjustment='efficient-mincost-adjustment', computes an optimal minimum cost backdoor set,
that is, a minimum cost backdoor set comprised of observable variables that yields non-parametric
estimators of the interventional mean with the smallest asymptotic variance
among those that are based on observable minimum cost backdoor sets. The cost
of a backdoor set is defined as the sum of the costs of the variables that comprise it.
The various optimal backdoor sets computed by this method are not only optimal under
non-parametric graphical models and non-parametric estimators of interventional mean,
but also under linear graphical models and OLS estimators, per results in Henckel, Perkovic
and Maathuis (2020).
:param costs: a list with non-negative costs associated with variables in the graph. Only used
        for estimand_type='non-parametric-ate' and backdoor_adjustment='efficient-mincost-adjustment'. If
        no costs are provided by the user, and backdoor_adjustment='efficient-mincost-adjustment', costs
are assumed to be equal to one for all variables in the graph. The structure of the list should
be of the form [(node, {"cost": x}) for node in nodes].
:param conditional_node_names: variables that are used to determine treatment. If none are
provided, it is assumed that the intervention sets the treatment to a constant.
:returns: backdoor_sets, a list of dictionaries, with each dictionary
having as values a backdoor set.
"""
    if costs is None and backdoor_adjustment == BackdoorAdjustment.BACKDOOR_MINCOST_EFFICIENT:
logger.warning("No costs were passed, so they will be assumed to be constant and equal to 1.")
efficient_bd = EfficientBackdoor(
graph=graph,
conditional_node_names=conditional_node_names,
costs=costs,
)
if backdoor_adjustment == BackdoorAdjustment.BACKDOOR_EFFICIENT:
backdoor_set = efficient_bd.optimal_adj_set()
backdoor_sets = [{"backdoor_set": tuple(backdoor_set)}]
elif backdoor_adjustment == BackdoorAdjustment.BACKDOOR_MIN_EFFICIENT:
backdoor_set = efficient_bd.optimal_minimal_adj_set()
backdoor_sets = [{"backdoor_set": tuple(backdoor_set)}]
elif backdoor_adjustment == BackdoorAdjustment.BACKDOOR_MINCOST_EFFICIENT:
backdoor_set = efficient_bd.optimal_mincost_adj_set()
backdoor_sets = [{"backdoor_set": tuple(backdoor_set)}]
return backdoor_sets
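# Illustrative sketch of the `costs` structure documented above (node names are
# hypothetical): one (node, {"cost": value}) pair per variable, used with the
# minimum-cost efficient adjustment method.
#
#     costs = [("W1", {"cost": 1}), ("W2", {"cost": 5}), ("Z", {"cost": 2})]
#     backdoor_sets = identify_efficient_backdoor(
#         graph, BackdoorAdjustment.BACKDOOR_MINCOST_EFFICIENT, costs
#     )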
def find_valid_adjustment_sets(
graph: CausalGraph,
treatment_name: List,
outcome_name: List,
backdoor_paths: List,
bdoor_graph: CausalGraph,
dseparation_algo: str,
backdoor_sets: List,
filt_eligible_variables: List,
backdoor_adjustment: BackdoorAdjustment,
max_iterations: int,
):
num_iterations = 0
found_valid_adjustment_set = False
all_nodes_observed = graph.all_observed(graph.get_all_nodes())
# If `minimal-adjustment` method is specified, start the search from the set with minimum size. Otherwise, start from the largest.
set_sizes = (
range(1, len(filt_eligible_variables) + 1, 1)
if backdoor_adjustment == BackdoorAdjustment.BACKDOOR_MIN
else range(len(filt_eligible_variables), 0, -1)
)
for size_candidate_set in set_sizes:
for candidate_set in itertools.combinations(filt_eligible_variables, size_candidate_set):
check = graph.check_valid_backdoor_set(
treatment_name,
outcome_name,
candidate_set,
backdoor_paths=backdoor_paths,
new_graph=bdoor_graph,
dseparation_algo=dseparation_algo,
)
logger.debug(
"Candidate backdoor set: {0}, is_dseparated: {1}".format(candidate_set, check["is_dseparated"])
)
if check["is_dseparated"]:
backdoor_sets.append({"backdoor_set": candidate_set})
found_valid_adjustment_set = True
num_iterations += 1
if backdoor_adjustment == BackdoorAdjustment.BACKDOOR_EXHAUSTIVE and num_iterations > max_iterations:
logger.warning(f"Max number of iterations {max_iterations} reached.")
break
# If the backdoor method is `maximal-adjustment` or `minimal-adjustment`, return the first found adjustment set.
if (
backdoor_adjustment
in {
BackdoorAdjustment.BACKDOOR_DEFAULT,
BackdoorAdjustment.BACKDOOR_MAX,
BackdoorAdjustment.BACKDOOR_MIN,
}
and found_valid_adjustment_set
):
break
# If all variables are observed, and the biggest eligible set
# does not satisfy backdoor, then none of its subsets will.
if (
backdoor_adjustment in {BackdoorAdjustment.BACKDOOR_DEFAULT, BackdoorAdjustment.BACKDOOR_MAX}
and all_nodes_observed
):
break
if num_iterations > max_iterations:
logger.warning(f"Max number of iterations {max_iterations} reached. Could not find a valid backdoor set.")
break
return backdoor_sets, found_valid_adjustment_set
def get_default_backdoor_set_id(
graph: CausalGraph, treatment_name: List[str], outcome_name: List[str], backdoor_sets_dict: Dict
):
# Adding a None estimand if no backdoor set found
if len(backdoor_sets_dict) == 0:
return None
    # The default set contains the minimum possible number of instrumental variables, since conditioning on instruments reduces the variance of the treatment variable and thereby inflates the variance of the effect estimate.
instrument_names = set(graph.get_instruments(treatment_name, outcome_name))
iv_count_dict = {
key: len(set(bdoor_set).intersection(instrument_names)) for key, bdoor_set in backdoor_sets_dict.items()
}
min_iv_count = min(iv_count_dict.values())
min_iv_keys = {key for key, iv_count in iv_count_dict.items() if iv_count == min_iv_count}
min_iv_backdoor_sets_dict = {key: backdoor_sets_dict[key] for key in min_iv_keys}
# Default set is the one with the least number of adjustment variables (optimizing for efficiency)
min_set_length = 1000000
default_key = None
for key, bdoor_set in min_iv_backdoor_sets_dict.items():
if len(bdoor_set) < min_set_length:
min_set_length = len(bdoor_set)
default_key = key
return default_key
def build_backdoor_estimands_dict(
graph: CausalGraph,
treatment_name: List[str],
outcome_name: List[str],
backdoor_sets: List[str],
estimands_dict: Dict,
):
"""Build the final dict for backdoor sets by filtering unobserved variables if needed."""
backdoor_variables_dict = {}
is_identified = [graph.all_observed(bset["backdoor_set"]) for bset in backdoor_sets]
if any(is_identified):
logger.info("Causal effect can be identified.")
backdoor_sets_arr = [
list(bset["backdoor_set"]) for bset in backdoor_sets if graph.all_observed(bset["backdoor_set"])
]
else: # there is unobserved confounding
logger.warning("Backdoor identification failed.")
backdoor_sets_arr = []
for i in range(len(backdoor_sets_arr)):
backdoor_estimand_expr = construct_backdoor_estimand(treatment_name, outcome_name, backdoor_sets_arr[i])
logger.debug("Identified expression = " + str(backdoor_estimand_expr))
estimands_dict["backdoor" + str(i + 1)] = backdoor_estimand_expr
backdoor_variables_dict["backdoor" + str(i + 1)] = backdoor_sets_arr[i]
return estimands_dict, backdoor_variables_dict
def identify_frontdoor(
graph: CausalGraph, treatment_name: List[str], outcome_name: List[str], dseparation_algo: str = "default"
):
"""Find a valid frontdoor variable if it exists.
Currently only supports a single variable frontdoor set.
"""
frontdoor_var = None
frontdoor_paths = None
fdoor_graph = None
if dseparation_algo == "default":
cond1_graph = graph.do_surgery(treatment_name, remove_incoming_edges=True)
bdoor_graph1 = graph.do_surgery(treatment_name, remove_outgoing_edges=True)
elif dseparation_algo == "naive":
frontdoor_paths = graph.get_all_directed_paths(treatment_name, outcome_name)
else:
raise ValueError(f"d-separation algorithm {dseparation_algo} is not supported")
eligible_variables = (
graph.get_descendants(treatment_name) - set(outcome_name) - set(graph.get_descendants(outcome_name))
)
# For simplicity, assuming a one-variable frontdoor set
for candidate_var in eligible_variables:
# Cond 1: All directed paths intercepted by candidate_var
cond1 = graph.check_valid_frontdoor_set(
treatment_name,
outcome_name,
parse_state(candidate_var),
frontdoor_paths=frontdoor_paths,
new_graph=cond1_graph,
dseparation_algo=dseparation_algo,
)
logger.debug("Candidate frontdoor set: {0}, is_dseparated: {1}".format(candidate_var, cond1))
if not cond1:
continue
# Cond 2: No confounding between treatment and candidate var
cond2 = graph.check_valid_backdoor_set(
treatment_name,
parse_state(candidate_var),
set(),
backdoor_paths=None,
new_graph=bdoor_graph1,
dseparation_algo=dseparation_algo,
)
if not cond2:
continue
# Cond 3: treatment blocks all confounding between candidate_var and outcome
bdoor_graph2 = graph.do_surgery(candidate_var, remove_outgoing_edges=True)
cond3 = graph.check_valid_backdoor_set(
parse_state(candidate_var),
outcome_name,
treatment_name,
backdoor_paths=None,
new_graph=bdoor_graph2,
dseparation_algo=dseparation_algo,
)
is_valid_frontdoor = cond1 and cond2 and cond3
if is_valid_frontdoor:
frontdoor_var = candidate_var
break
return parse_state(frontdoor_var)
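# Illustrative sketch (hypothetical graph): in the classic frontdoor graph
# T -> M -> Y with an unobserved confounder U of T and Y,
# identify_frontdoor(graph, ["T"], ["Y"]) would be expected to return ["M"]:
# M intercepts all directed paths from T to Y, there is no confounding between
# T and M, and T blocks the backdoor path from M to Y through U.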
def identify_mediation(graph: CausalGraph, treatment_name: List[str], outcome_name: List[str]):
"""Find a valid mediator if it exists.
Currently only supports a single variable mediator set.
"""
mediation_var = None
mediation_paths = graph.get_all_directed_paths(treatment_name, outcome_name)
eligible_variables = graph.get_descendants(treatment_name) - set(outcome_name)
# For simplicity, assuming a one-variable mediation set
for candidate_var in eligible_variables:
is_valid_mediation = graph.check_valid_mediation_set(
treatment_name,
outcome_name,
parse_state(candidate_var),
mediation_paths=mediation_paths,
)
logger.debug("Candidate mediation set: {0}, on_mediating_path: {1}".format(candidate_var, is_valid_mediation))
if is_valid_mediation:
mediation_var = candidate_var
break
return parse_state(mediation_var)
def identify_mediation_first_stage_confounders(
graph: CausalGraph,
treatment_name: List[str],
outcome_name: List[str],
mediators_names: List[str],
backdoor_adjustment: BackdoorAdjustment,
):
# Create estimands dict as per the API for backdoor, but do not return it
estimands_dict = {}
backdoor_sets = identify_backdoor(graph, treatment_name, mediators_names, backdoor_adjustment)
estimands_dict, backdoor_variables_dict = build_backdoor_estimands_dict(
graph,
treatment_name,
mediators_names,
backdoor_sets,
estimands_dict,
)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = get_default_backdoor_set_id(graph, treatment_name, outcome_name, backdoor_variables_dict)
estimands_dict["backdoor"] = estimands_dict.get(str(default_backdoor_id), None)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(str(default_backdoor_id), None)
return backdoor_variables_dict
def identify_mediation_second_stage_confounders(
graph: CausalGraph,
treatment_name: List[str],
mediators_names: List[str],
outcome_name: List[str],
backdoor_adjustment: BackdoorAdjustment,
):
# Create estimands dict as per the API for backdoor, but do not return it
estimands_dict = {}
backdoor_sets = identify_backdoor(graph, mediators_names, outcome_name, backdoor_adjustment)
estimands_dict, backdoor_variables_dict = build_backdoor_estimands_dict(
graph,
mediators_names,
outcome_name,
backdoor_sets,
estimands_dict,
)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = get_default_backdoor_set_id(graph, treatment_name, outcome_name, backdoor_variables_dict)
estimands_dict["backdoor"] = estimands_dict.get(str(default_backdoor_id), None)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(str(default_backdoor_id), None)
return backdoor_variables_dict
def construct_backdoor_estimand(treatment_name: List[str], outcome_name: List[str], common_causes: List[str]):
# TODO: outputs string for now, but ideally should do symbolic
# expressions Mon 19 Feb 2018 04:54:17 PM DST
# TODO Better support for multivariate treatments
expr = None
outcome_name = outcome_name[0]
num_expr_str = outcome_name
if len(common_causes) > 0:
num_expr_str += "|" + ",".join(common_causes)
expr = "d(" + num_expr_str + ")/d" + ",".join(treatment_name)
sym_mu = sp.Symbol("mu")
sym_sigma = sp.Symbol("sigma", positive=True)
sym_outcome = spstats.Normal(num_expr_str, sym_mu, sym_sigma)
sym_treatment_symbols = [sp.Symbol(t) for t in treatment_name]
sym_treatment = sp.Array(sym_treatment_symbols)
sym_conditional_outcome = spstats.Expectation(sym_outcome)
sym_effect = sp.Derivative(sym_conditional_outcome, sym_treatment)
sym_assumptions = {
"Unconfoundedness": (
"If U\N{RIGHTWARDS ARROW}{{{0}}} and U\N{RIGHTWARDS ARROW}{1}" " then P({1}|{0},{2},U) = P({1}|{0},{2})"
).format(",".join(treatment_name), outcome_name, ",".join(common_causes))
}
estimand = {"estimand": sym_effect, "assumptions": sym_assumptions}
return estimand
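# Illustrative sketch (hypothetical variable names): the returned dict pairs a
# sympy expression with textual assumptions, e.g.
#
#     estimand = construct_backdoor_estimand(["T"], ["Y"], ["W"])
#     estimand["estimand"]     # Derivative of Expectation(Y|W) with respect to T
#     estimand["assumptions"]  # {"Unconfoundedness": ...}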
def construct_iv_estimand(treatment_name: List[str], outcome_name: List[str], instrument_names: List[str]):
# TODO: support multivariate treatments better.
expr = None
outcome_name = outcome_name[0]
sym_outcome = spstats.Normal(outcome_name, 0, 1)
sym_treatment_symbols = [spstats.Normal(t, 0, 1) for t in treatment_name]
sym_treatment = sp.Array(sym_treatment_symbols)
sym_instrument_symbols = [sp.Symbol(inst) for inst in instrument_names]
sym_instrument = sp.Array(sym_instrument_symbols) # ",".join(instrument_names))
sym_outcome_derivative = sp.Derivative(sym_outcome, sym_instrument)
sym_treatment_derivative = sp.Derivative(sym_treatment, sym_instrument)
sym_effect = spstats.Expectation(sym_outcome_derivative / sym_treatment_derivative)
sym_assumptions = {
"As-if-random": (
"If U\N{RIGHTWARDS ARROW}\N{RIGHTWARDS ARROW}{0} then "
"\N{NOT SIGN}(U \N{RIGHTWARDS ARROW}\N{RIGHTWARDS ARROW}{{{1}}})"
).format(outcome_name, ",".join(instrument_names)),
"Exclusion": (
"If we remove {{{0}}}\N{RIGHTWARDS ARROW}{{{1}}}, then " "\N{NOT SIGN}({{{0}}}\N{RIGHTWARDS ARROW}{2})"
).format(",".join(instrument_names), ",".join(treatment_name), outcome_name),
}
estimand = {"estimand": sym_effect, "assumptions": sym_assumptions}
return estimand
def construct_frontdoor_estimand(
treatment_name: List[str], outcome_name: List[str], frontdoor_variables_names: List[str]
):
# TODO: support multivariate treatments better.
expr = None
outcome_name = outcome_name[0]
sym_outcome = spstats.Normal(outcome_name, 0, 1)
sym_treatment_symbols = [spstats.Normal(t, 0, 1) for t in treatment_name]
sym_treatment = sp.Array(sym_treatment_symbols)
sym_frontdoor_symbols = [sp.Symbol(inst) for inst in frontdoor_variables_names]
sym_frontdoor = sp.Array(sym_frontdoor_symbols) # ",".join(instrument_names))
sym_outcome_derivative = sp.Derivative(sym_outcome, sym_frontdoor)
sym_treatment_derivative = sp.Derivative(sym_frontdoor, sym_treatment)
sym_effect = spstats.Expectation(sym_treatment_derivative * sym_outcome_derivative)
sym_assumptions = {
"Full-mediation": ("{2} intercepts (blocks) all directed paths from {0} to {1}.").format(
",".join(treatment_name),
",".join(outcome_name),
",".join(frontdoor_variables_names),
),
"First-stage-unconfoundedness": (
"If U\N{RIGHTWARDS ARROW}{{{0}}} and U\N{RIGHTWARDS ARROW}{{{1}}}" " then P({1}|{0},U) = P({1}|{0})"
).format(",".join(treatment_name), ",".join(frontdoor_variables_names)),
"Second-stage-unconfoundedness": (
"If U\N{RIGHTWARDS ARROW}{{{2}}} and U\N{RIGHTWARDS ARROW}{1}" " then P({1}|{2}, {0}, U) = P({1}|{2}, {0})"
).format(
",".join(treatment_name),
outcome_name,
",".join(frontdoor_variables_names),
),
}
estimand = {"estimand": sym_effect, "assumptions": sym_assumptions}
return estimand
def construct_mediation_estimand(
estimand_type: EstimandType, treatment_name: List[str], outcome_name: List[str], mediators_names: List[str]
):
# TODO: support multivariate treatments better.
expr = None
if estimand_type in (
EstimandType.NONPARAMETRIC_NDE,
EstimandType.NONPARAMETRIC_NIE,
):
outcome_name = outcome_name[0]
sym_outcome = spstats.Normal(outcome_name, 0, 1)
sym_treatment_symbols = [spstats.Normal(t, 0, 1) for t in treatment_name]
sym_treatment = sp.Array(sym_treatment_symbols)
sym_mediators_symbols = [sp.Symbol(inst) for inst in mediators_names]
sym_mediators = sp.Array(sym_mediators_symbols)
sym_outcome_derivative = sp.Derivative(sym_outcome, sym_mediators)
sym_treatment_derivative = sp.Derivative(sym_mediators, sym_treatment)
# For direct effect
num_expr_str = outcome_name
if len(mediators_names) > 0:
num_expr_str += "|" + ",".join(mediators_names)
sym_mu = sp.Symbol("mu")
sym_sigma = sp.Symbol("sigma", positive=True)
sym_conditional_outcome = spstats.Normal(num_expr_str, sym_mu, sym_sigma)
sym_directeffect_derivative = sp.Derivative(sym_conditional_outcome, sym_treatment)
if estimand_type == EstimandType.NONPARAMETRIC_NIE:
sym_effect = spstats.Expectation(sym_treatment_derivative * sym_outcome_derivative)
elif estimand_type == EstimandType.NONPARAMETRIC_NDE:
sym_effect = spstats.Expectation(sym_directeffect_derivative)
sym_assumptions = {
"Mediation": (
"{2} intercepts (blocks) all directed paths from {0} to {1} except the path {{{0}}}\N{RIGHTWARDS ARROW}{{{1}}}."
).format(
",".join(treatment_name),
",".join(outcome_name),
",".join(mediators_names),
),
"First-stage-unconfoundedness": (
"If U\N{RIGHTWARDS ARROW}{{{0}}} and U\N{RIGHTWARDS ARROW}{{{1}}}" " then P({1}|{0},U) = P({1}|{0})"
).format(",".join(treatment_name), ",".join(mediators_names)),
"Second-stage-unconfoundedness": (
"If U\N{RIGHTWARDS ARROW}{{{2}}} and U\N{RIGHTWARDS ARROW}{1}"
" then P({1}|{2}, {0}, U) = P({1}|{2}, {0})"
).format(",".join(treatment_name), outcome_name, ",".join(mediators_names)),
}
else:
raise ValueError(
"Estimand type not supported. Supported estimand types are {0} or {1}'.".format(
EstimandType.NONPARAMETRIC_NDE,
EstimandType.NONPARAMETRIC_NIE,
)
)
estimand = {"estimand": sym_effect, "assumptions": sym_assumptions}
return estimand
| import itertools
import logging
from enum import Enum
from typing import Dict, List, Optional, Union
import sympy as sp
import sympy.stats as spstats
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.efficient_backdoor import EfficientBackdoor
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class EstimandType(Enum):
# Average total effect
NONPARAMETRIC_ATE = "nonparametric-ate"
# Natural direct effect
NONPARAMETRIC_NDE = "nonparametric-nde"
# Natural indirect effect
NONPARAMETRIC_NIE = "nonparametric-nie"
# Controlled direct effect
NONPARAMETRIC_CDE = "nonparametric-cde"
class BackdoorAdjustment(Enum):
# Backdoor method names
BACKDOOR_DEFAULT = "default"
BACKDOOR_EXHAUSTIVE = "exhaustive-search"
BACKDOOR_MIN = "minimal-adjustment"
BACKDOOR_MAX = "maximal-adjustment"
BACKDOOR_EFFICIENT = "efficient-adjustment"
BACKDOOR_MIN_EFFICIENT = "efficient-minimal-adjustment"
BACKDOOR_MINCOST_EFFICIENT = "efficient-mincost-adjustment"
MAX_BACKDOOR_ITERATIONS = 100000
METHOD_NAMES = {
BackdoorAdjustment.BACKDOOR_DEFAULT,
BackdoorAdjustment.BACKDOOR_EXHAUSTIVE,
BackdoorAdjustment.BACKDOOR_MIN,
BackdoorAdjustment.BACKDOOR_MAX,
BackdoorAdjustment.BACKDOOR_EFFICIENT,
BackdoorAdjustment.BACKDOOR_MIN_EFFICIENT,
BackdoorAdjustment.BACKDOOR_MINCOST_EFFICIENT,
}
EFFICIENT_METHODS = {
BackdoorAdjustment.BACKDOOR_EFFICIENT,
BackdoorAdjustment.BACKDOOR_MIN_EFFICIENT,
BackdoorAdjustment.BACKDOOR_MINCOST_EFFICIENT,
}
DEFAULT_BACKDOOR_METHOD = BackdoorAdjustment.BACKDOOR_DEFAULT
class AutoIdentifier:
"""Class that implements different identification methods.
Currently supports backdoor and instrumental variable identification methods. The identification is based on the causal graph provided.
This class is for backwards compatibility with CausalModel
    Will be deprecated in the future in favor of the function identify_effect_auto()
"""
def __init__(
self,
estimand_type: EstimandType,
backdoor_adjustment: BackdoorAdjustment = BackdoorAdjustment.BACKDOOR_DEFAULT,
proceed_when_unidentifiable: bool = False,
optimize_backdoor: bool = False,
costs: Optional[List] = None,
):
self.estimand_type = estimand_type
self.backdoor_adjustment = backdoor_adjustment
self._proceed_when_unidentifiable = proceed_when_unidentifiable
self.optimize_backdoor = optimize_backdoor
self.costs = costs
self.logger = logging.getLogger(__name__)
def identify_effect(
self,
graph: CausalGraph,
treatment_name: Union[str, List[str]],
outcome_name: Union[str, List[str]],
conditional_node_names: List[str] = None,
**kwargs,
):
estimand = identify_effect_auto(
graph,
treatment_name,
outcome_name,
self.estimand_type,
conditional_node_names,
self.backdoor_adjustment,
self._proceed_when_unidentifiable,
self.optimize_backdoor,
self.costs,
**kwargs,
)
estimand.identifier = self
return estimand
def identify_backdoor(
self,
graph: CausalGraph,
treatment_name: List[str],
outcome_name: str,
include_unobserved: bool = False,
dseparation_algo: str = "default",
direct_effect: bool = False,
):
return identify_backdoor(
graph,
treatment_name,
outcome_name,
self.backdoor_adjustment,
include_unobserved,
dseparation_algo,
direct_effect,
)
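# Illustrative usage sketch of the class above (a pre-built CausalGraph `graph`
# and node names "T", "Y" are assumptions for the example):
#
#     identifier = AutoIdentifier(estimand_type=EstimandType.NONPARAMETRIC_ATE)
#     estimand = identifier.identify_effect(graph, treatment_name="T", outcome_name="Y")
#     print(estimand)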
def identify_effect_auto(
graph: CausalGraph,
treatment_name: Union[str, List[str]],
outcome_name: Union[str, List[str]],
estimand_type: EstimandType,
conditional_node_names: List[str] = None,
backdoor_adjustment: BackdoorAdjustment = BackdoorAdjustment.BACKDOOR_DEFAULT,
proceed_when_unidentifiable: bool = False,
optimize_backdoor: bool = False,
costs: Optional[List] = None,
**kwargs,
) -> IdentifiedEstimand:
"""Main method that returns an identified estimand (if one exists).
    If estimand_type is non-parametric ATE, then uses backdoor, instrumental variable and frontdoor identification methods to check whether an identified estimand exists, based on the causal graph.
:param optimize_backdoor: if True, uses an optimised algorithm to compute the backdoor sets
:param costs: non-negative costs associated with variables in the graph. Only used
for estimand_type='non-parametric-ate' and backdoor_adjustment='efficient-mincost-adjustment'. If
no costs are provided by the user, and backdoor_adjustment='efficient-mincost-adjustment', costs
are assumed to be equal to one for all variables in the graph.
:param conditional_node_names: variables that are used to determine treatment. If none are
provided, it is assumed that the intervention is static.
:returns: target estimand, an instance of the IdentifiedEstimand class
"""
treatment_name = parse_state(treatment_name)
outcome_name = parse_state(outcome_name)
# First, check if there is a directed path from action to outcome
if not graph.has_directed_path(treatment_name, outcome_name):
        logger.warning("No directed path from treatment to outcome. Causal Effect is zero.")
return IdentifiedEstimand(
None,
treatment_variable=treatment_name,
outcome_variable=outcome_name,
no_directed_path=True,
)
if estimand_type == EstimandType.NONPARAMETRIC_ATE:
return identify_ate_effect(
graph,
treatment_name,
outcome_name,
backdoor_adjustment,
optimize_backdoor,
estimand_type,
costs,
conditional_node_names,
proceed_when_unidentifiable,
)
elif estimand_type == EstimandType.NONPARAMETRIC_NDE:
return identify_nde_effect(
graph, treatment_name, outcome_name, backdoor_adjustment, estimand_type, proceed_when_unidentifiable
)
elif estimand_type == EstimandType.NONPARAMETRIC_NIE:
return identify_nie_effect(
graph, treatment_name, outcome_name, backdoor_adjustment, estimand_type, proceed_when_unidentifiable
)
elif estimand_type == EstimandType.NONPARAMETRIC_CDE:
return identify_cde_effect(
graph, treatment_name, outcome_name, backdoor_adjustment, estimand_type, proceed_when_unidentifiable
)
else:
raise ValueError(
"Estimand type is not supported. Use either {0}, {1}, or {2}.".format(
EstimandType.NONPARAMETRIC_ATE,
EstimandType.NONPARAMETRIC_CDE,
EstimandType.NONPARAMETRIC_NDE,
EstimandType.NONPARAMETRIC_NIE,
)
)
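# Illustrative sketch (hypothetical node names): mediation estimands are
# obtained from the same entry point by switching the estimand type; the same
# call shape applies for NONPARAMETRIC_NIE and NONPARAMETRIC_CDE.
#
#     nde_estimand = identify_effect_auto(
#         graph, "T", "Y", estimand_type=EstimandType.NONPARAMETRIC_NDE
#     )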
def identify_ate_effect(
graph: CausalGraph,
treatment_name: List[str],
outcome_name: str,
backdoor_adjustment: BackdoorAdjustment,
optimize_backdoor: bool,
estimand_type: EstimandType,
costs: List,
conditional_node_names: List[str] = None,
proceed_when_unidentifiable: bool = False,
):
estimands_dict = {}
mediation_first_stage_confounders = None
mediation_second_stage_confounders = None
### 1. BACKDOOR IDENTIFICATION
# Pick algorithm to compute backdoor sets according to method chosen
if backdoor_adjustment not in EFFICIENT_METHODS:
# First, checking if there are any valid backdoor adjustment sets
if optimize_backdoor == False:
backdoor_sets = identify_backdoor(graph, treatment_name, outcome_name, backdoor_adjustment)
else:
from dowhy.causal_identifier.backdoor import Backdoor
path = Backdoor(graph._graph, treatment_name, outcome_name)
backdoor_sets = path.get_backdoor_vars()
elif backdoor_adjustment in EFFICIENT_METHODS:
backdoor_sets = identify_efficient_backdoor(
graph, backdoor_adjustment, costs, conditional_node_names=conditional_node_names
)
estimands_dict, backdoor_variables_dict = build_backdoor_estimands_dict(
graph, treatment_name, outcome_name, backdoor_sets, estimands_dict
)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = get_default_backdoor_set_id(graph, treatment_name, outcome_name, backdoor_variables_dict)
if len(backdoor_variables_dict) > 0:
estimands_dict["backdoor"] = estimands_dict.get(str(default_backdoor_id), None)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(str(default_backdoor_id), None)
else:
estimands_dict["backdoor"] = None
### 2. INSTRUMENTAL VARIABLE IDENTIFICATION
# Now checking if there is also a valid iv estimand
instrument_names = graph.get_instruments(treatment_name, outcome_name)
logger.info("Instrumental variables for treatment and outcome:" + str(instrument_names))
if len(instrument_names) > 0:
iv_estimand_expr = construct_iv_estimand(
treatment_name,
outcome_name,
instrument_names,
)
logger.debug("Identified expression = " + str(iv_estimand_expr))
estimands_dict["iv"] = iv_estimand_expr
else:
estimands_dict["iv"] = None
### 3. FRONTDOOR IDENTIFICATION
# Now checking if there is a valid frontdoor variable
frontdoor_variables_names = identify_frontdoor(graph, treatment_name, outcome_name)
logger.info("Frontdoor variables for treatment and outcome:" + str(frontdoor_variables_names))
if len(frontdoor_variables_names) > 0:
frontdoor_estimand_expr = construct_frontdoor_estimand(
treatment_name,
outcome_name,
frontdoor_variables_names,
)
logger.debug("Identified expression = " + str(frontdoor_estimand_expr))
estimands_dict["frontdoor"] = frontdoor_estimand_expr
mediation_first_stage_confounders = identify_mediation_first_stage_confounders(
graph, treatment_name, outcome_name, frontdoor_variables_names, backdoor_adjustment
)
mediation_second_stage_confounders = identify_mediation_second_stage_confounders(
graph, treatment_name, frontdoor_variables_names, outcome_name, backdoor_adjustment
)
else:
estimands_dict["frontdoor"] = None
# Finally returning the estimand object
estimand = IdentifiedEstimand(
None,
treatment_variable=treatment_name,
outcome_variable=outcome_name,
estimand_type=estimand_type,
estimands=estimands_dict,
backdoor_variables=backdoor_variables_dict,
instrumental_variables=instrument_names,
frontdoor_variables=frontdoor_variables_names,
mediation_first_stage_confounders=mediation_first_stage_confounders,
mediation_second_stage_confounders=mediation_second_stage_confounders,
default_backdoor_id=default_backdoor_id,
)
return estimand
def identify_cde_effect(
graph: CausalGraph,
treatment_name: List[str],
outcome_name: str,
backdoor_adjustment: BackdoorAdjustment,
estimand_type: EstimandType,
proceed_when_unidentifiable: bool = False,
):
"""Identify controlled direct effect. For a definition, see Vanderwheele (2011).
Controlled direct and mediated effects: definition, identification and bounds.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4193506/
    Using do-calculus rules, identification yields an adjustment set.
It is based on the principle that under a graph where the direct edge from treatment
to outcome is removed, conditioning on the adjustment set should d-separate
treatment and outcome.
"""
estimands_dict = {}
# Pick algorithm to compute backdoor sets according to method chosen
backdoor_sets = identify_backdoor(graph, treatment_name, outcome_name, backdoor_adjustment, direct_effect=True)
estimands_dict, backdoor_variables_dict = build_backdoor_estimands_dict(
graph, treatment_name, outcome_name, backdoor_sets, estimands_dict
)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = get_default_backdoor_set_id(graph, treatment_name, outcome_name, backdoor_variables_dict)
if len(backdoor_variables_dict) > 0:
estimands_dict["backdoor"] = estimands_dict.get(str(default_backdoor_id), None)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(str(default_backdoor_id), None)
else:
estimands_dict["backdoor"] = None
# Finally returning the estimand object
estimand = IdentifiedEstimand(
None,
treatment_variable=treatment_name,
outcome_variable=outcome_name,
estimand_type=estimand_type,
estimands=estimands_dict,
backdoor_variables=backdoor_variables_dict,
instrumental_variables=None,
frontdoor_variables=None,
mediation_first_stage_confounders=None,
mediation_second_stage_confounders=None,
default_backdoor_id=default_backdoor_id,
)
return estimand
def identify_nie_effect(
graph: CausalGraph,
treatment_name: List[str],
outcome_name: str,
backdoor_adjustment: BackdoorAdjustment,
estimand_type: EstimandType,
proceed_when_unidentifiable: bool = False,
):
estimands_dict = {}
### 1. FIRST DOING BACKDOOR IDENTIFICATION
# First, checking if there are any valid backdoor adjustment sets
backdoor_sets = identify_backdoor(graph, treatment_name, outcome_name, backdoor_adjustment)
estimands_dict, backdoor_variables_dict = build_backdoor_estimands_dict(
graph, treatment_name, outcome_name, backdoor_sets, estimands_dict
)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = get_default_backdoor_set_id(graph, treatment_name, outcome_name, backdoor_variables_dict)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(str(default_backdoor_id), None)
### 2. SECOND, CHECKING FOR MEDIATORS
# Now checking if there are valid mediator variables
estimands_dict = {} # Need to reinitialize this dictionary to avoid including the backdoor sets
mediation_first_stage_confounders = None
mediation_second_stage_confounders = None
mediators_names = identify_mediation(graph, treatment_name, outcome_name)
logger.info("Mediators for treatment and outcome:" + str(mediators_names))
if len(mediators_names) > 0:
mediation_estimand_expr = construct_mediation_estimand(
estimand_type,
treatment_name,
outcome_name,
mediators_names,
)
logger.debug("Identified expression = " + str(mediation_estimand_expr))
estimands_dict["mediation"] = mediation_estimand_expr
mediation_first_stage_confounders = identify_mediation_first_stage_confounders(
graph, treatment_name, outcome_name, mediators_names, backdoor_adjustment
)
mediation_second_stage_confounders = identify_mediation_second_stage_confounders(
graph, treatment_name, mediators_names, outcome_name, backdoor_adjustment
)
else:
estimands_dict["mediation"] = None
# Finally returning the estimand object
estimand = IdentifiedEstimand(
None,
treatment_variable=treatment_name,
outcome_variable=outcome_name,
estimand_type=estimand_type,
estimands=estimands_dict,
backdoor_variables=backdoor_variables_dict,
instrumental_variables=None,
frontdoor_variables=None,
mediator_variables=mediators_names,
mediation_first_stage_confounders=mediation_first_stage_confounders,
mediation_second_stage_confounders=mediation_second_stage_confounders,
default_backdoor_id=None,
)
return estimand
def identify_nde_effect(
graph: CausalGraph,
treatment_name: List[str],
outcome_name: str,
backdoor_adjustment: BackdoorAdjustment,
estimand_type: EstimandType,
proceed_when_unidentifiable: bool = False,
):
estimands_dict = {}
### 1. FIRST DOING BACKDOOR IDENTIFICATION
# First, checking if there are any valid backdoor adjustment sets
backdoor_sets = identify_backdoor(graph, treatment_name, outcome_name, backdoor_adjustment)
estimands_dict, backdoor_variables_dict = build_backdoor_estimands_dict(
graph, treatment_name, outcome_name, backdoor_sets, estimands_dict
)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = get_default_backdoor_set_id(graph, treatment_name, outcome_name, backdoor_variables_dict)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(str(default_backdoor_id), None)
### 2. SECOND, CHECKING FOR MEDIATORS
# Now checking if there are valid mediator variables
estimands_dict = {}
mediation_first_stage_confounders = None
mediation_second_stage_confounders = None
mediators_names = identify_mediation(graph, treatment_name, outcome_name)
logger.info("Mediators for treatment and outcome:" + str(mediators_names))
if len(mediators_names) > 0:
mediation_estimand_expr = construct_mediation_estimand(
estimand_type,
treatment_name,
outcome_name,
mediators_names,
)
logger.debug("Identified expression = " + str(mediation_estimand_expr))
estimands_dict["mediation"] = mediation_estimand_expr
mediation_first_stage_confounders = identify_mediation_first_stage_confounders(
graph, treatment_name, outcome_name, mediators_names, backdoor_adjustment
)
mediation_second_stage_confounders = identify_mediation_second_stage_confounders(
graph, treatment_name, mediators_names, outcome_name, backdoor_adjustment
)
else:
estimands_dict["mediation"] = None
# Finally returning the estimand object
estimand = IdentifiedEstimand(
None,
treatment_variable=treatment_name,
outcome_variable=outcome_name,
estimand_type=estimand_type,
estimands=estimands_dict,
backdoor_variables=backdoor_variables_dict,
instrumental_variables=None,
frontdoor_variables=None,
mediator_variables=mediators_names,
mediation_first_stage_confounders=mediation_first_stage_confounders,
mediation_second_stage_confounders=mediation_second_stage_confounders,
default_backdoor_id=None,
)
return estimand
def identify_backdoor(
graph: CausalGraph,
treatment_name: List[str],
outcome_name: str,
backdoor_adjustment: BackdoorAdjustment,
include_unobserved: bool = False,
dseparation_algo: str = "default",
direct_effect: bool = False,
):
backdoor_sets = []
backdoor_paths = None
bdoor_graph = None
if dseparation_algo == "naive":
backdoor_paths = graph.get_backdoor_paths(treatment_name, outcome_name)
elif dseparation_algo == "default":
bdoor_graph = graph.do_surgery(
treatment_name,
target_node_names=outcome_name,
remove_outgoing_edges=True,
remove_only_direct_edges_to_target=direct_effect,
)
else:
raise ValueError(f"d-separation algorithm {dseparation_algo} is not supported")
backdoor_adjustment = (
backdoor_adjustment if backdoor_adjustment != BackdoorAdjustment.BACKDOOR_DEFAULT else DEFAULT_BACKDOOR_METHOD
)
# First, checking if empty set is a valid backdoor set
empty_set = set()
check = graph.check_valid_backdoor_set(
treatment_name,
outcome_name,
empty_set,
backdoor_paths=backdoor_paths,
new_graph=bdoor_graph,
dseparation_algo=dseparation_algo,
)
if check["is_dseparated"]:
backdoor_sets.append({"backdoor_set": empty_set})
# If the method is `minimal-adjustment`, return the empty set right away.
if backdoor_adjustment == BackdoorAdjustment.BACKDOOR_MIN:
return backdoor_sets
# Second, checking for all other sets of variables. If include_unobserved is false, then only observed variables are eligible.
eligible_variables = (
graph.get_all_nodes(include_unobserved=include_unobserved) - set(treatment_name) - set(outcome_name)
)
if direct_effect:
# only remove descendants of Y
# also allow any causes of Y that are not caused by T (for lower variance)
eligible_variables -= graph.get_descendants(outcome_name)
else:
# remove descendants of T (mediators) and descendants of Y
eligible_variables -= graph.get_descendants(treatment_name)
    # If var is d-separated from both treatment and outcome, it cannot
    # be part of the backdoor set
filt_eligible_variables = set()
for var in eligible_variables:
dsep_treat_var = graph.check_dseparation(treatment_name, parse_state(var), set())
dsep_outcome_var = graph.check_dseparation(outcome_name, parse_state(var), set())
if not dsep_outcome_var or not dsep_treat_var:
filt_eligible_variables.add(var)
if backdoor_adjustment in METHOD_NAMES:
backdoor_sets, found_valid_adjustment_set = find_valid_adjustment_sets(
graph,
treatment_name,
outcome_name,
backdoor_paths,
bdoor_graph,
dseparation_algo,
backdoor_sets,
filt_eligible_variables,
backdoor_adjustment=backdoor_adjustment,
max_iterations=MAX_BACKDOOR_ITERATIONS,
)
if backdoor_adjustment == BackdoorAdjustment.BACKDOOR_DEFAULT and found_valid_adjustment_set:
# repeat the above search with BACKDOOR_MIN
backdoor_sets, _ = find_valid_adjustment_sets(
graph,
treatment_name,
outcome_name,
backdoor_paths,
bdoor_graph,
dseparation_algo,
backdoor_sets,
filt_eligible_variables,
backdoor_adjustment=BackdoorAdjustment.BACKDOOR_MIN,
max_iterations=MAX_BACKDOOR_ITERATIONS,
)
else:
raise ValueError(
f"Identifier method {backdoor_adjustment} not supported. Try one of the following: {METHOD_NAMES}"
)
return backdoor_sets
def identify_efficient_backdoor(
graph: CausalGraph,
backdoor_adjustment: BackdoorAdjustment,
costs: List,
conditional_node_names: List[str] = None,
):
"""Method implementing algorithms to compute efficient backdoor sets, as
described in Rotnitzky and Smucler (2020), Smucler, Sapienza and Rotnitzky (2021)
and Smucler and Rotnitzky (2022).
For backdoor_adjustment='efficient-adjustment', computes an optimal backdoor set,
that is, a backdoor set comprised of observable variables that yields non-parametric
estimators of the interventional mean with the smallest asymptotic variance
among those that are based on observable backdoor sets. This optimal backdoor
set always exists when no variables are latent, and the algorithm is guaranteed to compute
it in this case. Under a non-parametric graphical model with latent variables,
such a backdoor set can fail to exist. When certain sufficient conditions under which it is
known that such a backdoor set exists are not satisfied, an error is raised.
For backdoor_adjustment='efficient-minimal-adjustment', computes an optimal minimal backdoor set,
that is, a minimal backdoor set comprised of observable variables that yields non-parametric
estimators of the interventional mean with the smallest asymptotic variance
among those that are based on observable minimal backdoor sets.
For backdoor_adjustment='efficient-mincost-adjustment', computes an optimal minimum cost backdoor set,
that is, a minimum cost backdoor set comprised of observable variables that yields non-parametric
estimators of the interventional mean with the smallest asymptotic variance
among those that are based on observable minimum cost backdoor sets. The cost
of a backdoor set is defined as the sum of the costs of the variables that comprise it.
The various optimal backdoor sets computed by this method are not only optimal under
non-parametric graphical models and non-parametric estimators of interventional mean,
but also under linear graphical models and OLS estimators, per results in Henckel, Perkovic
and Maathuis (2020).
:param costs: a list with non-negative costs associated with variables in the graph. Only used
        for estimand_type='non-parametric-ate' and backdoor_adjustment='efficient-mincost-adjustment'. If
        no costs are provided by the user, and backdoor_adjustment='efficient-mincost-adjustment', costs
are assumed to be equal to one for all variables in the graph. The structure of the list should
be of the form [(node, {"cost": x}) for node in nodes].
:param conditional_node_names: variables that are used to determine treatment. If none are
provided, it is assumed that the intervention sets the treatment to a constant.
:returns: backdoor_sets, a list of dictionaries, with each dictionary
having as values a backdoor set.
"""
    if costs is None and backdoor_adjustment == BackdoorAdjustment.BACKDOOR_MINCOST_EFFICIENT:
logger.warning("No costs were passed, so they will be assumed to be constant and equal to 1.")
efficient_bd = EfficientBackdoor(
graph=graph,
conditional_node_names=conditional_node_names,
costs=costs,
)
if backdoor_adjustment == BackdoorAdjustment.BACKDOOR_EFFICIENT:
backdoor_set = efficient_bd.optimal_adj_set()
backdoor_sets = [{"backdoor_set": tuple(backdoor_set)}]
elif backdoor_adjustment == BackdoorAdjustment.BACKDOOR_MIN_EFFICIENT:
backdoor_set = efficient_bd.optimal_minimal_adj_set()
backdoor_sets = [{"backdoor_set": tuple(backdoor_set)}]
elif backdoor_adjustment == BackdoorAdjustment.BACKDOOR_MINCOST_EFFICIENT:
backdoor_set = efficient_bd.optimal_mincost_adj_set()
backdoor_sets = [{"backdoor_set": tuple(backdoor_set)}]
return backdoor_sets
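# Illustrative sketch (hypothetical node name "L"): when treatment assignment
# depends on other variables, those variables can be passed via
# `conditional_node_names` so the efficient adjustment sets account for a
# conditional (dynamic) intervention.
#
#     backdoor_sets = identify_efficient_backdoor(
#         graph,
#         BackdoorAdjustment.BACKDOOR_EFFICIENT,
#         costs=None,
#         conditional_node_names=["L"],
#     )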
def find_valid_adjustment_sets(
graph: CausalGraph,
treatment_name: List,
outcome_name: List,
backdoor_paths: List,
bdoor_graph: CausalGraph,
dseparation_algo: str,
backdoor_sets: List,
filt_eligible_variables: List,
backdoor_adjustment: BackdoorAdjustment,
max_iterations: int,
):
num_iterations = 0
found_valid_adjustment_set = False
all_nodes_observed = graph.all_observed(graph.get_all_nodes())
# If `minimal-adjustment` method is specified, start the search from the set with minimum size. Otherwise, start from the largest.
set_sizes = (
range(1, len(filt_eligible_variables) + 1, 1)
if backdoor_adjustment == BackdoorAdjustment.BACKDOOR_MIN
else range(len(filt_eligible_variables), 0, -1)
)
for size_candidate_set in set_sizes:
for candidate_set in itertools.combinations(filt_eligible_variables, size_candidate_set):
check = graph.check_valid_backdoor_set(
treatment_name,
outcome_name,
candidate_set,
backdoor_paths=backdoor_paths,
new_graph=bdoor_graph,
dseparation_algo=dseparation_algo,
)
logger.debug(
"Candidate backdoor set: {0}, is_dseparated: {1}".format(candidate_set, check["is_dseparated"])
)
if check["is_dseparated"]:
backdoor_sets.append({"backdoor_set": candidate_set})
found_valid_adjustment_set = True
num_iterations += 1
if backdoor_adjustment == BackdoorAdjustment.BACKDOOR_EXHAUSTIVE and num_iterations > max_iterations:
logger.warning(f"Max number of iterations {max_iterations} reached.")
break
# If the backdoor method is `maximal-adjustment` or `minimal-adjustment`, return the first found adjustment set.
if (
backdoor_adjustment
in {
BackdoorAdjustment.BACKDOOR_DEFAULT,
BackdoorAdjustment.BACKDOOR_MAX,
BackdoorAdjustment.BACKDOOR_MIN,
}
and found_valid_adjustment_set
):
break
# If all variables are observed, and the biggest eligible set
# does not satisfy backdoor, then none of its subsets will.
if (
backdoor_adjustment in {BackdoorAdjustment.BACKDOOR_DEFAULT, BackdoorAdjustment.BACKDOOR_MAX}
and all_nodes_observed
):
break
if num_iterations > max_iterations:
logger.warning(f"Max number of iterations {max_iterations} reached. Could not find a valid backdoor set.")
break
return backdoor_sets, found_valid_adjustment_set
def get_default_backdoor_set_id(
graph: CausalGraph, treatment_name: List[str], outcome_name: List[str], backdoor_sets_dict: Dict
):
# Adding a None estimand if no backdoor set found
if len(backdoor_sets_dict) == 0:
return None
    # The default set contains the minimum possible number of instrumental variables, since conditioning on instruments reduces the variance of the treatment variable and thereby inflates the variance of the effect estimate.
instrument_names = set(graph.get_instruments(treatment_name, outcome_name))
iv_count_dict = {
key: len(set(bdoor_set).intersection(instrument_names)) for key, bdoor_set in backdoor_sets_dict.items()
}
min_iv_count = min(iv_count_dict.values())
min_iv_keys = {key for key, iv_count in iv_count_dict.items() if iv_count == min_iv_count}
min_iv_backdoor_sets_dict = {key: backdoor_sets_dict[key] for key in min_iv_keys}
# Default set is the one with the least number of adjustment variables (optimizing for efficiency)
min_set_length = 1000000
default_key = None
for key, bdoor_set in min_iv_backdoor_sets_dict.items():
if len(bdoor_set) < min_set_length:
min_set_length = len(bdoor_set)
default_key = key
return default_key
def build_backdoor_estimands_dict(
graph: CausalGraph,
treatment_name: List[str],
outcome_name: List[str],
backdoor_sets: List[str],
estimands_dict: Dict,
):
"""Build the final dict for backdoor sets by filtering unobserved variables if needed."""
backdoor_variables_dict = {}
is_identified = [graph.all_observed(bset["backdoor_set"]) for bset in backdoor_sets]
if any(is_identified):
logger.info("Causal effect can be identified.")
backdoor_sets_arr = [
list(bset["backdoor_set"]) for bset in backdoor_sets if graph.all_observed(bset["backdoor_set"])
]
else: # there is unobserved confounding
logger.warning("Backdoor identification failed.")
backdoor_sets_arr = []
for i in range(len(backdoor_sets_arr)):
backdoor_estimand_expr = construct_backdoor_estimand(treatment_name, outcome_name, backdoor_sets_arr[i])
logger.debug("Identified expression = " + str(backdoor_estimand_expr))
estimands_dict["backdoor" + str(i + 1)] = backdoor_estimand_expr
backdoor_variables_dict["backdoor" + str(i + 1)] = backdoor_sets_arr[i]
return estimands_dict, backdoor_variables_dict
def identify_frontdoor(
graph: CausalGraph, treatment_name: List[str], outcome_name: List[str], dseparation_algo: str = "default"
):
"""Find a valid frontdoor variable if it exists.
Currently only supports a single variable frontdoor set.
"""
frontdoor_var = None
frontdoor_paths = None
fdoor_graph = None
if dseparation_algo == "default":
cond1_graph = graph.do_surgery(treatment_name, remove_incoming_edges=True)
bdoor_graph1 = graph.do_surgery(treatment_name, remove_outgoing_edges=True)
elif dseparation_algo == "naive":
frontdoor_paths = graph.get_all_directed_paths(treatment_name, outcome_name)
else:
raise ValueError(f"d-separation algorithm {dseparation_algo} is not supported")
eligible_variables = (
graph.get_descendants(treatment_name) - set(outcome_name) - set(graph.get_descendants(outcome_name))
)
# For simplicity, assuming a one-variable frontdoor set
for candidate_var in eligible_variables:
# Cond 1: All directed paths intercepted by candidate_var
cond1 = graph.check_valid_frontdoor_set(
treatment_name,
outcome_name,
parse_state(candidate_var),
frontdoor_paths=frontdoor_paths,
new_graph=cond1_graph,
dseparation_algo=dseparation_algo,
)
logger.debug("Candidate frontdoor set: {0}, is_dseparated: {1}".format(candidate_var, cond1))
if not cond1:
continue
# Cond 2: No confounding between treatment and candidate var
cond2 = graph.check_valid_backdoor_set(
treatment_name,
parse_state(candidate_var),
set(),
backdoor_paths=None,
new_graph=bdoor_graph1,
dseparation_algo=dseparation_algo,
)
if not cond2:
continue
# Cond 3: treatment blocks all confounding between candidate_var and outcome
bdoor_graph2 = graph.do_surgery(candidate_var, remove_outgoing_edges=True)
cond3 = graph.check_valid_backdoor_set(
parse_state(candidate_var),
outcome_name,
treatment_name,
backdoor_paths=None,
new_graph=bdoor_graph2,
dseparation_algo=dseparation_algo,
)
is_valid_frontdoor = cond1 and cond2 and cond3
if is_valid_frontdoor:
frontdoor_var = candidate_var
break
return parse_state(frontdoor_var)
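# Illustrative sketch (editor's addition, not part of the original module): the three frontdoor
# conditions checked above, reproduced directly with networkx on a toy graph where a latent U
# confounds v0 and y, and M mediates v0 -> y. Node names are assumptions chosen for illustration.
def _example_frontdoor_conditions():
    import networkx as nx

    g = nx.DiGraph([("U", "v0"), ("U", "y"), ("v0", "M"), ("M", "y")])
    # Cond 1: with edges into v0 removed, {M} d-separates v0 and y.
    g1 = g.copy()
    g1.remove_edges_from(list(g.in_edges("v0")))
    cond1 = nx.d_separated(g1, {"v0"}, {"y"}, {"M"})
    # Cond 2: with edges out of v0 removed, v0 and M are d-separated given the empty set.
    g2 = g.copy()
    g2.remove_edges_from(list(g.out_edges("v0")))
    cond2 = nx.d_separated(g2, {"v0"}, {"M"}, set())
    # Cond 3: with edges out of M removed, {v0} d-separates M and y.
    g3 = g.copy()
    g3.remove_edges_from(list(g.out_edges("M")))
    cond3 = nx.d_separated(g3, {"M"}, {"y"}, {"v0"})
    return cond1 and cond2 and cond3  # True for this graph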
def identify_mediation(graph: CausalGraph, treatment_name: List[str], outcome_name: List[str]):
"""Find a valid mediator if it exists.
Currently only supports a single variable mediator set.
"""
mediation_var = None
mediation_paths = graph.get_all_directed_paths(treatment_name, outcome_name)
eligible_variables = graph.get_descendants(treatment_name) - set(outcome_name)
# For simplicity, assuming a one-variable mediation set
for candidate_var in eligible_variables:
is_valid_mediation = graph.check_valid_mediation_set(
treatment_name,
outcome_name,
parse_state(candidate_var),
mediation_paths=mediation_paths,
)
logger.debug("Candidate mediation set: {0}, on_mediating_path: {1}".format(candidate_var, is_valid_mediation))
if is_valid_mediation:
mediation_var = candidate_var
break
return parse_state(mediation_var)
def identify_mediation_first_stage_confounders(
graph: CausalGraph,
treatment_name: List[str],
outcome_name: List[str],
mediators_names: List[str],
backdoor_adjustment: BackdoorAdjustment,
):
# Create estimands dict as per the API for backdoor, but do not return it
estimands_dict = {}
backdoor_sets = identify_backdoor(graph, treatment_name, mediators_names, backdoor_adjustment)
estimands_dict, backdoor_variables_dict = build_backdoor_estimands_dict(
graph,
treatment_name,
mediators_names,
backdoor_sets,
estimands_dict,
)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = get_default_backdoor_set_id(graph, treatment_name, outcome_name, backdoor_variables_dict)
estimands_dict["backdoor"] = estimands_dict.get(str(default_backdoor_id), None)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(str(default_backdoor_id), None)
return backdoor_variables_dict
def identify_mediation_second_stage_confounders(
graph: CausalGraph,
treatment_name: List[str],
mediators_names: List[str],
outcome_name: List[str],
backdoor_adjustment: BackdoorAdjustment,
):
# Create estimands dict as per the API for backdoor, but do not return it
estimands_dict = {}
backdoor_sets = identify_backdoor(graph, mediators_names, outcome_name, backdoor_adjustment)
estimands_dict, backdoor_variables_dict = build_backdoor_estimands_dict(
graph,
mediators_names,
outcome_name,
backdoor_sets,
estimands_dict,
)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = get_default_backdoor_set_id(graph, treatment_name, outcome_name, backdoor_variables_dict)
estimands_dict["backdoor"] = estimands_dict.get(str(default_backdoor_id), None)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(str(default_backdoor_id), None)
return backdoor_variables_dict
def construct_backdoor_estimand(treatment_name: List[str], outcome_name: List[str], common_causes: List[str]):
# TODO: outputs string for now, but ideally should do symbolic
# expressions Mon 19 Feb 2018 04:54:17 PM DST
# TODO Better support for multivariate treatments
expr = None
outcome_name = outcome_name[0]
num_expr_str = outcome_name
if len(common_causes) > 0:
num_expr_str += "|" + ",".join(common_causes)
expr = "d(" + num_expr_str + ")/d" + ",".join(treatment_name)
sym_mu = sp.Symbol("mu")
sym_sigma = sp.Symbol("sigma", positive=True)
sym_outcome = spstats.Normal(num_expr_str, sym_mu, sym_sigma)
sym_treatment_symbols = [sp.Symbol(t) for t in treatment_name]
sym_treatment = sp.Array(sym_treatment_symbols)
sym_conditional_outcome = spstats.Expectation(sym_outcome)
sym_effect = sp.Derivative(sym_conditional_outcome, sym_treatment)
sym_assumptions = {
"Unconfoundedness": (
"If U\N{RIGHTWARDS ARROW}{{{0}}} and U\N{RIGHTWARDS ARROW}{1}" " then P({1}|{0},{2},U) = P({1}|{0},{2})"
).format(",".join(treatment_name), outcome_name, ",".join(common_causes))
}
estimand = {"estimand": sym_effect, "assumptions": sym_assumptions}
return estimand
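# Illustrative sketch (editor's addition, not part of the original module): building the same
# kind of symbolic expression that construct_backdoor_estimand returns, for a hypothetical
# treatment "v0", outcome "y" and common causes W0, W1. The variable names are assumptions
# chosen for illustration only.
def _example_backdoor_expression():
    import sympy as sp
    import sympy.stats as spstats

    sym_outcome = spstats.Normal("y|W0,W1", sp.Symbol("mu"), sp.Symbol("sigma", positive=True))
    sym_treatment = sp.Array([sp.Symbol("v0")])
    # d(E[y|W0,W1]) / d[v0], the backdoor-adjusted effect expression
    return sp.Derivative(spstats.Expectation(sym_outcome), sym_treatment)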
def construct_iv_estimand(treatment_name: List[str], outcome_name: List[str], instrument_names: List[str]):
# TODO: support multivariate treatments better.
expr = None
outcome_name = outcome_name[0]
sym_outcome = spstats.Normal(outcome_name, 0, 1)
sym_treatment_symbols = [spstats.Normal(t, 0, 1) for t in treatment_name]
sym_treatment = sp.Array(sym_treatment_symbols)
sym_instrument_symbols = [sp.Symbol(inst) for inst in instrument_names]
sym_instrument = sp.Array(sym_instrument_symbols) # ",".join(instrument_names))
sym_outcome_derivative = sp.Derivative(sym_outcome, sym_instrument)
sym_treatment_derivative = sp.Derivative(sym_treatment, sym_instrument)
sym_effect = spstats.Expectation(sym_outcome_derivative / sym_treatment_derivative)
sym_assumptions = {
"As-if-random": (
"If U\N{RIGHTWARDS ARROW}\N{RIGHTWARDS ARROW}{0} then "
"\N{NOT SIGN}(U \N{RIGHTWARDS ARROW}\N{RIGHTWARDS ARROW}{{{1}}})"
).format(outcome_name, ",".join(instrument_names)),
"Exclusion": (
"If we remove {{{0}}}\N{RIGHTWARDS ARROW}{{{1}}}, then " "\N{NOT SIGN}({{{0}}}\N{RIGHTWARDS ARROW}{2})"
).format(",".join(instrument_names), ",".join(treatment_name), outcome_name),
}
estimand = {"estimand": sym_effect, "assumptions": sym_assumptions}
return estimand
def construct_frontdoor_estimand(
treatment_name: List[str], outcome_name: List[str], frontdoor_variables_names: List[str]
):
# TODO: support multivariate treatments better.
expr = None
outcome_name = outcome_name[0]
sym_outcome = spstats.Normal(outcome_name, 0, 1)
sym_treatment_symbols = [spstats.Normal(t, 0, 1) for t in treatment_name]
sym_treatment = sp.Array(sym_treatment_symbols)
sym_frontdoor_symbols = [sp.Symbol(inst) for inst in frontdoor_variables_names]
sym_frontdoor = sp.Array(sym_frontdoor_symbols) # ",".join(instrument_names))
sym_outcome_derivative = sp.Derivative(sym_outcome, sym_frontdoor)
sym_treatment_derivative = sp.Derivative(sym_frontdoor, sym_treatment)
sym_effect = spstats.Expectation(sym_treatment_derivative * sym_outcome_derivative)
sym_assumptions = {
"Full-mediation": ("{2} intercepts (blocks) all directed paths from {0} to {1}.").format(
",".join(treatment_name),
",".join(outcome_name),
",".join(frontdoor_variables_names),
),
"First-stage-unconfoundedness": (
"If U\N{RIGHTWARDS ARROW}{{{0}}} and U\N{RIGHTWARDS ARROW}{{{1}}}" " then P({1}|{0},U) = P({1}|{0})"
).format(",".join(treatment_name), ",".join(frontdoor_variables_names)),
"Second-stage-unconfoundedness": (
"If U\N{RIGHTWARDS ARROW}{{{2}}} and U\N{RIGHTWARDS ARROW}{1}" " then P({1}|{2}, {0}, U) = P({1}|{2}, {0})"
).format(
",".join(treatment_name),
outcome_name,
",".join(frontdoor_variables_names),
),
}
estimand = {"estimand": sym_effect, "assumptions": sym_assumptions}
return estimand
def construct_mediation_estimand(
estimand_type: EstimandType, treatment_name: List[str], outcome_name: List[str], mediators_names: List[str]
):
# TODO: support multivariate treatments better.
expr = None
if estimand_type in (
EstimandType.NONPARAMETRIC_NDE,
EstimandType.NONPARAMETRIC_NIE,
):
outcome_name = outcome_name[0]
sym_outcome = spstats.Normal(outcome_name, 0, 1)
sym_treatment_symbols = [spstats.Normal(t, 0, 1) for t in treatment_name]
sym_treatment = sp.Array(sym_treatment_symbols)
sym_mediators_symbols = [sp.Symbol(inst) for inst in mediators_names]
sym_mediators = sp.Array(sym_mediators_symbols)
sym_outcome_derivative = sp.Derivative(sym_outcome, sym_mediators)
sym_treatment_derivative = sp.Derivative(sym_mediators, sym_treatment)
# For direct effect
num_expr_str = outcome_name
if len(mediators_names) > 0:
num_expr_str += "|" + ",".join(mediators_names)
sym_mu = sp.Symbol("mu")
sym_sigma = sp.Symbol("sigma", positive=True)
sym_conditional_outcome = spstats.Normal(num_expr_str, sym_mu, sym_sigma)
sym_directeffect_derivative = sp.Derivative(sym_conditional_outcome, sym_treatment)
if estimand_type == EstimandType.NONPARAMETRIC_NIE:
sym_effect = spstats.Expectation(sym_treatment_derivative * sym_outcome_derivative)
elif estimand_type == EstimandType.NONPARAMETRIC_NDE:
sym_effect = spstats.Expectation(sym_directeffect_derivative)
sym_assumptions = {
"Mediation": (
"{2} intercepts (blocks) all directed paths from {0} to {1} except the path {{{0}}}\N{RIGHTWARDS ARROW}{{{1}}}."
).format(
",".join(treatment_name),
",".join(outcome_name),
",".join(mediators_names),
),
"First-stage-unconfoundedness": (
"If U\N{RIGHTWARDS ARROW}{{{0}}} and U\N{RIGHTWARDS ARROW}{{{1}}}" " then P({1}|{0},U) = P({1}|{0})"
).format(",".join(treatment_name), ",".join(mediators_names)),
"Second-stage-unconfoundedness": (
"If U\N{RIGHTWARDS ARROW}{{{2}}} and U\N{RIGHTWARDS ARROW}{1}"
" then P({1}|{2}, {0}, U) = P({1}|{2}, {0})"
).format(",".join(treatment_name), outcome_name, ",".join(mediators_names)),
}
else:
raise ValueError(
"Estimand type not supported. Supported estimand types are {0} or {1}'.".format(
EstimandType.NONPARAMETRIC_NDE,
EstimandType.NONPARAMETRIC_NIE,
)
)
estimand = {"estimand": sym_effect, "assumptions": sym_assumptions}
return estimand
| andresmor-ms | 133e7b9a4ed32aae8ab5f39a01eb02b3a4d1c0ba | e1652ec3c6606b1bb2dfe91ef830e4b4b566712d | does it return None otherwise? If yes, good to add to the docstring on line 141 | amit-sharma | 300 |
py-why/dowhy | 672 | Functional api/refute estimate | * Refactor refuters into functions
* Rename functional_api notebook for clarity
* Add return types to identify_estimate
* Update `__init__.py` for imports
* Add joblib for bootstrap refuter
* Create `refute_estimate` function
* Add types for refuter parameters & return types | null | 2022-10-04 16:18:49+00:00 | 2022-10-07 04:30:22+00:00 | dowhy/causal_identifier/auto_identifier.py | import itertools
import logging
from enum import Enum
from typing import Dict, List, Optional, Union
import sympy as sp
import sympy.stats as spstats
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.efficient_backdoor import EfficientBackdoor
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class EstimandType(Enum):
# Average total effect
NONPARAMETRIC_ATE = "nonparametric-ate"
# Natural direct effect
NONPARAMETRIC_NDE = "nonparametric-nde"
# Natural indirect effect
NONPARAMETRIC_NIE = "nonparametric-nie"
# Controlled direct effect
NONPARAMETRIC_CDE = "nonparametric-cde"
class BackdoorAdjustment(Enum):
# Backdoor method names
BACKDOOR_DEFAULT = "default"
BACKDOOR_EXHAUSTIVE = "exhaustive-search"
BACKDOOR_MIN = "minimal-adjustment"
BACKDOOR_MAX = "maximal-adjustment"
BACKDOOR_EFFICIENT = "efficient-adjustment"
BACKDOOR_MIN_EFFICIENT = "efficient-minimal-adjustment"
BACKDOOR_MINCOST_EFFICIENT = "efficient-mincost-adjustment"
MAX_BACKDOOR_ITERATIONS = 100000
METHOD_NAMES = {
BackdoorAdjustment.BACKDOOR_DEFAULT,
BackdoorAdjustment.BACKDOOR_EXHAUSTIVE,
BackdoorAdjustment.BACKDOOR_MIN,
BackdoorAdjustment.BACKDOOR_MAX,
BackdoorAdjustment.BACKDOOR_EFFICIENT,
BackdoorAdjustment.BACKDOOR_MIN_EFFICIENT,
BackdoorAdjustment.BACKDOOR_MINCOST_EFFICIENT,
}
EFFICIENT_METHODS = {
BackdoorAdjustment.BACKDOOR_EFFICIENT,
BackdoorAdjustment.BACKDOOR_MIN_EFFICIENT,
BackdoorAdjustment.BACKDOOR_MINCOST_EFFICIENT,
}
DEFAULT_BACKDOOR_METHOD = BackdoorAdjustment.BACKDOOR_DEFAULT
class AutoIdentifier:
"""Class that implements different identification methods.
Currently supports backdoor and instrumental variable identification methods. The identification is based on the causal graph provided.
This class is for backwards compatibility with CausalModel
Will be deprecated in the future in favor of function call auto_identify_effect()
"""
def __init__(
self,
estimand_type: EstimandType,
backdoor_adjustment: BackdoorAdjustment = BackdoorAdjustment.BACKDOOR_DEFAULT,
proceed_when_unidentifiable: bool = False,
optimize_backdoor: bool = False,
costs: Optional[List] = None,
):
self.estimand_type = estimand_type
self.backdoor_adjustment = backdoor_adjustment
self._proceed_when_unidentifiable = proceed_when_unidentifiable
self.optimize_backdoor = optimize_backdoor
self.costs = costs
self.logger = logging.getLogger(__name__)
def identify_effect(
self,
graph: CausalGraph,
treatment_name: Union[str, List[str]],
outcome_name: Union[str, List[str]],
conditional_node_names: List[str] = None,
**kwargs,
):
estimand = auto_identify_effect(
graph,
treatment_name,
outcome_name,
self.estimand_type,
conditional_node_names,
self.backdoor_adjustment,
self._proceed_when_unidentifiable,
self.optimize_backdoor,
self.costs,
**kwargs,
)
estimand.identifier = self
return estimand
def identify_backdoor(
self,
graph: CausalGraph,
treatment_name: List[str],
outcome_name: str,
include_unobserved: bool = False,
dseparation_algo: str = "default",
direct_effect: bool = False,
):
return identify_backdoor(
graph,
treatment_name,
outcome_name,
self.backdoor_adjustment,
include_unobserved,
dseparation_algo,
direct_effect,
)
def auto_identify_effect(
graph: CausalGraph,
treatment_name: Union[str, List[str]],
outcome_name: Union[str, List[str]],
estimand_type: EstimandType,
conditional_node_names: List[str] = None,
backdoor_adjustment: BackdoorAdjustment = BackdoorAdjustment.BACKDOOR_DEFAULT,
proceed_when_unidentifiable: bool = False,
optimize_backdoor: bool = False,
costs: Optional[List] = None,
**kwargs,
):
"""Main method that returns an identified estimand (if one exists).
If estimand_type is non-parametric ATE, then uses backdoor, instrumental variable and frontdoor identification methods, to check if an identified estimand exists, based on the causal graph.
:param optimize_backdoor: if True, uses an optimised algorithm to compute the backdoor sets
:param costs: non-negative costs associated with variables in the graph. Only used
for estimand_type='non-parametric-ate' and backdoor_adjustment='efficient-mincost-adjustment'. If
no costs are provided by the user, and backdoor_adjustment='efficient-mincost-adjustment', costs
are assumed to be equal to one for all variables in the graph.
:param conditional_node_names: variables that are used to determine treatment. If none are
provided, it is assumed that the intervention is static.
:returns: target estimand, an instance of the IdentifiedEstimand class
"""
treatment_name = parse_state(treatment_name)
outcome_name = parse_state(outcome_name)
# First, check if there is a directed path from action to outcome
if not graph.has_directed_path(treatment_name, outcome_name):
logger.warn("No directed path from treatment to outcome. Causal Effect is zero.")
return IdentifiedEstimand(
None,
treatment_variable=treatment_name,
outcome_variable=outcome_name,
no_directed_path=True,
)
if estimand_type == EstimandType.NONPARAMETRIC_ATE:
return identify_ate_effect(
graph,
treatment_name,
outcome_name,
backdoor_adjustment,
optimize_backdoor,
estimand_type,
costs,
conditional_node_names,
proceed_when_unidentifiable,
)
elif estimand_type == EstimandType.NONPARAMETRIC_NDE:
return identify_nde_effect(
graph, treatment_name, outcome_name, backdoor_adjustment, estimand_type, proceed_when_unidentifiable
)
elif estimand_type == EstimandType.NONPARAMETRIC_NIE:
return identify_nie_effect(
graph, treatment_name, outcome_name, backdoor_adjustment, estimand_type, proceed_when_unidentifiable
)
elif estimand_type == EstimandType.NONPARAMETRIC_CDE:
return identify_cde_effect(
graph, treatment_name, outcome_name, backdoor_adjustment, estimand_type, proceed_when_unidentifiable
)
else:
raise ValueError(
"Estimand type is not supported. Use either {0}, {1}, or {2}.".format(
EstimandType.NONPARAMETRIC_ATE,
EstimandType.NONPARAMETRIC_CDE,
EstimandType.NONPARAMETRIC_NDE,
EstimandType.NONPARAMETRIC_NIE,
)
)
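# Illustrative sketch (editor's addition, not part of the original module): a minimal call to
# auto_identify_effect. The DOT string and the CausalGraph constructor arguments below are
# assumptions for illustration; consult dowhy.causal_graph.CausalGraph for the exact signature.
def _example_auto_identify_effect():
    graph = CausalGraph(
        treatment_name=["v0"],
        outcome_name=["y"],
        graph="digraph { W0 -> v0; W0 -> y; v0 -> y; }",
        observed_node_names=["v0", "y", "W0"],
    )
    return auto_identify_effect(graph, "v0", "y", EstimandType.NONPARAMETRIC_ATE)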
def identify_ate_effect(
graph: CausalGraph,
treatment_name: List[str],
outcome_name: str,
backdoor_adjustment: BackdoorAdjustment,
optimize_backdoor: bool,
estimand_type: EstimandType,
costs: List,
conditional_node_names: List[str] = None,
proceed_when_unidentifiable: bool = False,
):
estimands_dict = {}
mediation_first_stage_confounders = None
mediation_second_stage_confounders = None
### 1. BACKDOOR IDENTIFICATION
# Pick algorithm to compute backdoor sets according to method chosen
if backdoor_adjustment not in EFFICIENT_METHODS:
# First, checking if there are any valid backdoor adjustment sets
if optimize_backdoor == False:
backdoor_sets = identify_backdoor(graph, treatment_name, outcome_name, backdoor_adjustment)
else:
from dowhy.causal_identifier.backdoor import Backdoor
path = Backdoor(graph._graph, treatment_name, outcome_name)
backdoor_sets = path.get_backdoor_vars()
elif backdoor_adjustment in EFFICIENT_METHODS:
backdoor_sets = identify_efficient_backdoor(
graph, backdoor_adjustment, costs, conditional_node_names=conditional_node_names
)
estimands_dict, backdoor_variables_dict = build_backdoor_estimands_dict(
graph, treatment_name, outcome_name, backdoor_sets, estimands_dict
)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = get_default_backdoor_set_id(graph, treatment_name, outcome_name, backdoor_variables_dict)
if len(backdoor_variables_dict) > 0:
estimands_dict["backdoor"] = estimands_dict.get(str(default_backdoor_id), None)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(str(default_backdoor_id), None)
else:
estimands_dict["backdoor"] = None
### 2. INSTRUMENTAL VARIABLE IDENTIFICATION
# Now checking if there is also a valid iv estimand
instrument_names = graph.get_instruments(treatment_name, outcome_name)
logger.info("Instrumental variables for treatment and outcome:" + str(instrument_names))
if len(instrument_names) > 0:
iv_estimand_expr = construct_iv_estimand(
treatment_name,
outcome_name,
instrument_names,
)
logger.debug("Identified expression = " + str(iv_estimand_expr))
estimands_dict["iv"] = iv_estimand_expr
else:
estimands_dict["iv"] = None
### 3. FRONTDOOR IDENTIFICATION
# Now checking if there is a valid frontdoor variable
frontdoor_variables_names = identify_frontdoor(graph, treatment_name, outcome_name)
logger.info("Frontdoor variables for treatment and outcome:" + str(frontdoor_variables_names))
if len(frontdoor_variables_names) > 0:
frontdoor_estimand_expr = construct_frontdoor_estimand(
treatment_name,
outcome_name,
frontdoor_variables_names,
)
logger.debug("Identified expression = " + str(frontdoor_estimand_expr))
estimands_dict["frontdoor"] = frontdoor_estimand_expr
mediation_first_stage_confounders = identify_mediation_first_stage_confounders(
graph, treatment_name, outcome_name, frontdoor_variables_names, backdoor_adjustment
)
mediation_second_stage_confounders = identify_mediation_second_stage_confounders(
graph, treatment_name, frontdoor_variables_names, outcome_name, backdoor_adjustment
)
else:
estimands_dict["frontdoor"] = None
# Finally returning the estimand object
estimand = IdentifiedEstimand(
None,
treatment_variable=treatment_name,
outcome_variable=outcome_name,
estimand_type=estimand_type,
estimands=estimands_dict,
backdoor_variables=backdoor_variables_dict,
instrumental_variables=instrument_names,
frontdoor_variables=frontdoor_variables_names,
mediation_first_stage_confounders=mediation_first_stage_confounders,
mediation_second_stage_confounders=mediation_second_stage_confounders,
default_backdoor_id=default_backdoor_id,
)
return estimand
def identify_cde_effect(
graph: CausalGraph,
treatment_name: List[str],
outcome_name: str,
backdoor_adjustment: BackdoorAdjustment,
estimand_type: EstimandType,
proceed_when_unidentifiable: bool = False,
):
"""Identify controlled direct effect. For a definition, see Vanderwheele (2011).
Controlled direct and mediated effects: definition, identification and bounds.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4193506/
Using do-calculus rules, identification yields an adjustment set.
It is based on the principle that under a graph where the direct edge from treatment
to outcome is removed, conditioning on the adjustment set should d-separate
treatment and outcome.
"""
estimands_dict = {}
# Pick algorithm to compute backdoor sets according to method chosen
backdoor_sets = identify_backdoor(graph, treatment_name, outcome_name, backdoor_adjustment, direct_effect=True)
estimands_dict, backdoor_variables_dict = build_backdoor_estimands_dict(
graph, treatment_name, outcome_name, backdoor_sets, estimands_dict
)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = get_default_backdoor_set_id(graph, treatment_name, outcome_name, backdoor_variables_dict)
if len(backdoor_variables_dict) > 0:
estimands_dict["backdoor"] = estimands_dict.get(str(default_backdoor_id), None)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(str(default_backdoor_id), None)
else:
estimands_dict["backdoor"] = None
# Finally returning the estimand object
estimand = IdentifiedEstimand(
None,
treatment_variable=treatment_name,
outcome_variable=outcome_name,
estimand_type=estimand_type,
estimands=estimands_dict,
backdoor_variables=backdoor_variables_dict,
instrumental_variables=None,
frontdoor_variables=None,
mediation_first_stage_confounders=None,
mediation_second_stage_confounders=None,
default_backdoor_id=default_backdoor_id,
)
return estimand
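# Illustrative sketch (editor's addition, not part of the original module): the CDE principle
# described in the docstring above, checked directly with networkx on a toy graph. Once the
# direct edge v0 -> y is removed, conditioning on {W0, M} d-separates v0 and y. Node names are
# assumptions chosen for illustration.
def _example_cde_principle():
    import networkx as nx

    toy_graph = nx.DiGraph([("v0", "y"), ("v0", "M"), ("M", "y"), ("W0", "v0"), ("W0", "y")])
    graph_without_direct_edge = toy_graph.copy()
    graph_without_direct_edge.remove_edge("v0", "y")
    return nx.d_separated(graph_without_direct_edge, {"v0"}, {"y"}, {"W0", "M"})  # True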
def identify_nie_effect(
graph: CausalGraph,
treatment_name: List[str],
outcome_name: str,
backdoor_adjustment: BackdoorAdjustment,
estimand_type: EstimandType,
proceed_when_unidentifiable: bool = False,
):
estimands_dict = {}
### 1. FIRST DOING BACKDOOR IDENTIFICATION
# First, checking if there are any valid backdoor adjustment sets
backdoor_sets = identify_backdoor(graph, treatment_name, outcome_name, backdoor_adjustment)
estimands_dict, backdoor_variables_dict = build_backdoor_estimands_dict(
graph, treatment_name, outcome_name, backdoor_sets, estimands_dict
)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = get_default_backdoor_set_id(graph, treatment_name, outcome_name, backdoor_variables_dict)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(str(default_backdoor_id), None)
### 2. SECOND, CHECKING FOR MEDIATORS
# Now checking if there are valid mediator variables
estimands_dict = {} # Need to reinitialize this dictionary to avoid including the backdoor sets
mediation_first_stage_confounders = None
mediation_second_stage_confounders = None
mediators_names = identify_mediation(graph, treatment_name, outcome_name)
logger.info("Mediators for treatment and outcome:" + str(mediators_names))
if len(mediators_names) > 0:
mediation_estimand_expr = construct_mediation_estimand(
estimand_type,
treatment_name,
outcome_name,
mediators_names,
)
logger.debug("Identified expression = " + str(mediation_estimand_expr))
estimands_dict["mediation"] = mediation_estimand_expr
mediation_first_stage_confounders = identify_mediation_first_stage_confounders(
graph, treatment_name, outcome_name, mediators_names, backdoor_adjustment
)
mediation_second_stage_confounders = identify_mediation_second_stage_confounders(
graph, treatment_name, mediators_names, outcome_name, backdoor_adjustment
)
else:
estimands_dict["mediation"] = None
# Finally returning the estimand object
estimand = IdentifiedEstimand(
None,
treatment_variable=treatment_name,
outcome_variable=outcome_name,
estimand_type=estimand_type,
estimands=estimands_dict,
backdoor_variables=backdoor_variables_dict,
instrumental_variables=None,
frontdoor_variables=None,
mediator_variables=mediators_names,
mediation_first_stage_confounders=mediation_first_stage_confounders,
mediation_second_stage_confounders=mediation_second_stage_confounders,
default_backdoor_id=None,
)
return estimand
def identify_nde_effect(
graph: CausalGraph,
treatment_name: List[str],
outcome_name: str,
backdoor_adjustment: BackdoorAdjustment,
estimand_type: EstimandType,
proceed_when_unidentifiable: bool = False,
):
estimands_dict = {}
### 1. FIRST DOING BACKDOOR IDENTIFICATION
# First, checking if there are any valid backdoor adjustment sets
backdoor_sets = identify_backdoor(graph, treatment_name, outcome_name, backdoor_adjustment)
estimands_dict, backdoor_variables_dict = build_backdoor_estimands_dict(
graph, treatment_name, outcome_name, backdoor_sets, estimands_dict
)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = get_default_backdoor_set_id(graph, treatment_name, outcome_name, backdoor_variables_dict)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(str(default_backdoor_id), None)
### 2. SECOND, CHECKING FOR MEDIATORS
# Now checking if there are valid mediator variables
estimands_dict = {}
mediation_first_stage_confounders = None
mediation_second_stage_confounders = None
mediators_names = identify_mediation(graph, treatment_name, outcome_name)
logger.info("Mediators for treatment and outcome:" + str(mediators_names))
if len(mediators_names) > 0:
mediation_estimand_expr = construct_mediation_estimand(
estimand_type,
treatment_name,
outcome_name,
mediators_names,
)
logger.debug("Identified expression = " + str(mediation_estimand_expr))
estimands_dict["mediation"] = mediation_estimand_expr
mediation_first_stage_confounders = identify_mediation_first_stage_confounders(
graph, treatment_name, outcome_name, mediators_names, backdoor_adjustment
)
mediation_second_stage_confounders = identify_mediation_second_stage_confounders(
graph, treatment_name, mediators_names, outcome_name, backdoor_adjustment
)
else:
estimands_dict["mediation"] = None
# Finally returning the estimand object
estimand = IdentifiedEstimand(
None,
treatment_variable=treatment_name,
outcome_variable=outcome_name,
estimand_type=estimand_type,
estimands=estimands_dict,
backdoor_variables=backdoor_variables_dict,
instrumental_variables=None,
frontdoor_variables=None,
mediator_variables=mediators_names,
mediation_first_stage_confounders=mediation_first_stage_confounders,
mediation_second_stage_confounders=mediation_second_stage_confounders,
default_backdoor_id=None,
)
return estimand
def identify_backdoor(
graph: CausalGraph,
treatment_name: List[str],
outcome_name: str,
backdoor_adjustment: BackdoorAdjustment,
include_unobserved: bool = False,
dseparation_algo: str = "default",
direct_effect: bool = False,
):
backdoor_sets = []
backdoor_paths = None
bdoor_graph = None
if dseparation_algo == "naive":
backdoor_paths = graph.get_backdoor_paths(treatment_name, outcome_name)
elif dseparation_algo == "default":
bdoor_graph = graph.do_surgery(
treatment_name,
target_node_names=outcome_name,
remove_outgoing_edges=True,
remove_only_direct_edges_to_target=direct_effect,
)
else:
raise ValueError(f"d-separation algorithm {dseparation_algo} is not supported")
backdoor_adjustment = (
backdoor_adjustment if backdoor_adjustment != BackdoorAdjustment.BACKDOOR_DEFAULT else DEFAULT_BACKDOOR_METHOD
)
# First, checking if empty set is a valid backdoor set
empty_set = set()
check = graph.check_valid_backdoor_set(
treatment_name,
outcome_name,
empty_set,
backdoor_paths=backdoor_paths,
new_graph=bdoor_graph,
dseparation_algo=dseparation_algo,
)
if check["is_dseparated"]:
backdoor_sets.append({"backdoor_set": empty_set})
# If the method is `minimal-adjustment`, return the empty set right away.
if backdoor_adjustment == BackdoorAdjustment.BACKDOOR_MIN:
return backdoor_sets
# Second, checking for all other sets of variables. If include_unobserved is false, then only observed variables are eligible.
eligible_variables = (
graph.get_all_nodes(include_unobserved=include_unobserved) - set(treatment_name) - set(outcome_name)
)
if direct_effect:
# only remove descendants of Y
# also allow any causes of Y that are not caused by T (for lower variance)
eligible_variables -= graph.get_descendants(outcome_name)
else:
# remove descendants of T (mediators) and descendants of Y
eligible_variables -= graph.get_descendants(treatment_name)
# If var is d-separated from both treatment or outcome, it cannot
# be a part of the backdoor set
filt_eligible_variables = set()
for var in eligible_variables:
dsep_treat_var = graph.check_dseparation(treatment_name, parse_state(var), set())
dsep_outcome_var = graph.check_dseparation(outcome_name, parse_state(var), set())
if not dsep_outcome_var or not dsep_treat_var:
filt_eligible_variables.add(var)
if backdoor_adjustment in METHOD_NAMES:
backdoor_sets, found_valid_adjustment_set = find_valid_adjustment_sets(
graph,
treatment_name,
outcome_name,
backdoor_paths,
bdoor_graph,
dseparation_algo,
backdoor_sets,
filt_eligible_variables,
backdoor_adjustment=backdoor_adjustment,
max_iterations=MAX_BACKDOOR_ITERATIONS,
)
if backdoor_adjustment == BackdoorAdjustment.BACKDOOR_DEFAULT and found_valid_adjustment_set:
# repeat the above search with BACKDOOR_MIN
backdoor_sets, _ = find_valid_adjustment_sets(
graph,
treatment_name,
outcome_name,
backdoor_paths,
bdoor_graph,
dseparation_algo,
backdoor_sets,
filt_eligible_variables,
backdoor_adjustment=BackdoorAdjustment.BACKDOOR_MIN,
max_iterations=MAX_BACKDOOR_ITERATIONS,
)
else:
raise ValueError(
f"Identifier method {backdoor_adjustment} not supported. Try one of the following: {METHOD_NAMES}"
)
return backdoor_sets
def identify_efficient_backdoor(
graph: CausalGraph,
backdoor_adjustment: BackdoorAdjustment,
costs: List,
conditional_node_names: List[str] = None,
):
"""Method implementing algorithms to compute efficient backdoor sets, as
described in Rotnitzky and Smucler (2020), Smucler, Sapienza and Rotnitzky (2021)
and Smucler and Rotnitzky (2022).
For backdoor_adjustment='efficient-adjustment', computes an optimal backdoor set,
that is, a backdoor set comprised of observable variables that yields non-parametric
estimators of the interventional mean with the smallest asymptotic variance
among those that are based on observable backdoor sets. This optimal backdoor
set always exists when no variables are latent, and the algorithm is guaranteed to compute
it in this case. Under a non-parametric graphical model with latent variables,
such a backdoor set can fail to exist. When certain sufficient conditions under which it is
known that such a backdoor set exists are not satisfied, an error is raised.
For backdoor_adjustment='efficient-minimal-adjustment', computes an optimal minimal backdoor set,
that is, a minimal backdoor set comprised of observable variables that yields non-parametric
estimators of the interventional mean with the smallest asymptotic variance
among those that are based on observable minimal backdoor sets.
For backdoor_adjustment='efficient-mincost-adjustment', computes an optimal minimum cost backdoor set,
that is, a minimum cost backdoor set comprised of observable variables that yields non-parametric
estimators of the interventional mean with the smallest asymptotic variance
among those that are based on observable minimum cost backdoor sets. The cost
of a backdoor set is defined as the sum of the costs of the variables that comprise it.
The various optimal backdoor sets computed by this method are not only optimal under
non-parametric graphical models and non-parametric estimators of interventional mean,
but also under linear graphical models and OLS estimators, per results in Henckel, Perkovic
and Maathuis (2020).
:param costs: a list with non-negative costs associated with variables in the graph. Only used
for estimand_type='non-parametric-ate' and backdoor_adjustment='efficient-mincost-adjustment'. If
no costs are provided by the user, and backdoor_adjustment='efficient-mincost-adjustment', costs
are assumed to be equal to one for all variables in the graph. The structure of the list should
be of the form [(node, {"cost": x}) for node in nodes].
:param conditional_node_names: variables that are used to determine treatment. If none are
provided, it is assumed that the intervention sets the treatment to a constant.
:returns: backdoor_sets, a list of dictionaries, with each dictionary
having as values a backdoor set.
"""
if costs is None and backdoor_adjustment == "efficient-mincost-adjustment":
logger.warning("No costs were passed, so they will be assumed to be constant and equal to 1.")
efficient_bd = EfficientBackdoor(
graph=graph,
conditional_node_names=conditional_node_names,
costs=costs,
)
if backdoor_adjustment == BackdoorAdjustment.BACKDOOR_EFFICIENT:
backdoor_set = efficient_bd.optimal_adj_set()
backdoor_sets = [{"backdoor_set": tuple(backdoor_set)}]
elif backdoor_adjustment == BackdoorAdjustment.BACKDOOR_MIN_EFFICIENT:
backdoor_set = efficient_bd.optimal_minimal_adj_set()
backdoor_sets = [{"backdoor_set": tuple(backdoor_set)}]
elif backdoor_adjustment == BackdoorAdjustment.BACKDOOR_MINCOST_EFFICIENT:
backdoor_set = efficient_bd.optimal_mincost_adj_set()
backdoor_sets = [{"backdoor_set": tuple(backdoor_set)}]
return backdoor_sets
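# Illustrative sketch (editor's addition, not part of the original module): calling
# identify_efficient_backdoor with per-node costs in the list format described in the
# docstring above. The `graph` argument is assumed to be an already-constructed CausalGraph.
def _example_efficient_mincost_backdoor(graph: CausalGraph):
    nodes = graph.get_all_nodes(include_unobserved=False)
    unit_costs = [(node, {"cost": 1}) for node in nodes]
    return identify_efficient_backdoor(
        graph,
        BackdoorAdjustment.BACKDOOR_MINCOST_EFFICIENT,
        unit_costs,
        conditional_node_names=None,
    )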
def find_valid_adjustment_sets(
graph: CausalGraph,
treatment_name: List,
outcome_name: List,
backdoor_paths: List,
bdoor_graph: CausalGraph,
dseparation_algo: str,
backdoor_sets: List,
filt_eligible_variables: List,
backdoor_adjustment: BackdoorAdjustment,
max_iterations: int,
):
num_iterations = 0
found_valid_adjustment_set = False
all_nodes_observed = graph.all_observed(graph.get_all_nodes())
# If `minimal-adjustment` method is specified, start the search from the set with minimum size. Otherwise, start from the largest.
set_sizes = (
range(1, len(filt_eligible_variables) + 1, 1)
if backdoor_adjustment == BackdoorAdjustment.BACKDOOR_MIN
else range(len(filt_eligible_variables), 0, -1)
)
for size_candidate_set in set_sizes:
for candidate_set in itertools.combinations(filt_eligible_variables, size_candidate_set):
check = graph.check_valid_backdoor_set(
treatment_name,
outcome_name,
candidate_set,
backdoor_paths=backdoor_paths,
new_graph=bdoor_graph,
dseparation_algo=dseparation_algo,
)
logger.debug(
"Candidate backdoor set: {0}, is_dseparated: {1}".format(candidate_set, check["is_dseparated"])
)
if check["is_dseparated"]:
backdoor_sets.append({"backdoor_set": candidate_set})
found_valid_adjustment_set = True
num_iterations += 1
if backdoor_adjustment == BackdoorAdjustment.BACKDOOR_EXHAUSTIVE and num_iterations > max_iterations:
logger.warning(f"Max number of iterations {max_iterations} reached.")
break
# If the backdoor method is `maximal-adjustment` or `minimal-adjustment`, return the first found adjustment set.
if (
backdoor_adjustment
in {
BackdoorAdjustment.BACKDOOR_DEFAULT,
BackdoorAdjustment.BACKDOOR_MAX,
BackdoorAdjustment.BACKDOOR_MIN,
}
and found_valid_adjustment_set
):
break
# If all variables are observed, and the biggest eligible set
# does not satisfy backdoor, then none of its subsets will.
if (
backdoor_adjustment in {BackdoorAdjustment.BACKDOOR_DEFAULT, BackdoorAdjustment.BACKDOOR_MAX}
and all_nodes_observed
):
break
if num_iterations > max_iterations:
logger.warning(f"Max number of iterations {max_iterations} reached. Could not find a valid backdoor set.")
break
return backdoor_sets, found_valid_adjustment_set
def get_default_backdoor_set_id(
graph: CausalGraph, treatment_name: List[str], outcome_name: List[str], backdoor_sets_dict: Dict
):
# Adding a None estimand if no backdoor set found
if len(backdoor_sets_dict) == 0:
return None
# Default set contains minimum possible number of instrumental variables, to prevent lowering variance in the treatment variable.
instrument_names = set(graph.get_instruments(treatment_name, outcome_name))
iv_count_dict = {
key: len(set(bdoor_set).intersection(instrument_names)) for key, bdoor_set in backdoor_sets_dict.items()
}
min_iv_count = min(iv_count_dict.values())
min_iv_keys = {key for key, iv_count in iv_count_dict.items() if iv_count == min_iv_count}
min_iv_backdoor_sets_dict = {key: backdoor_sets_dict[key] for key in min_iv_keys}
# Default set is the one with the least number of adjustment variables (optimizing for efficiency)
min_set_length = 1000000
default_key = None
for key, bdoor_set in min_iv_backdoor_sets_dict.items():
if len(bdoor_set) < min_set_length:
min_set_length = len(bdoor_set)
default_key = key
return default_key
def build_backdoor_estimands_dict(
graph: CausalGraph,
treatment_name: List[str],
outcome_name: List[str],
backdoor_sets: List[str],
estimands_dict: Dict,
):
"""Build the final dict for backdoor sets by filtering unobserved variables if needed."""
backdoor_variables_dict = {}
is_identified = [graph.all_observed(bset["backdoor_set"]) for bset in backdoor_sets]
if any(is_identified):
logger.info("Causal effect can be identified.")
backdoor_sets_arr = [
list(bset["backdoor_set"]) for bset in backdoor_sets if graph.all_observed(bset["backdoor_set"])
]
else: # there is unobserved confounding
logger.warning("Backdoor identification failed.")
backdoor_sets_arr = []
for i in range(len(backdoor_sets_arr)):
backdoor_estimand_expr = construct_backdoor_estimand(treatment_name, outcome_name, backdoor_sets_arr[i])
logger.debug("Identified expression = " + str(backdoor_estimand_expr))
estimands_dict["backdoor" + str(i + 1)] = backdoor_estimand_expr
backdoor_variables_dict["backdoor" + str(i + 1)] = backdoor_sets_arr[i]
return estimands_dict, backdoor_variables_dict
def identify_frontdoor(
graph: CausalGraph, treatment_name: List[str], outcome_name: List[str], dseparation_algo: str = "default"
):
"""Find a valid frontdoor variable if it exists.
Currently only supports a single variable frontdoor set.
"""
frontdoor_var = None
frontdoor_paths = None
fdoor_graph = None
if dseparation_algo == "default":
cond1_graph = graph.do_surgery(treatment_name, remove_incoming_edges=True)
bdoor_graph1 = graph.do_surgery(treatment_name, remove_outgoing_edges=True)
elif dseparation_algo == "naive":
frontdoor_paths = graph.get_all_directed_paths(treatment_name, outcome_name)
else:
raise ValueError(f"d-separation algorithm {dseparation_algo} is not supported")
eligible_variables = (
graph.get_descendants(treatment_name) - set(outcome_name) - set(graph.get_descendants(outcome_name))
)
# For simplicity, assuming a one-variable frontdoor set
for candidate_var in eligible_variables:
# Cond 1: All directed paths intercepted by candidate_var
cond1 = graph.check_valid_frontdoor_set(
treatment_name,
outcome_name,
parse_state(candidate_var),
frontdoor_paths=frontdoor_paths,
new_graph=cond1_graph,
dseparation_algo=dseparation_algo,
)
logger.debug("Candidate frontdoor set: {0}, is_dseparated: {1}".format(candidate_var, cond1))
if not cond1:
continue
# Cond 2: No confounding between treatment and candidate var
cond2 = graph.check_valid_backdoor_set(
treatment_name,
parse_state(candidate_var),
set(),
backdoor_paths=None,
new_graph=bdoor_graph1,
dseparation_algo=dseparation_algo,
)
if not cond2:
continue
# Cond 3: treatment blocks all confounding between candidate_var and outcome
bdoor_graph2 = graph.do_surgery(candidate_var, remove_outgoing_edges=True)
cond3 = graph.check_valid_backdoor_set(
parse_state(candidate_var),
outcome_name,
treatment_name,
backdoor_paths=None,
new_graph=bdoor_graph2,
dseparation_algo=dseparation_algo,
)
is_valid_frontdoor = cond1 and cond2 and cond3
if is_valid_frontdoor:
frontdoor_var = candidate_var
break
return parse_state(frontdoor_var)
def identify_mediation(graph: CausalGraph, treatment_name: List[str], outcome_name: List[str]):
"""Find a valid mediator if it exists.
Currently only supports a single variable mediator set.
"""
mediation_var = None
mediation_paths = graph.get_all_directed_paths(treatment_name, outcome_name)
eligible_variables = graph.get_descendants(treatment_name) - set(outcome_name)
# For simplicity, assuming a one-variable mediation set
for candidate_var in eligible_variables:
is_valid_mediation = graph.check_valid_mediation_set(
treatment_name,
outcome_name,
parse_state(candidate_var),
mediation_paths=mediation_paths,
)
logger.debug("Candidate mediation set: {0}, on_mediating_path: {1}".format(candidate_var, is_valid_mediation))
if is_valid_mediation:
mediation_var = candidate_var
break
return parse_state(mediation_var)
def identify_mediation_first_stage_confounders(
graph: CausalGraph,
treatment_name: List[str],
outcome_name: List[str],
mediators_names: List[str],
backdoor_adjustment: BackdoorAdjustment,
):
# Create estimands dict as per the API for backdoor, but do not return it
estimands_dict = {}
backdoor_sets = identify_backdoor(graph, treatment_name, mediators_names, backdoor_adjustment)
estimands_dict, backdoor_variables_dict = build_backdoor_estimands_dict(
graph,
treatment_name,
mediators_names,
backdoor_sets,
estimands_dict,
)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = get_default_backdoor_set_id(graph, treatment_name, outcome_name, backdoor_variables_dict)
estimands_dict["backdoor"] = estimands_dict.get(str(default_backdoor_id), None)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(str(default_backdoor_id), None)
return backdoor_variables_dict
def identify_mediation_second_stage_confounders(
graph: CausalGraph,
treatment_name: List[str],
mediators_names: List[str],
outcome_name: List[str],
backdoor_adjustment: BackdoorAdjustment,
):
# Create estimands dict as per the API for backdoor, but do not return it
estimands_dict = {}
backdoor_sets = identify_backdoor(graph, mediators_names, outcome_name, backdoor_adjustment)
estimands_dict, backdoor_variables_dict = build_backdoor_estimands_dict(
graph,
mediators_names,
outcome_name,
backdoor_sets,
estimands_dict,
)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = get_default_backdoor_set_id(graph, treatment_name, outcome_name, backdoor_variables_dict)
estimands_dict["backdoor"] = estimands_dict.get(str(default_backdoor_id), None)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(str(default_backdoor_id), None)
return backdoor_variables_dict
def construct_backdoor_estimand(treatment_name: List[str], outcome_name: List[str], common_causes: List[str]):
# TODO: outputs string for now, but ideally should do symbolic
# expressions Mon 19 Feb 2018 04:54:17 PM DST
# TODO Better support for multivariate treatments
expr = None
outcome_name = outcome_name[0]
num_expr_str = outcome_name
if len(common_causes) > 0:
num_expr_str += "|" + ",".join(common_causes)
expr = "d(" + num_expr_str + ")/d" + ",".join(treatment_name)
sym_mu = sp.Symbol("mu")
sym_sigma = sp.Symbol("sigma", positive=True)
sym_outcome = spstats.Normal(num_expr_str, sym_mu, sym_sigma)
sym_treatment_symbols = [sp.Symbol(t) for t in treatment_name]
sym_treatment = sp.Array(sym_treatment_symbols)
sym_conditional_outcome = spstats.Expectation(sym_outcome)
sym_effect = sp.Derivative(sym_conditional_outcome, sym_treatment)
sym_assumptions = {
"Unconfoundedness": (
"If U\N{RIGHTWARDS ARROW}{{{0}}} and U\N{RIGHTWARDS ARROW}{1}" " then P({1}|{0},{2},U) = P({1}|{0},{2})"
).format(",".join(treatment_name), outcome_name, ",".join(common_causes))
}
estimand = {"estimand": sym_effect, "assumptions": sym_assumptions}
return estimand
def construct_iv_estimand(treatment_name: List[str], outcome_name: List[str], instrument_names: List[str]):
# TODO: support multivariate treatments better.
expr = None
outcome_name = outcome_name[0]
sym_outcome = spstats.Normal(outcome_name, 0, 1)
sym_treatment_symbols = [spstats.Normal(t, 0, 1) for t in treatment_name]
sym_treatment = sp.Array(sym_treatment_symbols)
sym_instrument_symbols = [sp.Symbol(inst) for inst in instrument_names]
sym_instrument = sp.Array(sym_instrument_symbols) # ",".join(instrument_names))
sym_outcome_derivative = sp.Derivative(sym_outcome, sym_instrument)
sym_treatment_derivative = sp.Derivative(sym_treatment, sym_instrument)
sym_effect = spstats.Expectation(sym_outcome_derivative / sym_treatment_derivative)
sym_assumptions = {
"As-if-random": (
"If U\N{RIGHTWARDS ARROW}\N{RIGHTWARDS ARROW}{0} then "
"\N{NOT SIGN}(U \N{RIGHTWARDS ARROW}\N{RIGHTWARDS ARROW}{{{1}}})"
).format(outcome_name, ",".join(instrument_names)),
"Exclusion": (
"If we remove {{{0}}}\N{RIGHTWARDS ARROW}{{{1}}}, then " "\N{NOT SIGN}({{{0}}}\N{RIGHTWARDS ARROW}{2})"
).format(",".join(instrument_names), ",".join(treatment_name), outcome_name),
}
estimand = {"estimand": sym_effect, "assumptions": sym_assumptions}
return estimand
def construct_frontdoor_estimand(
treatment_name: List[str], outcome_name: List[str], frontdoor_variables_names: List[str]
):
# TODO: support multivariate treatments better.
expr = None
outcome_name = outcome_name[0]
sym_outcome = spstats.Normal(outcome_name, 0, 1)
sym_treatment_symbols = [spstats.Normal(t, 0, 1) for t in treatment_name]
sym_treatment = sp.Array(sym_treatment_symbols)
sym_frontdoor_symbols = [sp.Symbol(inst) for inst in frontdoor_variables_names]
sym_frontdoor = sp.Array(sym_frontdoor_symbols) # ",".join(instrument_names))
sym_outcome_derivative = sp.Derivative(sym_outcome, sym_frontdoor)
sym_treatment_derivative = sp.Derivative(sym_frontdoor, sym_treatment)
sym_effect = spstats.Expectation(sym_treatment_derivative * sym_outcome_derivative)
sym_assumptions = {
"Full-mediation": ("{2} intercepts (blocks) all directed paths from {0} to {1}.").format(
",".join(treatment_name),
",".join(outcome_name),
",".join(frontdoor_variables_names),
),
"First-stage-unconfoundedness": (
"If U\N{RIGHTWARDS ARROW}{{{0}}} and U\N{RIGHTWARDS ARROW}{{{1}}}" " then P({1}|{0},U) = P({1}|{0})"
).format(",".join(treatment_name), ",".join(frontdoor_variables_names)),
"Second-stage-unconfoundedness": (
"If U\N{RIGHTWARDS ARROW}{{{2}}} and U\N{RIGHTWARDS ARROW}{1}" " then P({1}|{2}, {0}, U) = P({1}|{2}, {0})"
).format(
",".join(treatment_name),
outcome_name,
",".join(frontdoor_variables_names),
),
}
estimand = {"estimand": sym_effect, "assumptions": sym_assumptions}
return estimand
def construct_mediation_estimand(
estimand_type: EstimandType, treatment_name: List[str], outcome_name: List[str], mediators_names: List[str]
):
# TODO: support multivariate treatments better.
expr = None
if estimand_type in (
EstimandType.NONPARAMETRIC_NDE,
EstimandType.NONPARAMETRIC_NIE,
):
outcome_name = outcome_name[0]
sym_outcome = spstats.Normal(outcome_name, 0, 1)
sym_treatment_symbols = [spstats.Normal(t, 0, 1) for t in treatment_name]
sym_treatment = sp.Array(sym_treatment_symbols)
sym_mediators_symbols = [sp.Symbol(inst) for inst in mediators_names]
sym_mediators = sp.Array(sym_mediators_symbols)
sym_outcome_derivative = sp.Derivative(sym_outcome, sym_mediators)
sym_treatment_derivative = sp.Derivative(sym_mediators, sym_treatment)
# For direct effect
num_expr_str = outcome_name
if len(mediators_names) > 0:
num_expr_str += "|" + ",".join(mediators_names)
sym_mu = sp.Symbol("mu")
sym_sigma = sp.Symbol("sigma", positive=True)
sym_conditional_outcome = spstats.Normal(num_expr_str, sym_mu, sym_sigma)
sym_directeffect_derivative = sp.Derivative(sym_conditional_outcome, sym_treatment)
if estimand_type == EstimandType.NONPARAMETRIC_NIE:
sym_effect = spstats.Expectation(sym_treatment_derivative * sym_outcome_derivative)
elif estimand_type == EstimandType.NONPARAMETRIC_NDE:
sym_effect = spstats.Expectation(sym_directeffect_derivative)
sym_assumptions = {
"Mediation": (
"{2} intercepts (blocks) all directed paths from {0} to {1} except the path {{{0}}}\N{RIGHTWARDS ARROW}{{{1}}}."
).format(
",".join(treatment_name),
",".join(outcome_name),
",".join(mediators_names),
),
"First-stage-unconfoundedness": (
"If U\N{RIGHTWARDS ARROW}{{{0}}} and U\N{RIGHTWARDS ARROW}{{{1}}}" " then P({1}|{0},U) = P({1}|{0})"
).format(",".join(treatment_name), ",".join(mediators_names)),
"Second-stage-unconfoundedness": (
"If U\N{RIGHTWARDS ARROW}{{{2}}} and U\N{RIGHTWARDS ARROW}{1}"
" then P({1}|{2}, {0}, U) = P({1}|{2}, {0})"
).format(",".join(treatment_name), outcome_name, ",".join(mediators_names)),
}
else:
raise ValueError(
"Estimand type not supported. Supported estimand types are {0} or {1}'.".format(
EstimandType.NONPARAMETRIC_NDE,
EstimandType.NONPARAMETRIC_NIE,
)
)
estimand = {"estimand": sym_effect, "assumptions": sym_assumptions}
return estimand
| import itertools
import logging
from enum import Enum
from typing import Dict, List, Optional, Union
import sympy as sp
import sympy.stats as spstats
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.efficient_backdoor import EfficientBackdoor
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class EstimandType(Enum):
# Average total effect
NONPARAMETRIC_ATE = "nonparametric-ate"
# Natural direct effect
NONPARAMETRIC_NDE = "nonparametric-nde"
# Natural indirect effect
NONPARAMETRIC_NIE = "nonparametric-nie"
# Controlled direct effect
NONPARAMETRIC_CDE = "nonparametric-cde"
class BackdoorAdjustment(Enum):
# Backdoor method names
BACKDOOR_DEFAULT = "default"
BACKDOOR_EXHAUSTIVE = "exhaustive-search"
BACKDOOR_MIN = "minimal-adjustment"
BACKDOOR_MAX = "maximal-adjustment"
BACKDOOR_EFFICIENT = "efficient-adjustment"
BACKDOOR_MIN_EFFICIENT = "efficient-minimal-adjustment"
BACKDOOR_MINCOST_EFFICIENT = "efficient-mincost-adjustment"
MAX_BACKDOOR_ITERATIONS = 100000
METHOD_NAMES = {
BackdoorAdjustment.BACKDOOR_DEFAULT,
BackdoorAdjustment.BACKDOOR_EXHAUSTIVE,
BackdoorAdjustment.BACKDOOR_MIN,
BackdoorAdjustment.BACKDOOR_MAX,
BackdoorAdjustment.BACKDOOR_EFFICIENT,
BackdoorAdjustment.BACKDOOR_MIN_EFFICIENT,
BackdoorAdjustment.BACKDOOR_MINCOST_EFFICIENT,
}
EFFICIENT_METHODS = {
BackdoorAdjustment.BACKDOOR_EFFICIENT,
BackdoorAdjustment.BACKDOOR_MIN_EFFICIENT,
BackdoorAdjustment.BACKDOOR_MINCOST_EFFICIENT,
}
DEFAULT_BACKDOOR_METHOD = BackdoorAdjustment.BACKDOOR_DEFAULT
class AutoIdentifier:
"""Class that implements different identification methods.
Currently supports backdoor and instrumental variable identification methods. The identification is based on the causal graph provided.
This class is for backwards compatibility with CausalModel
Will be deprecated in the future in favor of function call auto_identify_effect()
"""
def __init__(
self,
estimand_type: EstimandType,
backdoor_adjustment: BackdoorAdjustment = BackdoorAdjustment.BACKDOOR_DEFAULT,
proceed_when_unidentifiable: bool = False,
optimize_backdoor: bool = False,
costs: Optional[List] = None,
):
self.estimand_type = estimand_type
self.backdoor_adjustment = backdoor_adjustment
self._proceed_when_unidentifiable = proceed_when_unidentifiable
self.optimize_backdoor = optimize_backdoor
self.costs = costs
self.logger = logging.getLogger(__name__)
def identify_effect(
self,
graph: CausalGraph,
treatment_name: Union[str, List[str]],
outcome_name: Union[str, List[str]],
conditional_node_names: List[str] = None,
**kwargs,
):
estimand = identify_effect_auto(
graph,
treatment_name,
outcome_name,
self.estimand_type,
conditional_node_names,
self.backdoor_adjustment,
self._proceed_when_unidentifiable,
self.optimize_backdoor,
self.costs,
**kwargs,
)
estimand.identifier = self
return estimand
def identify_backdoor(
self,
graph: CausalGraph,
treatment_name: List[str],
outcome_name: str,
include_unobserved: bool = False,
dseparation_algo: str = "default",
direct_effect: bool = False,
):
return identify_backdoor(
graph,
treatment_name,
outcome_name,
self.backdoor_adjustment,
include_unobserved,
dseparation_algo,
direct_effect,
)
def identify_effect_auto(
graph: CausalGraph,
treatment_name: Union[str, List[str]],
outcome_name: Union[str, List[str]],
estimand_type: EstimandType,
conditional_node_names: List[str] = None,
backdoor_adjustment: BackdoorAdjustment = BackdoorAdjustment.BACKDOOR_DEFAULT,
proceed_when_unidentifiable: bool = False,
optimize_backdoor: bool = False,
costs: Optional[List] = None,
**kwargs,
) -> IdentifiedEstimand:
"""Main method that returns an identified estimand (if one exists).
If estimand_type is non-parametric ATE, then uses backdoor, instrumental variable and frontdoor identification methods, to check if an identified estimand exists, based on the causal graph.
:param optimize_backdoor: if True, uses an optimised algorithm to compute the backdoor sets
:param costs: non-negative costs associated with variables in the graph. Only used
for estimand_type='non-parametric-ate' and backdoor_adjustment='efficient-mincost-adjustment'. If
no costs are provided by the user, and backdoor_adjustment='efficient-mincost-adjustment', costs
are assumed to be equal to one for all variables in the graph.
:param conditional_node_names: variables that are used to determine treatment. If none are
provided, it is assumed that the intervention is static.
:returns: target estimand, an instance of the IdentifiedEstimand class
"""
treatment_name = parse_state(treatment_name)
outcome_name = parse_state(outcome_name)
# First, check if there is a directed path from action to outcome
if not graph.has_directed_path(treatment_name, outcome_name):
logger.warn("No directed path from treatment to outcome. Causal Effect is zero.")
return IdentifiedEstimand(
None,
treatment_variable=treatment_name,
outcome_variable=outcome_name,
no_directed_path=True,
)
if estimand_type == EstimandType.NONPARAMETRIC_ATE:
return identify_ate_effect(
graph,
treatment_name,
outcome_name,
backdoor_adjustment,
optimize_backdoor,
estimand_type,
costs,
conditional_node_names,
proceed_when_unidentifiable,
)
elif estimand_type == EstimandType.NONPARAMETRIC_NDE:
return identify_nde_effect(
graph, treatment_name, outcome_name, backdoor_adjustment, estimand_type, proceed_when_unidentifiable
)
elif estimand_type == EstimandType.NONPARAMETRIC_NIE:
return identify_nie_effect(
graph, treatment_name, outcome_name, backdoor_adjustment, estimand_type, proceed_when_unidentifiable
)
elif estimand_type == EstimandType.NONPARAMETRIC_CDE:
return identify_cde_effect(
graph, treatment_name, outcome_name, backdoor_adjustment, estimand_type, proceed_when_unidentifiable
)
else:
raise ValueError(
"Estimand type is not supported. Use either {0}, {1}, or {2}.".format(
EstimandType.NONPARAMETRIC_ATE,
EstimandType.NONPARAMETRIC_CDE,
EstimandType.NONPARAMETRIC_NDE,
EstimandType.NONPARAMETRIC_NIE,
)
)
def identify_ate_effect(
graph: CausalGraph,
treatment_name: List[str],
outcome_name: str,
backdoor_adjustment: BackdoorAdjustment,
optimize_backdoor: bool,
estimand_type: EstimandType,
costs: List,
conditional_node_names: List[str] = None,
proceed_when_unidentifiable: bool = False,
):
estimands_dict = {}
mediation_first_stage_confounders = None
mediation_second_stage_confounders = None
### 1. BACKDOOR IDENTIFICATION
# Pick algorithm to compute backdoor sets according to method chosen
if backdoor_adjustment not in EFFICIENT_METHODS:
# First, checking if there are any valid backdoor adjustment sets
if optimize_backdoor == False:
backdoor_sets = identify_backdoor(graph, treatment_name, outcome_name, backdoor_adjustment)
else:
from dowhy.causal_identifier.backdoor import Backdoor
path = Backdoor(graph._graph, treatment_name, outcome_name)
backdoor_sets = path.get_backdoor_vars()
elif backdoor_adjustment in EFFICIENT_METHODS:
backdoor_sets = identify_efficient_backdoor(
graph, backdoor_adjustment, costs, conditional_node_names=conditional_node_names
)
estimands_dict, backdoor_variables_dict = build_backdoor_estimands_dict(
graph, treatment_name, outcome_name, backdoor_sets, estimands_dict
)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = get_default_backdoor_set_id(graph, treatment_name, outcome_name, backdoor_variables_dict)
if len(backdoor_variables_dict) > 0:
estimands_dict["backdoor"] = estimands_dict.get(str(default_backdoor_id), None)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(str(default_backdoor_id), None)
else:
estimands_dict["backdoor"] = None
### 2. INSTRUMENTAL VARIABLE IDENTIFICATION
# Now checking if there is also a valid iv estimand
instrument_names = graph.get_instruments(treatment_name, outcome_name)
logger.info("Instrumental variables for treatment and outcome:" + str(instrument_names))
if len(instrument_names) > 0:
iv_estimand_expr = construct_iv_estimand(
treatment_name,
outcome_name,
instrument_names,
)
logger.debug("Identified expression = " + str(iv_estimand_expr))
estimands_dict["iv"] = iv_estimand_expr
else:
estimands_dict["iv"] = None
### 3. FRONTDOOR IDENTIFICATION
# Now checking if there is a valid frontdoor variable
frontdoor_variables_names = identify_frontdoor(graph, treatment_name, outcome_name)
logger.info("Frontdoor variables for treatment and outcome:" + str(frontdoor_variables_names))
if len(frontdoor_variables_names) > 0:
frontdoor_estimand_expr = construct_frontdoor_estimand(
treatment_name,
outcome_name,
frontdoor_variables_names,
)
logger.debug("Identified expression = " + str(frontdoor_estimand_expr))
estimands_dict["frontdoor"] = frontdoor_estimand_expr
mediation_first_stage_confounders = identify_mediation_first_stage_confounders(
graph, treatment_name, outcome_name, frontdoor_variables_names, backdoor_adjustment
)
mediation_second_stage_confounders = identify_mediation_second_stage_confounders(
graph, treatment_name, frontdoor_variables_names, outcome_name, backdoor_adjustment
)
else:
estimands_dict["frontdoor"] = None
# Finally returning the estimand object
estimand = IdentifiedEstimand(
None,
treatment_variable=treatment_name,
outcome_variable=outcome_name,
estimand_type=estimand_type,
estimands=estimands_dict,
backdoor_variables=backdoor_variables_dict,
instrumental_variables=instrument_names,
frontdoor_variables=frontdoor_variables_names,
mediation_first_stage_confounders=mediation_first_stage_confounders,
mediation_second_stage_confounders=mediation_second_stage_confounders,
default_backdoor_id=default_backdoor_id,
)
return estimand
def identify_cde_effect(
graph: CausalGraph,
treatment_name: List[str],
outcome_name: str,
backdoor_adjustment: BackdoorAdjustment,
estimand_type: EstimandType,
proceed_when_unidentifiable: bool = False,
):
"""Identify controlled direct effect. For a definition, see VanderWeele (2011).
Controlled direct and mediated effects: definition, identification and bounds.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4193506/
Using do-calculus rules, identification yields an adjustment set.
It is based on the principle that under a graph where the direct edge from treatment
to outcome is removed, conditioning on the adjustment set should d-separate
treatment and outcome.
"""
estimands_dict = {}
# Pick algorithm to compute backdoor sets according to method chosen
backdoor_sets = identify_backdoor(graph, treatment_name, outcome_name, backdoor_adjustment, direct_effect=True)
estimands_dict, backdoor_variables_dict = build_backdoor_estimands_dict(
graph, treatment_name, outcome_name, backdoor_sets, estimands_dict
)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = get_default_backdoor_set_id(graph, treatment_name, outcome_name, backdoor_variables_dict)
if len(backdoor_variables_dict) > 0:
estimands_dict["backdoor"] = estimands_dict.get(str(default_backdoor_id), None)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(str(default_backdoor_id), None)
else:
estimands_dict["backdoor"] = None
# Finally returning the estimand object
estimand = IdentifiedEstimand(
None,
treatment_variable=treatment_name,
outcome_variable=outcome_name,
estimand_type=estimand_type,
estimands=estimands_dict,
backdoor_variables=backdoor_variables_dict,
instrumental_variables=None,
frontdoor_variables=None,
mediation_first_stage_confounders=None,
mediation_second_stage_confounders=None,
default_backdoor_id=default_backdoor_id,
)
return estimand
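# --- Illustrative usage sketch (added for exposition; not part of the original module) ---
# A hedged example of identifying the controlled direct effect on a toy graph with one
# mediator M and one common cause W. The node names, DOT string, and CausalGraph
# constructor arguments are assumptions made for illustration.
def _example_identify_cde_effect():
    from dowhy.causal_graph import CausalGraph

    toy_graph = CausalGraph(
        treatment_name=["X"],
        outcome_name=["Y"],
        graph="digraph {W -> X; W -> Y; X -> M; M -> Y; X -> Y;}",
        observed_node_names=["W", "X", "M", "Y"],
    )
    return identify_cde_effect(
        toy_graph,
        ["X"],
        ["Y"],
        backdoor_adjustment=BackdoorAdjustment.BACKDOOR_DEFAULT,
        estimand_type=EstimandType.NONPARAMETRIC_CDE,
    )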
def identify_nie_effect(
graph: CausalGraph,
treatment_name: List[str],
outcome_name: str,
backdoor_adjustment: BackdoorAdjustment,
estimand_type: EstimandType,
proceed_when_unidentifiable: bool = False,
):
estimands_dict = {}
### 1. FIRST DOING BACKDOOR IDENTIFICATION
# First, checking if there are any valid backdoor adjustment sets
backdoor_sets = identify_backdoor(graph, treatment_name, outcome_name, backdoor_adjustment)
estimands_dict, backdoor_variables_dict = build_backdoor_estimands_dict(
graph, treatment_name, outcome_name, backdoor_sets, estimands_dict
)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = get_default_backdoor_set_id(graph, treatment_name, outcome_name, backdoor_variables_dict)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(str(default_backdoor_id), None)
### 2. SECOND, CHECKING FOR MEDIATORS
# Now checking if there are valid mediator variables
estimands_dict = {} # Need to reinitialize this dictionary to avoid including the backdoor sets
mediation_first_stage_confounders = None
mediation_second_stage_confounders = None
mediators_names = identify_mediation(graph, treatment_name, outcome_name)
logger.info("Mediators for treatment and outcome:" + str(mediators_names))
if len(mediators_names) > 0:
mediation_estimand_expr = construct_mediation_estimand(
estimand_type,
treatment_name,
outcome_name,
mediators_names,
)
logger.debug("Identified expression = " + str(mediation_estimand_expr))
estimands_dict["mediation"] = mediation_estimand_expr
mediation_first_stage_confounders = identify_mediation_first_stage_confounders(
graph, treatment_name, outcome_name, mediators_names, backdoor_adjustment
)
mediation_second_stage_confounders = identify_mediation_second_stage_confounders(
graph, treatment_name, mediators_names, outcome_name, backdoor_adjustment
)
else:
estimands_dict["mediation"] = None
# Finally returning the estimand object
estimand = IdentifiedEstimand(
None,
treatment_variable=treatment_name,
outcome_variable=outcome_name,
estimand_type=estimand_type,
estimands=estimands_dict,
backdoor_variables=backdoor_variables_dict,
instrumental_variables=None,
frontdoor_variables=None,
mediator_variables=mediators_names,
mediation_first_stage_confounders=mediation_first_stage_confounders,
mediation_second_stage_confounders=mediation_second_stage_confounders,
default_backdoor_id=None,
)
return estimand
def identify_nde_effect(
graph: CausalGraph,
treatment_name: List[str],
outcome_name: str,
backdoor_adjustment: BackdoorAdjustment,
estimand_type: EstimandType,
proceed_when_unidentifiable: bool = False,
):
estimands_dict = {}
### 1. FIRST DOING BACKDOOR IDENTIFICATION
# First, checking if there are any valid backdoor adjustment sets
backdoor_sets = identify_backdoor(graph, treatment_name, outcome_name, backdoor_adjustment)
estimands_dict, backdoor_variables_dict = build_backdoor_estimands_dict(
graph, treatment_name, outcome_name, backdoor_sets, estimands_dict
)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = get_default_backdoor_set_id(graph, treatment_name, outcome_name, backdoor_variables_dict)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(str(default_backdoor_id), None)
### 2. SECOND, CHECKING FOR MEDIATORS
# Now checking if there are valid mediator variables
estimands_dict = {}
mediation_first_stage_confounders = None
mediation_second_stage_confounders = None
mediators_names = identify_mediation(graph, treatment_name, outcome_name)
logger.info("Mediators for treatment and outcome:" + str(mediators_names))
if len(mediators_names) > 0:
mediation_estimand_expr = construct_mediation_estimand(
estimand_type,
treatment_name,
outcome_name,
mediators_names,
)
logger.debug("Identified expression = " + str(mediation_estimand_expr))
estimands_dict["mediation"] = mediation_estimand_expr
mediation_first_stage_confounders = identify_mediation_first_stage_confounders(
graph, treatment_name, outcome_name, mediators_names, backdoor_adjustment
)
mediation_second_stage_confounders = identify_mediation_second_stage_confounders(
graph, treatment_name, mediators_names, outcome_name, backdoor_adjustment
)
else:
estimands_dict["mediation"] = None
# Finally returning the estimand object
estimand = IdentifiedEstimand(
None,
treatment_variable=treatment_name,
outcome_variable=outcome_name,
estimand_type=estimand_type,
estimands=estimands_dict,
backdoor_variables=backdoor_variables_dict,
instrumental_variables=None,
frontdoor_variables=None,
mediator_variables=mediators_names,
mediation_first_stage_confounders=mediation_first_stage_confounders,
mediation_second_stage_confounders=mediation_second_stage_confounders,
default_backdoor_id=None,
)
return estimand
def identify_backdoor(
graph: CausalGraph,
treatment_name: List[str],
outcome_name: str,
backdoor_adjustment: BackdoorAdjustment,
include_unobserved: bool = False,
dseparation_algo: str = "default",
direct_effect: bool = False,
):
backdoor_sets = []
backdoor_paths = None
bdoor_graph = None
if dseparation_algo == "naive":
backdoor_paths = graph.get_backdoor_paths(treatment_name, outcome_name)
elif dseparation_algo == "default":
bdoor_graph = graph.do_surgery(
treatment_name,
target_node_names=outcome_name,
remove_outgoing_edges=True,
remove_only_direct_edges_to_target=direct_effect,
)
else:
raise ValueError(f"d-separation algorithm {dseparation_algo} is not supported")
backdoor_adjustment = (
backdoor_adjustment if backdoor_adjustment != BackdoorAdjustment.BACKDOOR_DEFAULT else DEFAULT_BACKDOOR_METHOD
)
# First, checking if empty set is a valid backdoor set
empty_set = set()
check = graph.check_valid_backdoor_set(
treatment_name,
outcome_name,
empty_set,
backdoor_paths=backdoor_paths,
new_graph=bdoor_graph,
dseparation_algo=dseparation_algo,
)
if check["is_dseparated"]:
backdoor_sets.append({"backdoor_set": empty_set})
# If the method is `minimal-adjustment`, return the empty set right away.
if backdoor_adjustment == BackdoorAdjustment.BACKDOOR_MIN:
return backdoor_sets
# Second, checking for all other sets of variables. If include_unobserved is false, then only observed variables are eligible.
eligible_variables = (
graph.get_all_nodes(include_unobserved=include_unobserved) - set(treatment_name) - set(outcome_name)
)
if direct_effect:
# only remove descendants of Y
# also allow any causes of Y that are not caused by T (for lower variance)
eligible_variables -= graph.get_descendants(outcome_name)
else:
# remove descendants of T (mediators) and descendants of Y
eligible_variables -= graph.get_descendants(treatment_name)
# If var is d-separated from both the treatment and the outcome, it cannot
# be a part of the backdoor set
filt_eligible_variables = set()
for var in eligible_variables:
dsep_treat_var = graph.check_dseparation(treatment_name, parse_state(var), set())
dsep_outcome_var = graph.check_dseparation(outcome_name, parse_state(var), set())
if not dsep_outcome_var or not dsep_treat_var:
filt_eligible_variables.add(var)
if backdoor_adjustment in METHOD_NAMES:
backdoor_sets, found_valid_adjustment_set = find_valid_adjustment_sets(
graph,
treatment_name,
outcome_name,
backdoor_paths,
bdoor_graph,
dseparation_algo,
backdoor_sets,
filt_eligible_variables,
backdoor_adjustment=backdoor_adjustment,
max_iterations=MAX_BACKDOOR_ITERATIONS,
)
if backdoor_adjustment == BackdoorAdjustment.BACKDOOR_DEFAULT and found_valid_adjustment_set:
# repeat the above search with BACKDOOR_MIN
backdoor_sets, _ = find_valid_adjustment_sets(
graph,
treatment_name,
outcome_name,
backdoor_paths,
bdoor_graph,
dseparation_algo,
backdoor_sets,
filt_eligible_variables,
backdoor_adjustment=BackdoorAdjustment.BACKDOOR_MIN,
max_iterations=MAX_BACKDOOR_ITERATIONS,
)
else:
raise ValueError(
f"Identifier method {backdoor_adjustment} not supported. Try one of the following: {METHOD_NAMES}"
)
return backdoor_sets
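# --- Illustrative usage sketch (added for exposition; not part of the original module) ---
# identify_backdoor returns a list of dicts, each holding a candidate "backdoor_set".
# The graph construction mirrors the earlier sketches; node names and constructor
# arguments are assumptions made for illustration.
def _example_identify_backdoor():
    from dowhy.causal_graph import CausalGraph

    toy_graph = CausalGraph(
        treatment_name=["X"],
        outcome_name=["Y"],
        graph="digraph {W -> X; W -> Y; X -> Y;}",
        observed_node_names=["W", "X", "Y"],
    )
    return identify_backdoor(toy_graph, ["X"], ["Y"], BackdoorAdjustment.BACKDOOR_DEFAULT)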
def identify_efficient_backdoor(
graph: CausalGraph,
backdoor_adjustment: BackdoorAdjustment,
costs: List,
conditional_node_names: List[str] = None,
):
"""Method implementing algorithms to compute efficient backdoor sets, as
described in Rotnitzky and Smucler (2020), Smucler, Sapienza and Rotnitzky (2021)
and Smucler and Rotnitzky (2022).
For backdoor_adjustment='efficient-adjustment', computes an optimal backdoor set,
that is, a backdoor set comprised of observable variables that yields non-parametric
estimators of the interventional mean with the smallest asymptotic variance
among those that are based on observable backdoor sets. This optimal backdoor
set always exists when no variables are latent, and the algorithm is guaranteed to compute
it in this case. Under a non-parametric graphical model with latent variables,
such a backdoor set can fail to exist. When certain sufficient conditions under which it is
known that such a backdoor set exists are not satisfied, an error is raised.
For backdoor_adjustment='efficient-minimal-adjustment', computes an optimal minimal backdoor set,
that is, a minimal backdoor set comprised of observable variables that yields non-parametric
estimators of the interventional mean with the smallest asymptotic variance
among those that are based on observable minimal backdoor sets.
For backdoor_adjustment='efficient-mincost-adjustment', computes an optimal minimum cost backdoor set,
that is, a minimum cost backdoor set comprised of observable variables that yields non-parametric
estimators of the interventional mean with the smallest asymptotic variance
among those that are based on observable minimum cost backdoor sets. The cost
of a backdoor set is defined as the sum of the costs of the variables that comprise it.
The various optimal backdoor sets computed by this method are not only optimal under
non-parametric graphical models and non-parametric estimators of interventional mean,
but also under linear graphical models and OLS estimators, per results in Henckel, Perkovic
and Maathuis (2020).
:param costs: a list with non-negative costs associated with variables in the graph. Only used
for estimand_type='non-parametric-ate' and backdoor_adjustment='efficient-mincost-adjustment'. If
no costs are provided by the user, and backdoor_adjustment='efficient-mincost-adjustment', costs
are assumed to be equal to one for all variables in the graph. The structure of the list should
be of the form [(node, {"cost": x}) for node in nodes].
:param conditional_node_names: variables that are used to determine treatment. If none are
provided, it is assumed that the intervention sets the treatment to a constant.
:returns: backdoor_sets, a list of dictionaries, with each dictionary
having as values a backdoor set.
"""
if costs is None and backdoor_adjustment == "efficient-mincost-adjustment":
logger.warning("No costs were passed, so they will be assumed to be constant and equal to 1.")
efficient_bd = EfficientBackdoor(
graph=graph,
conditional_node_names=conditional_node_names,
costs=costs,
)
if backdoor_adjustment == BackdoorAdjustment.BACKDOOR_EFFICIENT:
backdoor_set = efficient_bd.optimal_adj_set()
backdoor_sets = [{"backdoor_set": tuple(backdoor_set)}]
elif backdoor_adjustment == BackdoorAdjustment.BACKDOOR_MIN_EFFICIENT:
backdoor_set = efficient_bd.optimal_minimal_adj_set()
backdoor_sets = [{"backdoor_set": tuple(backdoor_set)}]
elif backdoor_adjustment == BackdoorAdjustment.BACKDOOR_MINCOST_EFFICIENT:
backdoor_set = efficient_bd.optimal_mincost_adj_set()
backdoor_sets = [{"backdoor_set": tuple(backdoor_set)}]
return backdoor_sets
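# --- Illustrative sketch (added for exposition; not part of the original module) ---
# How a costs argument for backdoor_adjustment='efficient-mincost-adjustment' might be
# assembled, following the [(node, {"cost": x}) for node in nodes] structure described
# in the docstring above. The node names and cost values are made-up assumptions.
def _example_mincost_costs():
    assumed_costs = {"W1": 1, "W2": 5, "Z0": 2}
    return [(node, {"cost": cost}) for node, cost in assumed_costs.items()]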
def find_valid_adjustment_sets(
graph: CausalGraph,
treatment_name: List,
outcome_name: List,
backdoor_paths: List,
bdoor_graph: CausalGraph,
dseparation_algo: str,
backdoor_sets: List,
filt_eligible_variables: List,
backdoor_adjustment: BackdoorAdjustment,
max_iterations: int,
):
num_iterations = 0
found_valid_adjustment_set = False
all_nodes_observed = graph.all_observed(graph.get_all_nodes())
# If `minimal-adjustment` method is specified, start the search from the set with minimum size. Otherwise, start from the largest.
set_sizes = (
range(1, len(filt_eligible_variables) + 1, 1)
if backdoor_adjustment == BackdoorAdjustment.BACKDOOR_MIN
else range(len(filt_eligible_variables), 0, -1)
)
for size_candidate_set in set_sizes:
for candidate_set in itertools.combinations(filt_eligible_variables, size_candidate_set):
check = graph.check_valid_backdoor_set(
treatment_name,
outcome_name,
candidate_set,
backdoor_paths=backdoor_paths,
new_graph=bdoor_graph,
dseparation_algo=dseparation_algo,
)
logger.debug(
"Candidate backdoor set: {0}, is_dseparated: {1}".format(candidate_set, check["is_dseparated"])
)
if check["is_dseparated"]:
backdoor_sets.append({"backdoor_set": candidate_set})
found_valid_adjustment_set = True
num_iterations += 1
if backdoor_adjustment == BackdoorAdjustment.BACKDOOR_EXHAUSTIVE and num_iterations > max_iterations:
logger.warning(f"Max number of iterations {max_iterations} reached.")
break
# If the backdoor method is `default`, `maximal-adjustment`, or `minimal-adjustment`, return the first found adjustment set.
if (
backdoor_adjustment
in {
BackdoorAdjustment.BACKDOOR_DEFAULT,
BackdoorAdjustment.BACKDOOR_MAX,
BackdoorAdjustment.BACKDOOR_MIN,
}
and found_valid_adjustment_set
):
break
# If all variables are observed, and the biggest eligible set
# does not satisfy backdoor, then none of its subsets will.
if (
backdoor_adjustment in {BackdoorAdjustment.BACKDOOR_DEFAULT, BackdoorAdjustment.BACKDOOR_MAX}
and all_nodes_observed
):
break
if num_iterations > max_iterations:
logger.warning(f"Max number of iterations {max_iterations} reached. Could not find a valid backdoor set.")
break
return backdoor_sets, found_valid_adjustment_set
def get_default_backdoor_set_id(
graph: CausalGraph, treatment_name: List[str], outcome_name: List[str], backdoor_sets_dict: Dict
):
# Adding a None estimand if no backdoor set found
if len(backdoor_sets_dict) == 0:
return None
# Default set contains minimum possible number of instrumental variables, to prevent lowering variance in the treatment variable.
instrument_names = set(graph.get_instruments(treatment_name, outcome_name))
iv_count_dict = {
key: len(set(bdoor_set).intersection(instrument_names)) for key, bdoor_set in backdoor_sets_dict.items()
}
min_iv_count = min(iv_count_dict.values())
min_iv_keys = {key for key, iv_count in iv_count_dict.items() if iv_count == min_iv_count}
min_iv_backdoor_sets_dict = {key: backdoor_sets_dict[key] for key in min_iv_keys}
# Default set is the one with the least number of adjustment variables (optimizing for efficiency)
min_set_length = 1000000
default_key = None
for key, bdoor_set in min_iv_backdoor_sets_dict.items():
if len(bdoor_set) < min_set_length:
min_set_length = len(bdoor_set)
default_key = key
return default_key
def build_backdoor_estimands_dict(
graph: CausalGraph,
treatment_name: List[str],
outcome_name: List[str],
backdoor_sets: List[str],
estimands_dict: Dict,
):
"""Build the final dict for backdoor sets by filtering unobserved variables if needed."""
backdoor_variables_dict = {}
is_identified = [graph.all_observed(bset["backdoor_set"]) for bset in backdoor_sets]
if any(is_identified):
logger.info("Causal effect can be identified.")
backdoor_sets_arr = [
list(bset["backdoor_set"]) for bset in backdoor_sets if graph.all_observed(bset["backdoor_set"])
]
else: # there is unobserved confounding
logger.warning("Backdoor identification failed.")
backdoor_sets_arr = []
for i in range(len(backdoor_sets_arr)):
backdoor_estimand_expr = construct_backdoor_estimand(treatment_name, outcome_name, backdoor_sets_arr[i])
logger.debug("Identified expression = " + str(backdoor_estimand_expr))
estimands_dict["backdoor" + str(i + 1)] = backdoor_estimand_expr
backdoor_variables_dict["backdoor" + str(i + 1)] = backdoor_sets_arr[i]
return estimands_dict, backdoor_variables_dict
def identify_frontdoor(
graph: CausalGraph, treatment_name: List[str], outcome_name: List[str], dseparation_algo: str = "default"
):
"""Find a valid frontdoor variable if it exists.
Currently only supports a single variable frontdoor set.
"""
frontdoor_var = None
frontdoor_paths = None
fdoor_graph = None
if dseparation_algo == "default":
cond1_graph = graph.do_surgery(treatment_name, remove_incoming_edges=True)
bdoor_graph1 = graph.do_surgery(treatment_name, remove_outgoing_edges=True)
elif dseparation_algo == "naive":
frontdoor_paths = graph.get_all_directed_paths(treatment_name, outcome_name)
else:
raise ValueError(f"d-separation algorithm {dseparation_algo} is not supported")
eligible_variables = (
graph.get_descendants(treatment_name) - set(outcome_name) - set(graph.get_descendants(outcome_name))
)
# For simplicity, assuming a one-variable frontdoor set
for candidate_var in eligible_variables:
# Cond 1: All directed paths intercepted by candidate_var
cond1 = graph.check_valid_frontdoor_set(
treatment_name,
outcome_name,
parse_state(candidate_var),
frontdoor_paths=frontdoor_paths,
new_graph=cond1_graph,
dseparation_algo=dseparation_algo,
)
logger.debug("Candidate frontdoor set: {0}, is_dseparated: {1}".format(candidate_var, cond1))
if not cond1:
continue
# Cond 2: No confounding between treatment and candidate var
cond2 = graph.check_valid_backdoor_set(
treatment_name,
parse_state(candidate_var),
set(),
backdoor_paths=None,
new_graph=bdoor_graph1,
dseparation_algo=dseparation_algo,
)
if not cond2:
continue
# Cond 3: treatment blocks all confounding between candidate_var and outcome
bdoor_graph2 = graph.do_surgery(candidate_var, remove_outgoing_edges=True)
cond3 = graph.check_valid_backdoor_set(
parse_state(candidate_var),
outcome_name,
treatment_name,
backdoor_paths=None,
new_graph=bdoor_graph2,
dseparation_algo=dseparation_algo,
)
is_valid_frontdoor = cond1 and cond2 and cond3
if is_valid_frontdoor:
frontdoor_var = candidate_var
break
return parse_state(frontdoor_var)
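# --- Illustrative usage sketch (added for exposition; not part of the original module) ---
# A canonical front-door structure on which identify_frontdoor would be expected to
# return ["M"]: X affects Y only through M, while an unobserved U confounds X and Y.
# The DOT string, node names, and the use of observed_node_names to mark U as
# unobserved are assumptions made for illustration.
def _example_identify_frontdoor():
    from dowhy.causal_graph import CausalGraph

    toy_graph = CausalGraph(
        treatment_name=["X"],
        outcome_name=["Y"],
        graph="digraph {U -> X; U -> Y; X -> M; M -> Y;}",
        observed_node_names=["X", "M", "Y"],
    )
    return identify_frontdoor(toy_graph, ["X"], ["Y"])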
def identify_mediation(graph: CausalGraph, treatment_name: List[str], outcome_name: List[str]):
"""Find a valid mediator if it exists.
Currently only supports a single variable mediator set.
"""
mediation_var = None
mediation_paths = graph.get_all_directed_paths(treatment_name, outcome_name)
eligible_variables = graph.get_descendants(treatment_name) - set(outcome_name)
# For simplicity, assuming a one-variable mediation set
for candidate_var in eligible_variables:
is_valid_mediation = graph.check_valid_mediation_set(
treatment_name,
outcome_name,
parse_state(candidate_var),
mediation_paths=mediation_paths,
)
logger.debug("Candidate mediation set: {0}, on_mediating_path: {1}".format(candidate_var, is_valid_mediation))
if is_valid_mediation:
mediation_var = candidate_var
break
return parse_state(mediation_var)
def identify_mediation_first_stage_confounders(
graph: CausalGraph,
treatment_name: List[str],
outcome_name: List[str],
mediators_names: List[str],
backdoor_adjustment: BackdoorAdjustment,
):
# Create estimands dict as per the API for backdoor, but do not return it
estimands_dict = {}
backdoor_sets = identify_backdoor(graph, treatment_name, mediators_names, backdoor_adjustment)
estimands_dict, backdoor_variables_dict = build_backdoor_estimands_dict(
graph,
treatment_name,
mediators_names,
backdoor_sets,
estimands_dict,
)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = get_default_backdoor_set_id(graph, treatment_name, outcome_name, backdoor_variables_dict)
estimands_dict["backdoor"] = estimands_dict.get(str(default_backdoor_id), None)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(str(default_backdoor_id), None)
return backdoor_variables_dict
def identify_mediation_second_stage_confounders(
graph: CausalGraph,
treatment_name: List[str],
mediators_names: List[str],
outcome_name: List[str],
backdoor_adjustment: BackdoorAdjustment,
):
# Create estimands dict as per the API for backdoor, but do not return it
estimands_dict = {}
backdoor_sets = identify_backdoor(graph, mediators_names, outcome_name, backdoor_adjustment)
estimands_dict, backdoor_variables_dict = build_backdoor_estimands_dict(
graph,
mediators_names,
outcome_name,
backdoor_sets,
estimands_dict,
)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = get_default_backdoor_set_id(graph, treatment_name, outcome_name, backdoor_variables_dict)
estimands_dict["backdoor"] = estimands_dict.get(str(default_backdoor_id), None)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(str(default_backdoor_id), None)
return backdoor_variables_dict
def construct_backdoor_estimand(treatment_name: List[str], outcome_name: List[str], common_causes: List[str]):
# TODO: outputs string for now, but ideally should do symbolic
# expressions Mon 19 Feb 2018 04:54:17 PM DST
# TODO Better support for multivariate treatments
expr = None
outcome_name = outcome_name[0]
num_expr_str = outcome_name
if len(common_causes) > 0:
num_expr_str += "|" + ",".join(common_causes)
expr = "d(" + num_expr_str + ")/d" + ",".join(treatment_name)
sym_mu = sp.Symbol("mu")
sym_sigma = sp.Symbol("sigma", positive=True)
sym_outcome = spstats.Normal(num_expr_str, sym_mu, sym_sigma)
sym_treatment_symbols = [sp.Symbol(t) for t in treatment_name]
sym_treatment = sp.Array(sym_treatment_symbols)
sym_conditional_outcome = spstats.Expectation(sym_outcome)
sym_effect = sp.Derivative(sym_conditional_outcome, sym_treatment)
sym_assumptions = {
"Unconfoundedness": (
"If U\N{RIGHTWARDS ARROW}{{{0}}} and U\N{RIGHTWARDS ARROW}{1}" " then P({1}|{0},{2},U) = P({1}|{0},{2})"
).format(",".join(treatment_name), outcome_name, ",".join(common_causes))
}
estimand = {"estimand": sym_effect, "assumptions": sym_assumptions}
return estimand
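# --- Illustrative sketch (added for exposition; not part of the original module) ---
# construct_backdoor_estimand works purely on variable names; the names below are
# made-up assumptions. The returned dict pairs a sympy derivative-of-expectation
# expression with the unconfoundedness assumption rendered as text.
def _example_construct_backdoor_estimand():
    return construct_backdoor_estimand(["X"], ["Y"], ["W0", "W1"])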
def construct_iv_estimand(treatment_name: List[str], outcome_name: List[str], instrument_names: List[str]):
# TODO: support multivariate treatments better.
expr = None
outcome_name = outcome_name[0]
sym_outcome = spstats.Normal(outcome_name, 0, 1)
sym_treatment_symbols = [spstats.Normal(t, 0, 1) for t in treatment_name]
sym_treatment = sp.Array(sym_treatment_symbols)
sym_instrument_symbols = [sp.Symbol(inst) for inst in instrument_names]
sym_instrument = sp.Array(sym_instrument_symbols) # ",".join(instrument_names))
sym_outcome_derivative = sp.Derivative(sym_outcome, sym_instrument)
sym_treatment_derivative = sp.Derivative(sym_treatment, sym_instrument)
sym_effect = spstats.Expectation(sym_outcome_derivative / sym_treatment_derivative)
sym_assumptions = {
"As-if-random": (
"If U\N{RIGHTWARDS ARROW}\N{RIGHTWARDS ARROW}{0} then "
"\N{NOT SIGN}(U \N{RIGHTWARDS ARROW}\N{RIGHTWARDS ARROW}{{{1}}})"
).format(outcome_name, ",".join(instrument_names)),
"Exclusion": (
"If we remove {{{0}}}\N{RIGHTWARDS ARROW}{{{1}}}, then " "\N{NOT SIGN}({{{0}}}\N{RIGHTWARDS ARROW}{2})"
).format(",".join(instrument_names), ",".join(treatment_name), outcome_name),
}
estimand = {"estimand": sym_effect, "assumptions": sym_assumptions}
return estimand
def construct_frontdoor_estimand(
treatment_name: List[str], outcome_name: List[str], frontdoor_variables_names: List[str]
):
# TODO: support multivariate treatments better.
expr = None
outcome_name = outcome_name[0]
sym_outcome = spstats.Normal(outcome_name, 0, 1)
sym_treatment_symbols = [spstats.Normal(t, 0, 1) for t in treatment_name]
sym_treatment = sp.Array(sym_treatment_symbols)
sym_frontdoor_symbols = [sp.Symbol(inst) for inst in frontdoor_variables_names]
sym_frontdoor = sp.Array(sym_frontdoor_symbols) # ",".join(instrument_names))
sym_outcome_derivative = sp.Derivative(sym_outcome, sym_frontdoor)
sym_treatment_derivative = sp.Derivative(sym_frontdoor, sym_treatment)
sym_effect = spstats.Expectation(sym_treatment_derivative * sym_outcome_derivative)
sym_assumptions = {
"Full-mediation": ("{2} intercepts (blocks) all directed paths from {0} to {1}.").format(
",".join(treatment_name),
outcome_name,
",".join(frontdoor_variables_names),
),
"First-stage-unconfoundedness": (
"If U\N{RIGHTWARDS ARROW}{{{0}}} and U\N{RIGHTWARDS ARROW}{{{1}}}" " then P({1}|{0},U) = P({1}|{0})"
).format(",".join(treatment_name), ",".join(frontdoor_variables_names)),
"Second-stage-unconfoundedness": (
"If U\N{RIGHTWARDS ARROW}{{{2}}} and U\N{RIGHTWARDS ARROW}{1}" " then P({1}|{2}, {0}, U) = P({1}|{2}, {0})"
).format(
",".join(treatment_name),
outcome_name,
",".join(frontdoor_variables_names),
),
}
estimand = {"estimand": sym_effect, "assumptions": sym_assumptions}
return estimand
def construct_mediation_estimand(
estimand_type: EstimandType, treatment_name: List[str], outcome_name: List[str], mediators_names: List[str]
):
# TODO: support multivariate treatments better.
expr = None
if estimand_type in (
EstimandType.NONPARAMETRIC_NDE,
EstimandType.NONPARAMETRIC_NIE,
):
outcome_name = outcome_name[0]
sym_outcome = spstats.Normal(outcome_name, 0, 1)
sym_treatment_symbols = [spstats.Normal(t, 0, 1) for t in treatment_name]
sym_treatment = sp.Array(sym_treatment_symbols)
sym_mediators_symbols = [sp.Symbol(inst) for inst in mediators_names]
sym_mediators = sp.Array(sym_mediators_symbols)
sym_outcome_derivative = sp.Derivative(sym_outcome, sym_mediators)
sym_treatment_derivative = sp.Derivative(sym_mediators, sym_treatment)
# For direct effect
num_expr_str = outcome_name
if len(mediators_names) > 0:
num_expr_str += "|" + ",".join(mediators_names)
sym_mu = sp.Symbol("mu")
sym_sigma = sp.Symbol("sigma", positive=True)
sym_conditional_outcome = spstats.Normal(num_expr_str, sym_mu, sym_sigma)
sym_directeffect_derivative = sp.Derivative(sym_conditional_outcome, sym_treatment)
if estimand_type == EstimandType.NONPARAMETRIC_NIE:
sym_effect = spstats.Expectation(sym_treatment_derivative * sym_outcome_derivative)
elif estimand_type == EstimandType.NONPARAMETRIC_NDE:
sym_effect = spstats.Expectation(sym_directeffect_derivative)
sym_assumptions = {
"Mediation": (
"{2} intercepts (blocks) all directed paths from {0} to {1} except the path {{{0}}}\N{RIGHTWARDS ARROW}{{{1}}}."
).format(
",".join(treatment_name),
outcome_name,
",".join(mediators_names),
),
"First-stage-unconfoundedness": (
"If U\N{RIGHTWARDS ARROW}{{{0}}} and U\N{RIGHTWARDS ARROW}{{{1}}}" " then P({1}|{0},U) = P({1}|{0})"
).format(",".join(treatment_name), ",".join(mediators_names)),
"Second-stage-unconfoundedness": (
"If U\N{RIGHTWARDS ARROW}{{{2}}} and U\N{RIGHTWARDS ARROW}{1}"
" then P({1}|{2}, {0}, U) = P({1}|{2}, {0})"
).format(",".join(treatment_name), outcome_name, ",".join(mediators_names)),
}
else:
raise ValueError(
"Estimand type not supported. Supported estimand types are {0} or {1}.".format(
EstimandType.NONPARAMETRIC_NDE,
EstimandType.NONPARAMETRIC_NIE,
)
)
estimand = {"estimand": sym_effect, "assumptions": sym_assumptions}
return estimand
| andresmor-ms | 133e7b9a4ed32aae8ab5f39a01eb02b3a4d1c0ba | e1652ec3c6606b1bb2dfe91ef830e4b4b566712d | I was reading the code and I believe it always returns an `IdentifiedEstimand` object, I'm not 100% sure if there's a case in which it returns `None`. It does raise errors in many places, but does not return None | andresmor-ms | 301 |
py-why/dowhy | 672 | Functional api/refute estimate | * Refactor refuters into functions
* Rename functional_api notebook for clarity
* Add return types to identify_estimate
* Update `__init__.py` for imports
* Add joblib for bootstrap refuter
* Create `refute_estimate` function
* Add types for refuter parameters & return types | null | 2022-10-04 16:18:49+00:00 | 2022-10-07 04:30:22+00:00 | dowhy/causal_refuter.py | import logging
import random
import numpy as np
import scipy.stats as st
from dowhy.utils.api import parse_state
class CausalRefuter:
"""Base class for different refutation methods.
Subclasses implement specific refutation methods.
# todo: add docstring for common parameters here and remove from child refuter classes
"""
# Default value for the number of simulations to be conducted
DEFAULT_NUM_SIMULATIONS = 100
PROGRESS_BAR_COLOR = "green"
def __init__(self, data, identified_estimand, estimate, **kwargs):
self._data = data
self._target_estimand = identified_estimand
self._estimate = estimate
self._treatment_name = self._target_estimand.treatment_variable
self._outcome_name = self._target_estimand.outcome_variable
self._random_seed = None
# joblib params for parallel processing
self._n_jobs = kwargs.pop("n_jobs", None)
self._verbose = kwargs.pop("verbose", 0)
if "random_seed" in kwargs:
self._random_seed = kwargs["random_seed"]
np.random.seed(self._random_seed)
self.logger = logging.getLogger(__name__)
# Concatenate the confounders, instruments and effect modifiers
try:
self._variables_of_interest = (
self._target_estimand.get_backdoor_variables()
+ self._target_estimand.instrumental_variables
+ self._estimate.params["effect_modifiers"]
)
except AttributeError as attr_error:
self.logger.error(attr_error)
def choose_variables(self, required_variables):
"""
This method provides a way to choose the confounders whose values we wish to
modify in order to find their effect on the ability of the treatment to affect the outcome.
"""
invert = None
if required_variables is False:
self.logger.info(
"All variables required: Running bootstrap adding noise to confounders, instrumental variables and effect modifiers."
)
return None
elif required_variables is True:
self.logger.info(
"All variables required: Running bootstrap adding noise to confounders, instrumental variables and effect modifiers."
)
return self._variables_of_interest
elif type(required_variables) is int:
if len(self._variables_of_interest) < required_variables:
self.logger.error(
"Too many variables passed.\n The number of variables is: {}.\n The number of variables passed: {}".format(
len(self._variables_of_interest), required_variables
)
)
raise ValueError(
"The number of variables in the required_variables is greater than the number of confounders, instrumental variables and effect modifiers"
)
else:
# Shuffle the confounders
random.shuffle(self._variables_of_interest)
return self._variables_of_interest[:required_variables]
elif type(required_variables) is list:
# Check if all are select or deselect variables
if all(variable[0] == "-" for variable in required_variables):
invert = True
required_variables = [variable[1:] for variable in required_variables]
elif all(variable[0] != "-" for variable in required_variables):
invert = False
else:
self.logger.error("{} has both select and deselect variables".format(required_variables))
raise ValueError(
"It appears that there are some select and deselect variables. Note you can either select or deselect variables at a time, but not both"
)
# Check if all the required_variables belong to confounders, instrumental variables or effect modifiers
if set(required_variables) - set(self._variables_of_interest) != set([]):
self.logger.error(
"{} are not confounder, instrumental variable or effect modifier".format(
list(set(required_variables) - set(self._variables_of_interest))
)
)
raise ValueError(
"At least one of required_variables is not a valid variable name, or it is not a confounder, instrumental variable or effect modifier"
)
if invert is False:
return required_variables
elif invert is True:
return list(set(self._variables_of_interest) - set(required_variables))
else:
self.logger.error("Incorrect type: {}. Expected an int,list or bool".format(type(required_variables)))
raise TypeError("Expected int, list or bool. Got an unexpected datatype")
def test_significance(self, estimate, simulations, test_type="auto", significance_level=0.05):
"""Tests the statistical significance of the estimate obtained to the simulations produced by a refuter.
The basis behind using the sample statistics of the refuter when we are in fact testing the estimate,
is that we would ideally expect them to follow the same distribution.
For refutation tests (e.g., placebo refuters), consider the null distribution as a distribution of effect
estimates over multiple simulations with placebo treatment, and compute how likely the true estimate (e.g.,
zero for placebo test) is under the null. If the probability of true effect estimate is lower than the
p-value, then the estimator method fails the test.
For sensitivity analysis tests (e.g., bootstrap, subset or common cause refuters), the null distribution captures
the distribution of effect estimates under the "true" dataset (e.g., with an additional confounder or different
sampling), and we compute the probability of the obtained estimate under this distribution. If the probability is
lower than the p-value, then the estimator method fails the test.
Null Hypothesis- The estimate is a part of the distribution
Alternative Hypothesis- The estimate does not fall in the distribution.
:param 'estimate': CausalEstimate
The estimate obtained from the estimator for the original data.
:param 'simulations': np.array
An array containing the result of the refuter for the simulations
:param 'test_type': string, default 'auto'
The type of test the user wishes to perform.
:param 'significance_level': float, default 0.05
The significance level for the statistical test
:returns: significance_dict: Dict
A Dict containing the p_value and a boolean that indicates if the result is statistically significant
"""
# Initializing the p_value
p_value = 0
if test_type == "auto":
num_simulations = len(simulations)
if num_simulations >= 100: # Bootstrapping
self.logger.info(
"Making use of Bootstrap as we have more than 100 examples.\n \
Note: The greater the number of examples, the more accurate are the confidence estimates"
)
# Perform Bootstrap Significance Test with the original estimate and the set of refutations
p_value = self.perform_bootstrap_test(estimate, simulations)
else:
self.logger.warning(
"We assume a Normal Distribution as the sample has less than 100 examples.\n \
Note: The underlying distribution may not be Normal. We assume that it approaches normal with the increase in sample size."
)
# Perform Normal Tests of Significance with the original estimate and the set of refutations
p_value = self.perform_normal_distribution_test(estimate, simulations)
elif test_type == "bootstrap":
self.logger.info(
"Performing Bootstrap Test with {} samples\n \
Note: The greater the number of examples, the more accurate are the confidence estimates".format(
len(simulations)
)
)
# Perform Bootstrap Significance Test with the original estimate and the set of refutations
p_value = self.perform_bootstrap_test(estimate, simulations)
elif test_type == "normal_test":
self.logger.info(
"Performing Normal Test with {} samples\n \
Note: We assume that the underlying distribution is Normal.".format(
len(simulations)
)
)
# Perform Normal Tests of Significance with the original estimate and the set of refutations
p_value = self.perform_normal_distribution_test(estimate, simulations)
else:
raise NotImplementedError
significance_dict = {"p_value": p_value, "is_statistically_significant": p_value <= significance_level}
return significance_dict
def perform_bootstrap_test(self, estimate, simulations):
# Get the number of simulations
num_simulations = len(simulations)
# Sort the simulations
simulations.sort()
# Obtain the median value
median_refute_values = simulations[int(num_simulations / 2)]
# Performing a two sided test
if estimate.value > median_refute_values:
# np.searchsorted tells us the index if it were a part of the array
# We select side to be left as we want to find the first value that matches
estimate_index = np.searchsorted(simulations, estimate.value, side="left")
# We subtract from 1 as we are finding the value from the right tail
p_value = 1 - (estimate_index / num_simulations)
else:
# We take the side to be right as we want to find the last index that matches
estimate_index = np.searchsorted(simulations, estimate.value, side="right")
# We get the probability with respect to the left tail.
p_value = estimate_index / num_simulations
# return twice the determined quantile as this is a two sided test
return 2 * p_value
def perform_normal_distribution_test(self, estimate, simulations):
# Get the mean for the simulations
mean_refute_values = np.mean(simulations)
# Get the standard deviation for the simulations
std_dev_refute_values = np.std(simulations)
# Get the Z Score [(val - mean)/ std_dev ]
z_score = (estimate.value - mean_refute_values) / std_dev_refute_values
if z_score > 0: # Right Tail
p_value = 1 - st.norm.cdf(z_score)
else: # Left Tail
p_value = st.norm.cdf(z_score)
return p_value
def refute_estimate(self, show_progress_bar=False):
raise NotImplementedError
class CausalRefutation:
"""Class for storing the result of a refutation method."""
def __init__(self, estimated_effect, new_effect, refutation_type):
self.estimated_effect = estimated_effect
self.new_effect = new_effect
self.refutation_type = refutation_type
self.refutation_result = None
def add_significance_test_results(self, refutation_result):
self.refutation_result = refutation_result
def add_refuter(self, refuter_instance):
self.refuter = refuter_instance
def interpret(self, method_name=None, **kwargs):
"""Interpret the refutation results.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific refuter is used.
:returns: None
"""
if method_name is None:
method_name = self.refuter.interpret_method
method_name_arr = parse_state(method_name)
import dowhy.interpreters as interpreters
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
if self.refutation_result is None:
return "{0}\nEstimated effect:{1}\nNew effect:{2}\n".format(
self.refutation_type, self.estimated_effect, self.new_effect
)
else:
return "{0}\nEstimated effect:{1}\nNew effect:{2}\np value:{3}\n".format(
self.refutation_type, self.estimated_effect, self.new_effect, self.refutation_result["p_value"]
)
| import logging
import random
from enum import Enum
from typing import List, Union
import numpy as np
import scipy.stats as st
from dowhy.utils.api import parse_state
class SignificanceTestType(Enum):
AUTO = "auto"
BOOTSTRAP = "bootstrap"
NORMAL = "normal_test"
logger = logging.getLogger(__name__)
class CausalRefuter:
"""Base class for different refutation methods.
Subclasses implement specific refutation methods.
# todo: add docstring for common parameters here and remove from child refuter classes
This class is kept for backwards compatibility with CausalModel.
It will be deprecated in the future in favor of the refute_method_name() functions.
"""
# Default value for the number of simulations to be conducted
DEFAULT_NUM_SIMULATIONS = 100
PROGRESS_BAR_COLOR = "green"
def __init__(self, data, identified_estimand, estimate, **kwargs):
self._data = data
self._target_estimand = identified_estimand
self._estimate = estimate
self._treatment_name = self._target_estimand.treatment_variable
self._outcome_name = self._target_estimand.outcome_variable
self._random_seed = None
# joblib params for parallel processing
self._n_jobs = kwargs.pop("n_jobs", None)
self._verbose = kwargs.pop("verbose", 0)
if "random_seed" in kwargs:
self._random_seed = kwargs["random_seed"]
np.random.seed(self._random_seed)
# Concatenate the confounders, instruments and effect modifiers
try:
self._variables_of_interest = (
self._target_estimand.get_backdoor_variables()
+ self._target_estimand.instrumental_variables
+ self._estimate.params["effect_modifiers"]
)
except AttributeError as attr_error:
logger.error(attr_error)
def choose_variables(self, required_variables):
return choose_variables(required_variables, self._variables_of_interest)
def test_significance(self, estimate, simulations, test_type="auto", significance_level=0.05):
return test_significance(estimate, simulations, SignificanceTestType(test_type), significance_level)
def perform_bootstrap_test(self, estimate, simulations):
return perform_bootstrap_test(estimate, simulations)
def perform_normal_distribution_test(self, estimate, simulations):
return perform_normal_distribution_test(estimate, simulations)
def refute_estimate(self, show_progress_bar=False):
raise NotImplementedError
def choose_variables(required_variables: Union[bool, int, list], variables_of_interest: List):
"""
This method provides a way to choose the confounders whose values we wish to
modify in order to find their effect on the ability of the treatment to affect the outcome.
"""
invert = None
if required_variables is False:
logger.info(
"All variables required: Running bootstrap adding noise to confounders, instrumental variables and effect modifiers."
)
return None
elif required_variables is True:
logger.info(
"All variables required: Running bootstrap adding noise to confounders, instrumental variables and effect modifiers."
)
return variables_of_interest
elif type(required_variables) is int:
if len(variables_of_interest) < required_variables:
logger.error(
"Too many variables passed.\n The number of variables is: {}.\n The number of variables passed: {}".format(
len(variables_of_interest), required_variables
)
)
raise ValueError(
"The number of variables in the required_variables is greater than the number of confounders, instrumental variables and effect modifiers"
)
else:
# Randomly sample the required number of variables
return random.sample(variables_of_interest, required_variables)
elif type(required_variables) is list:
# Check if all are select or deselect variables
if all(variable[0] == "-" for variable in required_variables):
invert = True
required_variables = [variable[1:] for variable in required_variables]
elif all(variable[0] != "-" for variable in required_variables):
invert = False
else:
logger.error("{} has both select and deselect variables".format(required_variables))
raise ValueError(
"It appears that there are some select and deselect variables. Note you can either select or deselect variables at a time, but not both"
)
# Check if all the required_variables belong to confounders, instrumental variables or effect modifiers
if set(required_variables) - set(variables_of_interest) != set([]):
logger.error(
"{} are not confounder, instrumental variable or effect modifier".format(
list(set(required_variables) - set(variables_of_interest))
)
)
raise ValueError(
"At least one of required_variables is not a valid variable name, or it is not a confounder, instrumental variable or effect modifier"
)
if invert is False:
return required_variables
elif invert is True:
return list(set(variables_of_interest) - set(required_variables))
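# --- Illustrative sketch (added for exposition; not part of the original module) ---
# How the different required_variables forms accepted by choose_variables behave.
# The variable names below are made-up assumptions.
def _example_choose_variables():
    variables_of_interest = ["W0", "W1", "Z0"]
    everything = choose_variables(True, variables_of_interest)      # all variables of interest
    any_two = choose_variables(2, variables_of_interest)            # random subset of size 2
    only_w0 = choose_variables(["W0"], variables_of_interest)       # explicit selection
    all_but_w0 = choose_variables(["-W0"], variables_of_interest)   # "-" prefix deselects
    return everything, any_two, only_w0, all_but_w0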
def perform_bootstrap_test(estimate, simulations: List):
# Get the number of simulations
num_simulations = len(simulations)
# Sort the simulations
simulations.sort()
# Obtain the median value
median_refute_values = simulations[int(num_simulations / 2)]
# Performing a two sided test
if estimate.value > median_refute_values:
# np.searchsorted tells us the index if it were a part of the array
# We select side to be left as we want to find the first value that matches
estimate_index = np.searchsorted(simulations, estimate.value, side="left")
# We subtract from 1 as we are finding the value from the right tail
p_value = 1 - (estimate_index / num_simulations)
else:
# We take the side to be right as we want to find the last index that matches
estimate_index = np.searchsorted(simulations, estimate.value, side="right")
# We get the probability with respect to the left tail.
p_value = estimate_index / num_simulations
# return twice the determined quantile as this is a two sided test
return 2 * p_value
def perform_normal_distribution_test(estimate, simulations: List):
# Get the mean for the simulations
mean_refute_values = np.mean(simulations)
# Get the standard deviation for the simulations
std_dev_refute_values = np.std(simulations)
# Get the Z Score [(val - mean)/ std_dev ]
z_score = (estimate.value - mean_refute_values) / std_dev_refute_values
if z_score > 0: # Right Tail
p_value = 1 - st.norm.cdf(z_score)
else: # Left Tail
p_value = st.norm.cdf(z_score)
return p_value
def test_significance(
estimate,
simulations: List,
test_type: SignificanceTestType = SignificanceTestType.AUTO,
significance_level: float = 0.05,
):
"""Tests the statistical significance of the estimate obtained to the simulations produced by a refuter.
The basis behind using the sample statistics of the refuter when we are in fact testing the estimate,
is that we would ideally expect them to follow the same distribution.
For refutation tests (e.g., placebo refuters), consider the null distribution as a distribution of effect
estimates over multiple simulations with placebo treatment, and compute how likely the true estimate (e.g.,
zero for placebo test) is under the null. If the probability of true effect estimate is lower than the
p-value, then the estimator method fails the test.
For sensitivity analysis tests (e.g., bootstrap, subset or common cause refuters), the null distribution captures
the distribution of effect estimates under the "true" dataset (e.g., with an additional confounder or different
sampling), and we compute the probability of the obtained estimate under this distribution. If the probability is
lower than the p-value, then the estimator method fails the test.
Null Hypothesis- The estimate is a part of the distribution
Alternative Hypothesis- The estimate does not fall in the distribution.
:param 'estimate': CausalEstimate
The estimate obtained from the estimator for the original data.
:param 'simulations': np.array
An array containing the result of the refuter for the simulations
:param 'test_type': SignificanceTestType, default SignificanceTestType.AUTO
The type of test the user wishes to perform.
:param 'significance_level': float, default 0.05
The significance level for the statistical test
:returns: significance_dict: Dict
A Dict containing the p_value and a boolean that indicates if the result is statistically significant
"""
# Initializing the p_value
p_value = 0
if test_type == SignificanceTestType.AUTO:
num_simulations = len(simulations)
if num_simulations >= 100: # Bootstrapping
logger.info(
"Making use of Bootstrap as we have more than 100 examples.\n \
Note: The greater the number of examples, the more accurate are the confidence estimates"
)
# Perform Bootstrap Significance Test with the original estimate and the set of refutations
p_value = perform_bootstrap_test(estimate, simulations)
else:
logger.warning(
"We assume a Normal Distribution as the sample has less than 100 examples.\n \
Note: The underlying distribution may not be Normal. We assume that it approaches normal with the increase in sample size."
)
# Perform Normal Tests of Significance with the original estimate and the set of refutations
p_value = perform_normal_distribution_test(estimate, simulations)
elif test_type == SignificanceTestType.BOOTSTRAP:
logger.info(
"Performing Bootstrap Test with {} samples\n \
Note: The greater the number of examples, the more accurate are the confidence estimates".format(
len(simulations)
)
)
# Perform Bootstrap Significance Test with the original estimate and the set of refutations
p_value = perform_bootstrap_test(estimate, simulations)
elif test_type == SignificanceTestType.NORMAL:
logger.info(
"Performing Normal Test with {} samples\n \
Note: We assume that the underlying distribution is Normal.".format(
len(simulations)
)
)
# Perform Normal Tests of Significance with the original estimate and the set of refutations
p_value = perform_normal_distribution_test(estimate, simulations)
significance_dict = {"p_value": p_value, "is_statistically_significant": p_value <= significance_level}
return significance_dict
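# --- Illustrative sketch (added for exposition; not part of the original module) ---
# Calling test_significance on simulated refuter estimates. Only a .value attribute is
# needed from the estimate object here; the numbers below are made-up assumptions.
def _example_test_significance():
    import numpy as np

    class _StubEstimate:
        value = 0.1

    simulated_estimates = list(np.random.normal(loc=0.0, scale=1.0, size=200))
    return test_significance(_StubEstimate(), simulated_estimates, SignificanceTestType.BOOTSTRAP)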
class CausalRefutation:
"""Class for storing the result of a refutation method."""
def __init__(self, estimated_effect, new_effect, refutation_type):
self.estimated_effect = estimated_effect
self.new_effect = new_effect
self.refutation_type = refutation_type
self.refutation_result = None
def add_significance_test_results(self, refutation_result):
self.refutation_result = refutation_result
def add_refuter(self, refuter_instance):
self.refuter = refuter_instance
def interpret(self, method_name=None, **kwargs):
"""Interpret the refutation results.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific refuter is used.
:returns: None
"""
if method_name is None:
method_name = self.refuter.interpret_method
method_name_arr = parse_state(method_name)
import dowhy.interpreters as interpreters
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
if self.refutation_result is None:
return "{0}\nEstimated effect:{1}\nNew effect:{2}\n".format(
self.refutation_type, self.estimated_effect, self.new_effect
)
else:
return "{0}\nEstimated effect:{1}\nNew effect:{2}\np value:{3}\n".format(
self.refutation_type, self.estimated_effect, self.new_effect, self.refutation_result["p_value"]
)
| andresmor-ms | 133e7b9a4ed32aae8ab5f39a01eb02b3a4d1c0ba | e1652ec3c6606b1bb2dfe91ef830e4b4b566712d | typo in boostrap | amit-sharma | 302 |
py-why/dowhy | 672 | Functional api/refute estimate | * Refactor refuters into functions
* Rename functional_api notebook for clarity
* Add return types to identify_estimate
* Update `__init__.py` for imports
* Add joblib for bootstrap refuter
* Create `refute_estimate` function
* Add types for refuter parameters & return types | null | 2022-10-04 16:18:49+00:00 | 2022-10-07 04:30:22+00:00 | dowhy/causal_refuter.py | import logging
import random
import numpy as np
import scipy.stats as st
from dowhy.utils.api import parse_state
class CausalRefuter:
"""Base class for different refutation methods.
Subclasses implement specific refutation methods.
# todo: add docstring for common parameters here and remove from child refuter classes
"""
# Default value for the number of simulations to be conducted
DEFAULT_NUM_SIMULATIONS = 100
PROGRESS_BAR_COLOR = "green"
def __init__(self, data, identified_estimand, estimate, **kwargs):
self._data = data
self._target_estimand = identified_estimand
self._estimate = estimate
self._treatment_name = self._target_estimand.treatment_variable
self._outcome_name = self._target_estimand.outcome_variable
self._random_seed = None
# joblib params for parallel processing
self._n_jobs = kwargs.pop("n_jobs", None)
self._verbose = kwargs.pop("verbose", 0)
if "random_seed" in kwargs:
self._random_seed = kwargs["random_seed"]
np.random.seed(self._random_seed)
self.logger = logging.getLogger(__name__)
# Concatenate the confounders, instruments and effect modifiers
try:
self._variables_of_interest = (
self._target_estimand.get_backdoor_variables()
+ self._target_estimand.instrumental_variables
+ self._estimate.params["effect_modifiers"]
)
except AttributeError as attr_error:
self.logger.error(attr_error)
def choose_variables(self, required_variables):
"""
This method provides a way to choose the confounders whose values we wish to
modify in order to find their effect on the ability of the treatment to affect the outcome.
"""
invert = None
if required_variables is False:
self.logger.info(
"All variables required: Running bootstrap adding noise to confounders, instrumental variables and effect modifiers."
)
return None
elif required_variables is True:
self.logger.info(
"All variables required: Running bootstrap adding noise to confounders, instrumental variables and effect modifiers."
)
return self._variables_of_interest
elif type(required_variables) is int:
if len(self._variables_of_interest) < required_variables:
self.logger.error(
"Too many variables passed.\n The number of variables is: {}.\n The number of variables passed: {}".format(
len(self._variables_of_interest), required_variables
)
)
raise ValueError(
"The number of variables in the required_variables is greater than the number of confounders, instrumental variables and effect modifiers"
)
else:
# Shuffle the confounders
random.shuffle(self._variables_of_interest)
return self._variables_of_interest[:required_variables]
elif type(required_variables) is list:
# Check if all are select or deselect variables
if all(variable[0] == "-" for variable in required_variables):
invert = True
required_variables = [variable[1:] for variable in required_variables]
elif all(variable[0] != "-" for variable in required_variables):
invert = False
else:
self.logger.error("{} has both select and delect variables".format(required_variables))
raise ValueError(
"It appears that there are some select and deselect variables. Note you can either select or delect variables at a time, but not both"
)
# Check if all the required_variables belong to confounders, instrumental variables or effect
if set(required_variables) - set(self._variables_of_interest) != set([]):
self.logger.error(
"{} are not confounder, instrumental variable or effect modifier".format(
list(set(required_variables) - set(self._variables_of_interest))
)
)
raise ValueError(
"At least one of required_variables is not a valid variable name, or it is not a confounder, instrumental variable or effect modifier"
)
if invert is False:
return required_variables
elif invert is True:
return list(set(self._variables_of_interest) - set(required_variables))
else:
self.logger.error("Incorrect type: {}. Expected an int,list or bool".format(type(required_variables)))
raise TypeError("Expected int, list or bool. Got an unexpected datatype")
def test_significance(self, estimate, simulations, test_type="auto", significance_level=0.05):
"""Tests the statistical significance of the estimate obtained to the simulations produced by a refuter.
The basis behind using the sample statistics of the refuter when we are in fact testing the estimate,
        is due to the fact that we would ideally expect them to follow the same distribution.
For refutation tests (e.g., placebo refuters), consider the null distribution as a distribution of effect
estimates over multiple simulations with placebo treatment, and compute how likely the true estimate (e.g.,
zero for placebo test) is under the null. If the probability of true effect estimate is lower than the
p-value, then estimator method fails the test.
For sensitivity analysis tests (e.g., bootstrap, subset or common cause refuters), the null distribution captures
the distribution of effect estimates under the "true" dataset (e.g., with an additional confounder or different
sampling), and we compute the probability of the obtained estimate under this distribution. If the probability is
lower than the p-value, then the estimator method fails the test.
Null Hypothesis- The estimate is a part of the distribution
Alternative Hypothesis- The estimate does not fall in the distribution.
:param 'estimate': CausalEstimate
The estimate obtained from the estimator for the original data.
:param 'simulations': np.array
An array containing the result of the refuter for the simulations
:param 'test_type': string, default 'auto'
The type of test the user wishes to perform.
:param 'significance_level': float, default 0.05
The significance level for the statistical test
:returns: significance_dict: Dict
A Dict containing the p_value and a boolean that indicates if the result is statistically significant
"""
# Initializing the p_value
p_value = 0
if test_type == "auto":
num_simulations = len(simulations)
if num_simulations >= 100: # Bootstrapping
self.logger.info(
"Making use of Bootstrap as we have more than 100 examples.\n \
Note: The greater the number of examples, the more accurate are the confidence estimates"
)
# Perform Bootstrap Significance Test with the original estimate and the set of refutations
p_value = self.perform_bootstrap_test(estimate, simulations)
else:
self.logger.warning(
"We assume a Normal Distribution as the sample has less than 100 examples.\n \
Note: The underlying distribution may not be Normal. We assume that it approaches normal with the increase in sample size."
)
# Perform Normal Tests of Significance with the original estimate and the set of refutations
p_value = self.perform_normal_distribution_test(estimate, simulations)
elif test_type == "bootstrap":
self.logger.info(
"Performing Bootstrap Test with {} samples\n \
Note: The greater the number of examples, the more accurate are the confidence estimates".format(
len(simulations)
)
)
# Perform Bootstrap Significance Test with the original estimate and the set of refutations
p_value = self.perform_bootstrap_test(estimate, simulations)
elif test_type == "normal_test":
self.logger.info(
"Performing Normal Test with {} samples\n \
Note: We assume that the underlying distribution is Normal.".format(
len(simulations)
)
)
# Perform Normal Tests of Significance with the original estimate and the set of refutations
p_value = self.perform_normal_distribution_test(estimate, simulations)
else:
raise NotImplementedError
significance_dict = {"p_value": p_value, "is_statistically_significant": p_value <= significance_level}
return significance_dict
def perform_bootstrap_test(self, estimate, simulations):
# Get the number of simulations
num_simulations = len(simulations)
# Sort the simulations
simulations.sort()
# Obtain the median value
median_refute_values = simulations[int(num_simulations / 2)]
# Performing a two sided test
if estimate.value > median_refute_values:
# np.searchsorted tells us the index if it were a part of the array
# We select side to be left as we want to find the first value that matches
estimate_index = np.searchsorted(simulations, estimate.value, side="left")
            # We subtract from 1 as we are finding the value from the right tail
p_value = 1 - (estimate_index / num_simulations)
else:
# We take the side to be right as we want to find the last index that matches
estimate_index = np.searchsorted(simulations, estimate.value, side="right")
# We get the probability with respect to the left tail.
p_value = estimate_index / num_simulations
# return twice the determined quantile as this is a two sided test
return 2 * p_value
def perform_normal_distribution_test(self, estimate, simulations):
# Get the mean for the simulations
mean_refute_values = np.mean(simulations)
# Get the standard deviation for the simulations
std_dev_refute_values = np.std(simulations)
# Get the Z Score [(val - mean)/ std_dev ]
z_score = (estimate.value - mean_refute_values) / std_dev_refute_values
if z_score > 0: # Right Tail
p_value = 1 - st.norm.cdf(z_score)
else: # Left Tail
p_value = st.norm.cdf(z_score)
return p_value
def refute_estimate(self, show_progress_bar=False):
raise NotImplementedError
class CausalRefutation:
"""Class for storing the result of a refutation method."""
def __init__(self, estimated_effect, new_effect, refutation_type):
self.estimated_effect = estimated_effect
self.new_effect = new_effect
self.refutation_type = refutation_type
self.refutation_result = None
def add_significance_test_results(self, refutation_result):
self.refutation_result = refutation_result
def add_refuter(self, refuter_instance):
self.refuter = refuter_instance
def interpret(self, method_name=None, **kwargs):
"""Interpret the refutation results.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific refuter is used.
:returns: None
"""
if method_name is None:
method_name = self.refuter.interpret_method
method_name_arr = parse_state(method_name)
import dowhy.interpreters as interpreters
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
if self.refutation_result is None:
return "{0}\nEstimated effect:{1}\nNew effect:{2}\n".format(
self.refutation_type, self.estimated_effect, self.new_effect
)
else:
return "{0}\nEstimated effect:{1}\nNew effect:{2}\np value:{3}\n".format(
self.refutation_type, self.estimated_effect, self.new_effect, self.refutation_result["p_value"]
)
| import logging
import random
from enum import Enum
from typing import List, Union
import numpy as np
import scipy.stats as st
from dowhy.utils.api import parse_state
class SignificanceTestType(Enum):
AUTO = "auto"
BOOTSTRAP = "bootstrap"
NORMAL = "normal_test"
logger = logging.getLogger(__name__)
class CausalRefuter:
"""Base class for different refutation methods.
    Subclasses implement specific refutation methods.
# todo: add docstring for common parameters here and remove from child refuter classes
    This class is kept for backwards compatibility with CausalModel.
    It will be deprecated in the future in favor of the refute_method_name() functions.
"""
# Default value for the number of simulations to be conducted
DEFAULT_NUM_SIMULATIONS = 100
PROGRESS_BAR_COLOR = "green"
def __init__(self, data, identified_estimand, estimate, **kwargs):
self._data = data
self._target_estimand = identified_estimand
self._estimate = estimate
self._treatment_name = self._target_estimand.treatment_variable
self._outcome_name = self._target_estimand.outcome_variable
self._random_seed = None
# joblib params for parallel processing
self._n_jobs = kwargs.pop("n_jobs", None)
self._verbose = kwargs.pop("verbose", 0)
if "random_seed" in kwargs:
self._random_seed = kwargs["random_seed"]
np.random.seed(self._random_seed)
# Concatenate the confounders, instruments and effect modifiers
try:
self._variables_of_interest = (
self._target_estimand.get_backdoor_variables()
+ self._target_estimand.instrumental_variables
+ self._estimate.params["effect_modifiers"]
)
except AttributeError as attr_error:
logger.error(attr_error)
def choose_variables(self, required_variables):
return choose_variables(required_variables, self._variables_of_interest)
def test_significance(self, estimate, simulations, test_type="auto", significance_level=0.05):
return test_significance(estimate, simulations, SignificanceTestType(test_type), significance_level)
def perform_bootstrap_test(self, estimate, simulations):
return perform_bootstrap_test(estimate, simulations)
def perform_normal_distribution_test(self, estimate, simulations):
return perform_normal_distribution_test(estimate, simulations)
def refute_estimate(self, show_progress_bar=False):
raise NotImplementedError
def choose_variables(required_variables: Union[bool, int, list], variables_of_interest: List):
"""
This method provides a way to choose the confounders whose values we wish to
modify for finding its effect on the ability of the treatment to affect the outcome.
"""
invert = None
if required_variables is False:
logger.info(
"All variables required: Running bootstrap adding noise to confounders, instrumental variables and effect modifiers."
)
return None
elif required_variables is True:
logger.info(
"All variables required: Running bootstrap adding noise to confounders, instrumental variables and effect modifiers."
)
return variables_of_interest
elif type(required_variables) is int:
if len(variables_of_interest) < required_variables:
logger.error(
"Too many variables passed.\n The number of variables is: {}.\n The number of variables passed: {}".format(
len(variables_of_interest), required_variables
)
)
raise ValueError(
"The number of variables in the required_variables is greater than the number of confounders, instrumental variables and effect modifiers"
)
else:
# Shuffle the confounders
return random.sample(variables_of_interest, required_variables)
elif type(required_variables) is list:
# Check if all are select or deselect variables
if all(variable[0] == "-" for variable in required_variables):
invert = True
required_variables = [variable[1:] for variable in required_variables]
elif all(variable[0] != "-" for variable in required_variables):
invert = False
else:
logger.error("{} has both select and delect variables".format(required_variables))
raise ValueError(
"It appears that there are some select and deselect variables. Note you can either select or delect variables at a time, but not both"
)
# Check if all the required_variables belong to confounders, instrumental variables or effect
if set(required_variables) - set(variables_of_interest) != set([]):
logger.error(
"{} are not confounder, instrumental variable or effect modifier".format(
list(set(required_variables) - set(variables_of_interest))
)
)
raise ValueError(
"At least one of required_variables is not a valid variable name, or it is not a confounder, instrumental variable or effect modifier"
)
if invert is False:
return required_variables
elif invert is True:
return list(set(variables_of_interest) - set(required_variables))
def perform_bootstrap_test(estimate, simulations: List):
# Get the number of simulations
num_simulations = len(simulations)
# Sort the simulations
simulations.sort()
# Obtain the median value
median_refute_values = simulations[int(num_simulations / 2)]
# Performing a two sided test
if estimate.value > median_refute_values:
# np.searchsorted tells us the index if it were a part of the array
# We select side to be left as we want to find the first value that matches
estimate_index = np.searchsorted(simulations, estimate.value, side="left")
        # We subtract from 1 as we are finding the value from the right tail
p_value = 1 - (estimate_index / num_simulations)
else:
# We take the side to be right as we want to find the last index that matches
estimate_index = np.searchsorted(simulations, estimate.value, side="right")
# We get the probability with respect to the left tail.
p_value = estimate_index / num_simulations
# return twice the determined quantile as this is a two sided test
return 2 * p_value
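# Worked example of the two-sided computation above (illustrative numbers only):
# with 100 sorted simulated effects and an original estimate that exceeds 90 of them,
# np.searchsorted returns index 90, the right-tail quantile is 1 - 90/100 = 0.1,
# and the reported two-sided p-value is 2 * 0.1 = 0.2.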
def perform_normal_distribution_test(estimate, simulations: List):
# Get the mean for the simulations
mean_refute_values = np.mean(simulations)
# Get the standard deviation for the simulations
std_dev_refute_values = np.std(simulations)
# Get the Z Score [(val - mean)/ std_dev ]
z_score = (estimate.value - mean_refute_values) / std_dev_refute_values
if z_score > 0: # Right Tail
p_value = 1 - st.norm.cdf(z_score)
else: # Left Tail
p_value = st.norm.cdf(z_score)
return p_value
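# Worked example for the normal approximation above (illustrative numbers only):
# if the simulated effects have mean 0.0 and standard deviation 0.1, an original
# estimate of 0.2 gives z = (0.2 - 0.0) / 0.1 = 2.0, and the right-tail p-value is
# 1 - st.norm.cdf(2.0), roughly 0.023.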
def test_significance(
estimate,
simulations: List,
test_type: SignificanceTestType = SignificanceTestType.AUTO,
    significance_level: float = 0.05,
):
"""Tests the statistical significance of the estimate obtained to the simulations produced by a refuter.
The basis behind using the sample statistics of the refuter when we are in fact testing the estimate,
    is due to the fact that we would ideally expect them to follow the same distribution.
For refutation tests (e.g., placebo refuters), consider the null distribution as a distribution of effect
estimates over multiple simulations with placebo treatment, and compute how likely the true estimate (e.g.,
zero for placebo test) is under the null. If the probability of true effect estimate is lower than the
p-value, then estimator method fails the test.
For sensitivity analysis tests (e.g., bootstrap, subset or common cause refuters), the null distribution captures
the distribution of effect estimates under the "true" dataset (e.g., with an additional confounder or different
sampling), and we compute the probability of the obtained estimate under this distribution. If the probability is
lower than the p-value, then the estimator method fails the test.
Null Hypothesis- The estimate is a part of the distribution
Alternative Hypothesis- The estimate does not fall in the distribution.
:param 'estimate': CausalEstimate
The estimate obtained from the estimator for the original data.
:param 'simulations': np.array
An array containing the result of the refuter for the simulations
:param 'test_type': string, default 'auto'
The type of test the user wishes to perform.
:param 'significance_level': float, default 0.05
The significance level for the statistical test
:returns: significance_dict: Dict
A Dict containing the p_value and a boolean that indicates if the result is statistically significant
"""
# Initializing the p_value
p_value = 0
if test_type == SignificanceTestType.AUTO:
num_simulations = len(simulations)
if num_simulations >= 100: # Bootstrapping
logger.info(
"Making use of Bootstrap as we have more than 100 examples.\n \
Note: The greater the number of examples, the more accurate are the confidence estimates"
)
# Perform Bootstrap Significance Test with the original estimate and the set of refutations
p_value = perform_bootstrap_test(estimate, simulations)
else:
logger.warning(
"We assume a Normal Distribution as the sample has less than 100 examples.\n \
Note: The underlying distribution may not be Normal. We assume that it approaches normal with the increase in sample size."
)
# Perform Normal Tests of Significance with the original estimate and the set of refutations
p_value = perform_normal_distribution_test(estimate, simulations)
elif test_type == SignificanceTestType.BOOTSTRAP:
logger.info(
"Performing Bootstrap Test with {} samples\n \
Note: The greater the number of examples, the more accurate are the confidence estimates".format(
len(simulations)
)
)
# Perform Bootstrap Significance Test with the original estimate and the set of refutations
p_value = perform_bootstrap_test(estimate, simulations)
elif test_type == SignificanceTestType.NORMAL:
logger.info(
"Performing Normal Test with {} samples\n \
Note: We assume that the underlying distribution is Normal.".format(
len(simulations)
)
)
# Perform Normal Tests of Significance with the original estimate and the set of refutations
p_value = perform_normal_distribution_test(estimate, simulations)
significance_dict = {"p_value": p_value, "is_statistically_significant": p_value <= significance_level}
return significance_dict
class CausalRefutation:
"""Class for storing the result of a refutation method."""
def __init__(self, estimated_effect, new_effect, refutation_type):
self.estimated_effect = estimated_effect
self.new_effect = new_effect
self.refutation_type = refutation_type
self.refutation_result = None
def add_significance_test_results(self, refutation_result):
self.refutation_result = refutation_result
def add_refuter(self, refuter_instance):
self.refuter = refuter_instance
def interpret(self, method_name=None, **kwargs):
"""Interpret the refutation results.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific refuter is used.
:returns: None
"""
if method_name is None:
method_name = self.refuter.interpret_method
method_name_arr = parse_state(method_name)
import dowhy.interpreters as interpreters
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
if self.refutation_result is None:
return "{0}\nEstimated effect:{1}\nNew effect:{2}\n".format(
self.refutation_type, self.estimated_effect, self.new_effect
)
else:
return "{0}\nEstimated effect:{1}\nNew effect:{2}\np value:{3}\n".format(
self.refutation_type, self.estimated_effect, self.new_effect, self.refutation_result["p_value"]
)
| andresmor-ms | 133e7b9a4ed32aae8ab5f39a01eb02b3a4d1c0ba | e1652ec3c6606b1bb2dfe91ef830e4b4b566712d | I assume that CausalRefuter class is kept only for backwards compatibility? In that case, it will be good to add that to the docstring. | amit-sharma | 303 |
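Since the review comment above asks how the legacy CausalRefuter class relates to the refactored module-level helpers, a small usage sketch may help. It assumes the signatures shown in the file content of this row; the FakeEstimate stand-in and the simulated effect values are invented for illustration and are not part of the PR.

from collections import namedtuple

import numpy as np

from dowhy.causal_refuter import SignificanceTestType, choose_variables, test_significance

# Stand-in for a CausalEstimate: test_significance only reads a `.value` attribute.
FakeEstimate = namedtuple("FakeEstimate", ["value"])

# Simulated refuter outputs (e.g., placebo-treatment effects); values are invented.
simulations = list(np.random.normal(loc=0.0, scale=0.1, size=200))

result = test_significance(
    FakeEstimate(value=0.02),
    simulations,
    test_type=SignificanceTestType.BOOTSTRAP,
    significance_level=0.05,
)
print(result["p_value"], result["is_statistically_significant"])

# The variable-selection helper now also works on a plain list of candidate names.
chosen = choose_variables(required_variables=2, variables_of_interest=["W0", "W1", "Z0", "X0"])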
py-why/dowhy | 672 | Functional api/refute estimate | * Refactor refuters into functions
* Rename functional_api notebook for clarity
* Add return types to identify_estimate
* Update `__init__.py` for imports
* Add joblib for bootstrap refuter
* Create `refute_estimate` function
* Add types for refuter parameters & return types | null | 2022-10-04 16:18:49+00:00 | 2022-10-07 04:30:22+00:00 | dowhy/causal_refuter.py | import logging
import random
import numpy as np
import scipy.stats as st
from dowhy.utils.api import parse_state
class CausalRefuter:
"""Base class for different refutation methods.
    Subclasses implement specific refutation methods.
# todo: add docstring for common parameters here and remove from child refuter classes
"""
# Default value for the number of simulations to be conducted
DEFAULT_NUM_SIMULATIONS = 100
PROGRESS_BAR_COLOR = "green"
def __init__(self, data, identified_estimand, estimate, **kwargs):
self._data = data
self._target_estimand = identified_estimand
self._estimate = estimate
self._treatment_name = self._target_estimand.treatment_variable
self._outcome_name = self._target_estimand.outcome_variable
self._random_seed = None
# joblib params for parallel processing
self._n_jobs = kwargs.pop("n_jobs", None)
self._verbose = kwargs.pop("verbose", 0)
if "random_seed" in kwargs:
self._random_seed = kwargs["random_seed"]
np.random.seed(self._random_seed)
self.logger = logging.getLogger(__name__)
# Concatenate the confounders, instruments and effect modifiers
try:
self._variables_of_interest = (
self._target_estimand.get_backdoor_variables()
+ self._target_estimand.instrumental_variables
+ self._estimate.params["effect_modifiers"]
)
except AttributeError as attr_error:
self.logger.error(attr_error)
def choose_variables(self, required_variables):
"""
This method provides a way to choose the confounders whose values we wish to
modify for finding its effect on the ability of the treatment to affect the outcome.
"""
invert = None
if required_variables is False:
self.logger.info(
"All variables required: Running bootstrap adding noise to confounders, instrumental variables and effect modifiers."
)
return None
elif required_variables is True:
self.logger.info(
"All variables required: Running bootstrap adding noise to confounders, instrumental variables and effect modifiers."
)
return self._variables_of_interest
elif type(required_variables) is int:
if len(self._variables_of_interest) < required_variables:
self.logger.error(
"Too many variables passed.\n The number of variables is: {}.\n The number of variables passed: {}".format(
len(self._variables_of_interest), required_variables
)
)
raise ValueError(
"The number of variables in the required_variables is greater than the number of confounders, instrumental variables and effect modifiers"
)
else:
# Shuffle the confounders
random.shuffle(self._variables_of_interest)
return self._variables_of_interest[:required_variables]
elif type(required_variables) is list:
# Check if all are select or deselect variables
if all(variable[0] == "-" for variable in required_variables):
invert = True
required_variables = [variable[1:] for variable in required_variables]
elif all(variable[0] != "-" for variable in required_variables):
invert = False
else:
self.logger.error("{} has both select and delect variables".format(required_variables))
raise ValueError(
"It appears that there are some select and deselect variables. Note you can either select or delect variables at a time, but not both"
)
# Check if all the required_variables belong to confounders, instrumental variables or effect
if set(required_variables) - set(self._variables_of_interest) != set([]):
self.logger.error(
"{} are not confounder, instrumental variable or effect modifier".format(
list(set(required_variables) - set(self._variables_of_interest))
)
)
raise ValueError(
"At least one of required_variables is not a valid variable name, or it is not a confounder, instrumental variable or effect modifier"
)
if invert is False:
return required_variables
elif invert is True:
return list(set(self._variables_of_interest) - set(required_variables))
else:
self.logger.error("Incorrect type: {}. Expected an int,list or bool".format(type(required_variables)))
raise TypeError("Expected int, list or bool. Got an unexpected datatype")
def test_significance(self, estimate, simulations, test_type="auto", significance_level=0.05):
"""Tests the statistical significance of the estimate obtained to the simulations produced by a refuter.
The basis behind using the sample statistics of the refuter when we are in fact testing the estimate,
        is due to the fact that we would ideally expect them to follow the same distribution.
For refutation tests (e.g., placebo refuters), consider the null distribution as a distribution of effect
estimates over multiple simulations with placebo treatment, and compute how likely the true estimate (e.g.,
zero for placebo test) is under the null. If the probability of true effect estimate is lower than the
p-value, then estimator method fails the test.
For sensitivity analysis tests (e.g., bootstrap, subset or common cause refuters), the null distribution captures
the distribution of effect estimates under the "true" dataset (e.g., with an additional confounder or different
sampling), and we compute the probability of the obtained estimate under this distribution. If the probability is
lower than the p-value, then the estimator method fails the test.
Null Hypothesis- The estimate is a part of the distribution
Alternative Hypothesis- The estimate does not fall in the distribution.
:param 'estimate': CausalEstimate
The estimate obtained from the estimator for the original data.
:param 'simulations': np.array
An array containing the result of the refuter for the simulations
:param 'test_type': string, default 'auto'
The type of test the user wishes to perform.
:param 'significance_level': float, default 0.05
The significance level for the statistical test
:returns: significance_dict: Dict
A Dict containing the p_value and a boolean that indicates if the result is statistically significant
"""
# Initializing the p_value
p_value = 0
if test_type == "auto":
num_simulations = len(simulations)
if num_simulations >= 100: # Bootstrapping
self.logger.info(
"Making use of Bootstrap as we have more than 100 examples.\n \
Note: The greater the number of examples, the more accurate are the confidence estimates"
)
# Perform Bootstrap Significance Test with the original estimate and the set of refutations
p_value = self.perform_bootstrap_test(estimate, simulations)
else:
self.logger.warning(
"We assume a Normal Distribution as the sample has less than 100 examples.\n \
Note: The underlying distribution may not be Normal. We assume that it approaches normal with the increase in sample size."
)
# Perform Normal Tests of Significance with the original estimate and the set of refutations
p_value = self.perform_normal_distribution_test(estimate, simulations)
elif test_type == "bootstrap":
self.logger.info(
"Performing Bootstrap Test with {} samples\n \
Note: The greater the number of examples, the more accurate are the confidence estimates".format(
len(simulations)
)
)
# Perform Bootstrap Significance Test with the original estimate and the set of refutations
p_value = self.perform_bootstrap_test(estimate, simulations)
elif test_type == "normal_test":
self.logger.info(
"Performing Normal Test with {} samples\n \
Note: We assume that the underlying distribution is Normal.".format(
len(simulations)
)
)
# Perform Normal Tests of Significance with the original estimate and the set of refutations
p_value = self.perform_normal_distribution_test(estimate, simulations)
else:
raise NotImplementedError
significance_dict = {"p_value": p_value, "is_statistically_significant": p_value <= significance_level}
return significance_dict
def perform_bootstrap_test(self, estimate, simulations):
# Get the number of simulations
num_simulations = len(simulations)
# Sort the simulations
simulations.sort()
# Obtain the median value
median_refute_values = simulations[int(num_simulations / 2)]
# Performing a two sided test
if estimate.value > median_refute_values:
# np.searchsorted tells us the index if it were a part of the array
# We select side to be left as we want to find the first value that matches
estimate_index = np.searchsorted(simulations, estimate.value, side="left")
            # We subtract from 1 as we are finding the value from the right tail
p_value = 1 - (estimate_index / num_simulations)
else:
# We take the side to be right as we want to find the last index that matches
estimate_index = np.searchsorted(simulations, estimate.value, side="right")
# We get the probability with respect to the left tail.
p_value = estimate_index / num_simulations
# return twice the determined quantile as this is a two sided test
return 2 * p_value
def perform_normal_distribution_test(self, estimate, simulations):
# Get the mean for the simulations
mean_refute_values = np.mean(simulations)
# Get the standard deviation for the simulations
std_dev_refute_values = np.std(simulations)
# Get the Z Score [(val - mean)/ std_dev ]
z_score = (estimate.value - mean_refute_values) / std_dev_refute_values
if z_score > 0: # Right Tail
p_value = 1 - st.norm.cdf(z_score)
else: # Left Tail
p_value = st.norm.cdf(z_score)
return p_value
def refute_estimate(self, show_progress_bar=False):
raise NotImplementedError
class CausalRefutation:
"""Class for storing the result of a refutation method."""
def __init__(self, estimated_effect, new_effect, refutation_type):
self.estimated_effect = estimated_effect
self.new_effect = new_effect
self.refutation_type = refutation_type
self.refutation_result = None
def add_significance_test_results(self, refutation_result):
self.refutation_result = refutation_result
def add_refuter(self, refuter_instance):
self.refuter = refuter_instance
def interpret(self, method_name=None, **kwargs):
"""Interpret the refutation results.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific refuter is used.
:returns: None
"""
if method_name is None:
method_name = self.refuter.interpret_method
method_name_arr = parse_state(method_name)
import dowhy.interpreters as interpreters
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
if self.refutation_result is None:
return "{0}\nEstimated effect:{1}\nNew effect:{2}\n".format(
self.refutation_type, self.estimated_effect, self.new_effect
)
else:
return "{0}\nEstimated effect:{1}\nNew effect:{2}\np value:{3}\n".format(
self.refutation_type, self.estimated_effect, self.new_effect, self.refutation_result["p_value"]
)
| import logging
import random
from enum import Enum
from typing import List, Union
import numpy as np
import scipy.stats as st
from dowhy.utils.api import parse_state
class SignificanceTestType(Enum):
AUTO = "auto"
BOOTSTRAP = "bootstrap"
NORMAL = "normal_test"
logger = logging.getLogger(__name__)
class CausalRefuter:
"""Base class for different refutation methods.
    Subclasses implement specific refutation methods.
# todo: add docstring for common parameters here and remove from child refuter classes
    This class is kept for backwards compatibility with CausalModel.
    It will be deprecated in the future in favor of the refute_method_name() functions.
"""
# Default value for the number of simulations to be conducted
DEFAULT_NUM_SIMULATIONS = 100
PROGRESS_BAR_COLOR = "green"
def __init__(self, data, identified_estimand, estimate, **kwargs):
self._data = data
self._target_estimand = identified_estimand
self._estimate = estimate
self._treatment_name = self._target_estimand.treatment_variable
self._outcome_name = self._target_estimand.outcome_variable
self._random_seed = None
# joblib params for parallel processing
self._n_jobs = kwargs.pop("n_jobs", None)
self._verbose = kwargs.pop("verbose", 0)
if "random_seed" in kwargs:
self._random_seed = kwargs["random_seed"]
np.random.seed(self._random_seed)
# Concatenate the confounders, instruments and effect modifiers
try:
self._variables_of_interest = (
self._target_estimand.get_backdoor_variables()
+ self._target_estimand.instrumental_variables
+ self._estimate.params["effect_modifiers"]
)
except AttributeError as attr_error:
logger.error(attr_error)
def choose_variables(self, required_variables):
return choose_variables(required_variables, self._variables_of_interest)
def test_significance(self, estimate, simulations, test_type="auto", significance_level=0.05):
return test_significance(estimate, simulations, SignificanceTestType(test_type), significance_level)
def perform_bootstrap_test(self, estimate, simulations):
return perform_bootstrap_test(estimate, simulations)
def perform_normal_distribution_test(self, estimate, simulations):
return perform_normal_distribution_test(estimate, simulations)
def refute_estimate(self, show_progress_bar=False):
raise NotImplementedError
def choose_variables(required_variables: Union[bool, int, list], variables_of_interest: List):
"""
This method provides a way to choose the confounders whose values we wish to
modify for finding its effect on the ability of the treatment to affect the outcome.
"""
invert = None
if required_variables is False:
logger.info(
"All variables required: Running bootstrap adding noise to confounders, instrumental variables and effect modifiers."
)
return None
elif required_variables is True:
logger.info(
"All variables required: Running bootstrap adding noise to confounders, instrumental variables and effect modifiers."
)
return variables_of_interest
elif type(required_variables) is int:
if len(variables_of_interest) < required_variables:
logger.error(
"Too many variables passed.\n The number of variables is: {}.\n The number of variables passed: {}".format(
len(variables_of_interest), required_variables
)
)
raise ValueError(
"The number of variables in the required_variables is greater than the number of confounders, instrumental variables and effect modifiers"
)
else:
# Shuffle the confounders
return random.sample(variables_of_interest, required_variables)
elif type(required_variables) is list:
# Check if all are select or deselect variables
if all(variable[0] == "-" for variable in required_variables):
invert = True
required_variables = [variable[1:] for variable in required_variables]
elif all(variable[0] != "-" for variable in required_variables):
invert = False
else:
logger.error("{} has both select and delect variables".format(required_variables))
raise ValueError(
"It appears that there are some select and deselect variables. Note you can either select or delect variables at a time, but not both"
)
# Check if all the required_variables belong to confounders, instrumental variables or effect
if set(required_variables) - set(variables_of_interest) != set([]):
logger.error(
"{} are not confounder, instrumental variable or effect modifier".format(
list(set(required_variables) - set(variables_of_interest))
)
)
raise ValueError(
"At least one of required_variables is not a valid variable name, or it is not a confounder, instrumental variable or effect modifier"
)
if invert is False:
return required_variables
elif invert is True:
return list(set(variables_of_interest) - set(required_variables))
def perform_bootstrap_test(estimate, simulations: List):
# Get the number of simulations
num_simulations = len(simulations)
# Sort the simulations
simulations.sort()
# Obtain the median value
median_refute_values = simulations[int(num_simulations / 2)]
# Performing a two sided test
if estimate.value > median_refute_values:
# np.searchsorted tells us the index if it were a part of the array
# We select side to be left as we want to find the first value that matches
estimate_index = np.searchsorted(simulations, estimate.value, side="left")
        # We subtract from 1 as we are finding the value from the right tail
p_value = 1 - (estimate_index / num_simulations)
else:
# We take the side to be right as we want to find the last index that matches
estimate_index = np.searchsorted(simulations, estimate.value, side="right")
# We get the probability with respect to the left tail.
p_value = estimate_index / num_simulations
# return twice the determined quantile as this is a two sided test
return 2 * p_value
def perform_normal_distribution_test(estimate, simulations: List):
# Get the mean for the simulations
mean_refute_values = np.mean(simulations)
# Get the standard deviation for the simulations
std_dev_refute_values = np.std(simulations)
# Get the Z Score [(val - mean)/ std_dev ]
z_score = (estimate.value - mean_refute_values) / std_dev_refute_values
if z_score > 0: # Right Tail
p_value = 1 - st.norm.cdf(z_score)
else: # Left Tail
p_value = st.norm.cdf(z_score)
return p_value
def test_significance(
estimate,
simulations: List,
test_type: SignificanceTestType = SignificanceTestType.AUTO,
    significance_level: float = 0.05,
):
"""Tests the statistical significance of the estimate obtained to the simulations produced by a refuter.
The basis behind using the sample statistics of the refuter when we are in fact testing the estimate,
    is due to the fact that we would ideally expect them to follow the same distribution.
For refutation tests (e.g., placebo refuters), consider the null distribution as a distribution of effect
estimates over multiple simulations with placebo treatment, and compute how likely the true estimate (e.g.,
zero for placebo test) is under the null. If the probability of true effect estimate is lower than the
p-value, then estimator method fails the test.
For sensitivity analysis tests (e.g., bootstrap, subset or common cause refuters), the null distribution captures
the distribution of effect estimates under the "true" dataset (e.g., with an additional confounder or different
sampling), and we compute the probability of the obtained estimate under this distribution. If the probability is
lower than the p-value, then the estimator method fails the test.
Null Hypothesis- The estimate is a part of the distribution
Alternative Hypothesis- The estimate does not fall in the distribution.
:param 'estimate': CausalEstimate
The estimate obtained from the estimator for the original data.
:param 'simulations': np.array
An array containing the result of the refuter for the simulations
:param 'test_type': string, default 'auto'
The type of test the user wishes to perform.
:param 'significance_level': float, default 0.05
The significance level for the statistical test
:returns: significance_dict: Dict
A Dict containing the p_value and a boolean that indicates if the result is statistically significant
"""
# Initializing the p_value
p_value = 0
if test_type == SignificanceTestType.AUTO:
num_simulations = len(simulations)
if num_simulations >= 100: # Bootstrapping
logger.info(
"Making use of Bootstrap as we have more than 100 examples.\n \
Note: The greater the number of examples, the more accurate are the confidence estimates"
)
# Perform Bootstrap Significance Test with the original estimate and the set of refutations
p_value = perform_bootstrap_test(estimate, simulations)
else:
logger.warning(
"We assume a Normal Distribution as the sample has less than 100 examples.\n \
Note: The underlying distribution may not be Normal. We assume that it approaches normal with the increase in sample size."
)
# Perform Normal Tests of Significance with the original estimate and the set of refutations
p_value = perform_normal_distribution_test(estimate, simulations)
elif test_type == SignificanceTestType.BOOTSTRAP:
logger.info(
"Performing Bootstrap Test with {} samples\n \
Note: The greater the number of examples, the more accurate are the confidence estimates".format(
len(simulations)
)
)
# Perform Bootstrap Significance Test with the original estimate and the set of refutations
p_value = perform_bootstrap_test(estimate, simulations)
elif test_type == SignificanceTestType.NORMAL:
logger.info(
"Performing Normal Test with {} samples\n \
Note: We assume that the underlying distribution is Normal.".format(
len(simulations)
)
)
# Perform Normal Tests of Significance with the original estimate and the set of refutations
p_value = perform_normal_distribution_test(estimate, simulations)
significance_dict = {"p_value": p_value, "is_statistically_significant": p_value <= significance_level}
return significance_dict
class CausalRefutation:
"""Class for storing the result of a refutation method."""
def __init__(self, estimated_effect, new_effect, refutation_type):
self.estimated_effect = estimated_effect
self.new_effect = new_effect
self.refutation_type = refutation_type
self.refutation_result = None
def add_significance_test_results(self, refutation_result):
self.refutation_result = refutation_result
def add_refuter(self, refuter_instance):
self.refuter = refuter_instance
def interpret(self, method_name=None, **kwargs):
"""Interpret the refutation results.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific refuter is used.
:returns: None
"""
if method_name is None:
method_name = self.refuter.interpret_method
method_name_arr = parse_state(method_name)
import dowhy.interpreters as interpreters
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
if self.refutation_result is None:
return "{0}\nEstimated effect:{1}\nNew effect:{2}\n".format(
self.refutation_type, self.estimated_effect, self.new_effect
)
else:
return "{0}\nEstimated effect:{1}\nNew effect:{2}\np value:{3}\n".format(
self.refutation_type, self.estimated_effect, self.new_effect, self.refutation_result["p_value"]
)
| andresmor-ms | 133e7b9a4ed32aae8ab5f39a01eb02b3a4d1c0ba | e1652ec3c6606b1bb2dfe91ef830e4b4b566712d | Yep, that's correct, I'll add it in the next commit | andresmor-ms | 304 |
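One behavior that the diff above preserves when moving choose_variables out of the class is the select/deselect convention for list inputs. A minimal sketch with invented variable names; mixing the two conventions in one list raises a ValueError.

from dowhy.causal_refuter import choose_variables

candidates = ["W0", "W1", "Z0", "X0"]

print(choose_variables(["W0", "Z0"], candidates))    # keep only W0 and Z0
print(choose_variables(["-W0", "-Z0"], candidates))  # keep everything except W0 and Z0
print(choose_variables(True, candidates))            # all candidate variables
print(choose_variables(False, candidates))           # returns None
print(choose_variables(2, candidates))               # a random sample of two candidates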
py-why/dowhy | 672 | Functional api/refute estimate | * Refactor refuters into functions
* Rename functional_api notebook for clarity
* Add return types to identify_estimate
* Update `__init__.py` for imports
* Add joblib for bootstrap refuter
* Create `refute_estimate` function
* Add types for refuter parameters & return types | null | 2022-10-04 16:18:49+00:00 | 2022-10-07 04:30:22+00:00 | dowhy/causal_refuter.py | import logging
import random
import numpy as np
import scipy.stats as st
from dowhy.utils.api import parse_state
class CausalRefuter:
"""Base class for different refutation methods.
    Subclasses implement specific refutation methods.
# todo: add docstring for common parameters here and remove from child refuter classes
"""
# Default value for the number of simulations to be conducted
DEFAULT_NUM_SIMULATIONS = 100
PROGRESS_BAR_COLOR = "green"
def __init__(self, data, identified_estimand, estimate, **kwargs):
self._data = data
self._target_estimand = identified_estimand
self._estimate = estimate
self._treatment_name = self._target_estimand.treatment_variable
self._outcome_name = self._target_estimand.outcome_variable
self._random_seed = None
# joblib params for parallel processing
self._n_jobs = kwargs.pop("n_jobs", None)
self._verbose = kwargs.pop("verbose", 0)
if "random_seed" in kwargs:
self._random_seed = kwargs["random_seed"]
np.random.seed(self._random_seed)
self.logger = logging.getLogger(__name__)
# Concatenate the confounders, instruments and effect modifiers
try:
self._variables_of_interest = (
self._target_estimand.get_backdoor_variables()
+ self._target_estimand.instrumental_variables
+ self._estimate.params["effect_modifiers"]
)
except AttributeError as attr_error:
self.logger.error(attr_error)
def choose_variables(self, required_variables):
"""
This method provides a way to choose the confounders whose values we wish to
modify for finding its effect on the ability of the treatment to affect the outcome.
"""
invert = None
if required_variables is False:
self.logger.info(
"All variables required: Running bootstrap adding noise to confounders, instrumental variables and effect modifiers."
)
return None
elif required_variables is True:
self.logger.info(
"All variables required: Running bootstrap adding noise to confounders, instrumental variables and effect modifiers."
)
return self._variables_of_interest
elif type(required_variables) is int:
if len(self._variables_of_interest) < required_variables:
self.logger.error(
"Too many variables passed.\n The number of variables is: {}.\n The number of variables passed: {}".format(
len(self._variables_of_interest), required_variables
)
)
raise ValueError(
"The number of variables in the required_variables is greater than the number of confounders, instrumental variables and effect modifiers"
)
else:
# Shuffle the confounders
random.shuffle(self._variables_of_interest)
return self._variables_of_interest[:required_variables]
elif type(required_variables) is list:
# Check if all are select or deselect variables
if all(variable[0] == "-" for variable in required_variables):
invert = True
required_variables = [variable[1:] for variable in required_variables]
elif all(variable[0] != "-" for variable in required_variables):
invert = False
else:
self.logger.error("{} has both select and delect variables".format(required_variables))
raise ValueError(
"It appears that there are some select and deselect variables. Note you can either select or delect variables at a time, but not both"
)
# Check if all the required_variables belong to confounders, instrumental variables or effect
if set(required_variables) - set(self._variables_of_interest) != set([]):
self.logger.error(
"{} are not confounder, instrumental variable or effect modifier".format(
list(set(required_variables) - set(self._variables_of_interest))
)
)
raise ValueError(
"At least one of required_variables is not a valid variable name, or it is not a confounder, instrumental variable or effect modifier"
)
if invert is False:
return required_variables
elif invert is True:
return list(set(self._variables_of_interest) - set(required_variables))
else:
self.logger.error("Incorrect type: {}. Expected an int,list or bool".format(type(required_variables)))
raise TypeError("Expected int, list or bool. Got an unexpected datatype")
def test_significance(self, estimate, simulations, test_type="auto", significance_level=0.05):
"""Tests the statistical significance of the estimate obtained to the simulations produced by a refuter.
The basis behind using the sample statistics of the refuter when we are in fact testing the estimate,
        is due to the fact that we would ideally expect them to follow the same distribution.
For refutation tests (e.g., placebo refuters), consider the null distribution as a distribution of effect
estimates over multiple simulations with placebo treatment, and compute how likely the true estimate (e.g.,
zero for placebo test) is under the null. If the probability of true effect estimate is lower than the
p-value, then estimator method fails the test.
For sensitivity analysis tests (e.g., bootstrap, subset or common cause refuters), the null distribution captures
the distribution of effect estimates under the "true" dataset (e.g., with an additional confounder or different
sampling), and we compute the probability of the obtained estimate under this distribution. If the probability is
lower than the p-value, then the estimator method fails the test.
Null Hypothesis- The estimate is a part of the distribution
Alternative Hypothesis- The estimate does not fall in the distribution.
:param 'estimate': CausalEstimate
The estimate obtained from the estimator for the original data.
:param 'simulations': np.array
An array containing the result of the refuter for the simulations
:param 'test_type': string, default 'auto'
The type of test the user wishes to perform.
:param 'significance_level': float, default 0.05
The significance level for the statistical test
:returns: significance_dict: Dict
A Dict containing the p_value and a boolean that indicates if the result is statistically significant
"""
# Initializing the p_value
p_value = 0
if test_type == "auto":
num_simulations = len(simulations)
if num_simulations >= 100: # Bootstrapping
self.logger.info(
"Making use of Bootstrap as we have more than 100 examples.\n \
Note: The greater the number of examples, the more accurate are the confidence estimates"
)
# Perform Bootstrap Significance Test with the original estimate and the set of refutations
p_value = self.perform_bootstrap_test(estimate, simulations)
else:
self.logger.warning(
"We assume a Normal Distribution as the sample has less than 100 examples.\n \
Note: The underlying distribution may not be Normal. We assume that it approaches normal with the increase in sample size."
)
# Perform Normal Tests of Significance with the original estimate and the set of refutations
p_value = self.perform_normal_distribution_test(estimate, simulations)
elif test_type == "bootstrap":
self.logger.info(
"Performing Bootstrap Test with {} samples\n \
Note: The greater the number of examples, the more accurate are the confidence estimates".format(
len(simulations)
)
)
# Perform Bootstrap Significance Test with the original estimate and the set of refutations
p_value = self.perform_bootstrap_test(estimate, simulations)
elif test_type == "normal_test":
self.logger.info(
"Performing Normal Test with {} samples\n \
Note: We assume that the underlying distribution is Normal.".format(
len(simulations)
)
)
# Perform Normal Tests of Significance with the original estimate and the set of refutations
p_value = self.perform_normal_distribution_test(estimate, simulations)
else:
raise NotImplementedError
significance_dict = {"p_value": p_value, "is_statistically_significant": p_value <= significance_level}
return significance_dict
def perform_bootstrap_test(self, estimate, simulations):
# Get the number of simulations
num_simulations = len(simulations)
# Sort the simulations
simulations.sort()
# Obtain the median value
median_refute_values = simulations[int(num_simulations / 2)]
# Performing a two sided test
if estimate.value > median_refute_values:
# np.searchsorted tells us the index if it were a part of the array
# We select side to be left as we want to find the first value that matches
estimate_index = np.searchsorted(simulations, estimate.value, side="left")
            # We subtract from 1 as we are finding the value from the right tail
p_value = 1 - (estimate_index / num_simulations)
else:
# We take the side to be right as we want to find the last index that matches
estimate_index = np.searchsorted(simulations, estimate.value, side="right")
# We get the probability with respect to the left tail.
p_value = estimate_index / num_simulations
# return twice the determined quantile as this is a two sided test
return 2 * p_value
def perform_normal_distribution_test(self, estimate, simulations):
# Get the mean for the simulations
mean_refute_values = np.mean(simulations)
# Get the standard deviation for the simulations
std_dev_refute_values = np.std(simulations)
# Get the Z Score [(val - mean)/ std_dev ]
z_score = (estimate.value - mean_refute_values) / std_dev_refute_values
if z_score > 0: # Right Tail
p_value = 1 - st.norm.cdf(z_score)
else: # Left Tail
p_value = st.norm.cdf(z_score)
return p_value
def refute_estimate(self, show_progress_bar=False):
raise NotImplementedError
class CausalRefutation:
"""Class for storing the result of a refutation method."""
def __init__(self, estimated_effect, new_effect, refutation_type):
self.estimated_effect = estimated_effect
self.new_effect = new_effect
self.refutation_type = refutation_type
self.refutation_result = None
def add_significance_test_results(self, refutation_result):
self.refutation_result = refutation_result
def add_refuter(self, refuter_instance):
self.refuter = refuter_instance
def interpret(self, method_name=None, **kwargs):
"""Interpret the refutation results.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific refuter is used.
:returns: None
"""
if method_name is None:
method_name = self.refuter.interpret_method
method_name_arr = parse_state(method_name)
import dowhy.interpreters as interpreters
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
if self.refutation_result is None:
return "{0}\nEstimated effect:{1}\nNew effect:{2}\n".format(
self.refutation_type, self.estimated_effect, self.new_effect
)
else:
return "{0}\nEstimated effect:{1}\nNew effect:{2}\np value:{3}\n".format(
self.refutation_type, self.estimated_effect, self.new_effect, self.refutation_result["p_value"]
)
| import logging
import random
from enum import Enum
from typing import List, Union
import numpy as np
import scipy.stats as st
from dowhy.utils.api import parse_state
class SignificanceTestType(Enum):
AUTO = "auto"
BOOTSTRAP = "bootstrap"
NORMAL = "normal_test"
logger = logging.getLogger(__name__)
class CausalRefuter:
"""Base class for different refutation methods.
    Subclasses implement specific refutation methods.
# todo: add docstring for common parameters here and remove from child refuter classes
    This class is kept for backwards compatibility with CausalModel.
    It will be deprecated in the future in favor of the refute_method_name() functions.
"""
# Default value for the number of simulations to be conducted
DEFAULT_NUM_SIMULATIONS = 100
PROGRESS_BAR_COLOR = "green"
def __init__(self, data, identified_estimand, estimate, **kwargs):
self._data = data
self._target_estimand = identified_estimand
self._estimate = estimate
self._treatment_name = self._target_estimand.treatment_variable
self._outcome_name = self._target_estimand.outcome_variable
self._random_seed = None
# joblib params for parallel processing
self._n_jobs = kwargs.pop("n_jobs", None)
self._verbose = kwargs.pop("verbose", 0)
if "random_seed" in kwargs:
self._random_seed = kwargs["random_seed"]
np.random.seed(self._random_seed)
# Concatenate the confounders, instruments and effect modifiers
try:
self._variables_of_interest = (
self._target_estimand.get_backdoor_variables()
+ self._target_estimand.instrumental_variables
+ self._estimate.params["effect_modifiers"]
)
except AttributeError as attr_error:
logger.error(attr_error)
def choose_variables(self, required_variables):
return choose_variables(required_variables, self._variables_of_interest)
def test_significance(self, estimate, simulations, test_type="auto", significance_level=0.05):
return test_significance(estimate, simulations, SignificanceTestType(test_type), significance_level)
def perform_bootstrap_test(self, estimate, simulations):
return perform_bootstrap_test(estimate, simulations)
def perform_normal_distribution_test(self, estimate, simulations):
return perform_normal_distribution_test(estimate, simulations)
def refute_estimate(self, show_progress_bar=False):
raise NotImplementedError
def choose_variables(required_variables: Union[bool, int, list], variables_of_interest: List):
"""
This method provides a way to choose the confounders whose values we wish to
modify for finding its effect on the ability of the treatment to affect the outcome.
"""
invert = None
if required_variables is False:
logger.info(
"All variables required: Running bootstrap adding noise to confounders, instrumental variables and effect modifiers."
)
return None
elif required_variables is True:
logger.info(
"All variables required: Running bootstrap adding noise to confounders, instrumental variables and effect modifiers."
)
return variables_of_interest
elif type(required_variables) is int:
if len(variables_of_interest) < required_variables:
logger.error(
"Too many variables passed.\n The number of variables is: {}.\n The number of variables passed: {}".format(
len(variables_of_interest), required_variables
)
)
raise ValueError(
"The number of variables in the required_variables is greater than the number of confounders, instrumental variables and effect modifiers"
)
else:
# Shuffle the confounders
return random.sample(variables_of_interest, required_variables)
elif type(required_variables) is list:
# Check if all are select or deselect variables
if all(variable[0] == "-" for variable in required_variables):
invert = True
required_variables = [variable[1:] for variable in required_variables]
elif all(variable[0] != "-" for variable in required_variables):
invert = False
else:
logger.error("{} has both select and delect variables".format(required_variables))
raise ValueError(
"It appears that there are some select and deselect variables. Note you can either select or delect variables at a time, but not both"
)
# Check if all the required_variables belong to confounders, instrumental variables or effect
if set(required_variables) - set(variables_of_interest) != set([]):
logger.error(
"{} are not confounder, instrumental variable or effect modifier".format(
list(set(required_variables) - set(variables_of_interest))
)
)
raise ValueError(
"At least one of required_variables is not a valid variable name, or it is not a confounder, instrumental variable or effect modifier"
)
if invert is False:
return required_variables
elif invert is True:
return list(set(variables_of_interest) - set(required_variables))
def perform_bootstrap_test(estimate, simulations: List):
# Get the number of simulations
num_simulations = len(simulations)
# Sort the simulations
simulations.sort()
# Obtain the median value
median_refute_values = simulations[int(num_simulations / 2)]
# Performing a two sided test
if estimate.value > median_refute_values:
# np.searchsorted tells us the index if it were a part of the array
# We select side to be left as we want to find the first value that matches
estimate_index = np.searchsorted(simulations, estimate.value, side="left")
        # We subtract from 1 as we are finding the value from the right tail
p_value = 1 - (estimate_index / num_simulations)
else:
# We take the side to be right as we want to find the last index that matches
estimate_index = np.searchsorted(simulations, estimate.value, side="right")
# We get the probability with respect to the left tail.
p_value = estimate_index / num_simulations
# return twice the determined quantile as this is a two sided test
return 2 * p_value
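# Worked example (hypothetical numbers): with 100 sorted simulated estimates and an original
# estimate larger than 97 of them, np.searchsorted(..., side="left") returns 97, the right-tail
# quantile is 1 - 97/100 = 0.03, and the reported two-sided p-value is 2 * 0.03 = 0.06.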
def perform_normal_distribution_test(estimate, simulations: List):
# Get the mean for the simulations
mean_refute_values = np.mean(simulations)
# Get the standard deviation for the simulations
std_dev_refute_values = np.std(simulations)
# Get the Z Score [(val - mean)/ std_dev ]
z_score = (estimate.value - mean_refute_values) / std_dev_refute_values
if z_score > 0: # Right Tail
p_value = 1 - st.norm.cdf(z_score)
else: # Left Tail
p_value = st.norm.cdf(z_score)
return p_value
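# Worked example (hypothetical numbers): if the simulations have mean 0.5 and standard deviation 0.1
# and the original estimate is 0.7, the z-score is (0.7 - 0.5) / 0.1 = 2.0, giving a right-tail
# p-value of 1 - Phi(2.0), roughly 0.023.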
def test_significance(
estimate,
simulations: List,
test_type: SignificanceTestType = SignificanceTestType.AUTO,
    significance_level: float = 0.05,
):
"""Tests the statistical significance of the estimate obtained to the simulations produced by a refuter.
The basis behind using the sample statistics of the refuter when we are in fact testing the estimate,
is due to the fact that, we would ideally expect them to follow the same distribition.
For refutation tests (e.g., placebo refuters), consider the null distribution as a distribution of effect
estimates over multiple simulations with placebo treatment, and compute how likely the true estimate (e.g.,
zero for placebo test) is under the null. If the probability of true effect estimate is lower than the
p-value, then estimator method fails the test.
For sensitivity analysis tests (e.g., bootstrap, subset or common cause refuters), the null distribution captures
the distribution of effect estimates under the "true" dataset (e.g., with an additional confounder or different
sampling), and we compute the probability of the obtained estimate under this distribution. If the probability is
lower than the p-value, then the estimator method fails the test.
Null Hypothesis- The estimate is a part of the distribution
Alternative Hypothesis- The estimate does not fall in the distribution.
:param 'estimate': CausalEstimate
The estimate obtained from the estimator for the original data.
:param 'simulations': np.array
An array containing the result of the refuter for the simulations
:param 'test_type': string, default 'auto'
The type of test the user wishes to perform.
:param 'significance_level': float, default 0.05
The significance level for the statistical test
:returns: significance_dict: Dict
A Dict containing the p_value and a boolean that indicates if the result is statistically significant
"""
# Initializing the p_value
p_value = 0
if test_type == SignificanceTestType.AUTO:
num_simulations = len(simulations)
if num_simulations >= 100: # Bootstrapping
logger.info(
"Making use of Bootstrap as we have more than 100 examples.\n \
Note: The greater the number of examples, the more accurate are the confidence estimates"
)
# Perform Bootstrap Significance Test with the original estimate and the set of refutations
p_value = perform_bootstrap_test(estimate, simulations)
else:
logger.warning(
"We assume a Normal Distribution as the sample has less than 100 examples.\n \
Note: The underlying distribution may not be Normal. We assume that it approaches normal with the increase in sample size."
)
# Perform Normal Tests of Significance with the original estimate and the set of refutations
p_value = perform_normal_distribution_test(estimate, simulations)
elif test_type == SignificanceTestType.BOOTSTRAP:
logger.info(
"Performing Bootstrap Test with {} samples\n \
Note: The greater the number of examples, the more accurate are the confidence estimates".format(
len(simulations)
)
)
# Perform Bootstrap Significance Test with the original estimate and the set of refutations
p_value = perform_bootstrap_test(estimate, simulations)
elif test_type == SignificanceTestType.NORMAL:
logger.info(
"Performing Normal Test with {} samples\n \
Note: We assume that the underlying distribution is Normal.".format(
len(simulations)
)
)
# Perform Normal Tests of Significance with the original estimate and the set of refutations
p_value = perform_normal_distribution_test(estimate, simulations)
significance_dict = {"p_value": p_value, "is_statistically_significant": p_value <= significance_level}
return significance_dict
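# Illustrative usage sketch (the estimate object and the list of simulated estimates are assumed to
# come from a prior refuter run; names and values are placeholders):
#
#     result = test_significance(estimate, placebo_estimates,
#                                SignificanceTestType.BOOTSTRAP, significance_level=0.05)
#     if result["is_statistically_significant"]:
#         ...  # the original estimate is unlikely under the refuter's null distribution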
class CausalRefutation:
"""Class for storing the result of a refutation method."""
def __init__(self, estimated_effect, new_effect, refutation_type):
self.estimated_effect = estimated_effect
self.new_effect = new_effect
self.refutation_type = refutation_type
self.refutation_result = None
def add_significance_test_results(self, refutation_result):
self.refutation_result = refutation_result
def add_refuter(self, refuter_instance):
self.refuter = refuter_instance
def interpret(self, method_name=None, **kwargs):
"""Interpret the refutation results.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific refuter is used.
:returns: None
"""
if method_name is None:
method_name = self.refuter.interpret_method
method_name_arr = parse_state(method_name)
import dowhy.interpreters as interpreters
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
if self.refutation_result is None:
return "{0}\nEstimated effect:{1}\nNew effect:{2}\n".format(
self.refutation_type, self.estimated_effect, self.new_effect
)
else:
return "{0}\nEstimated effect:{1}\nNew effect:{2}\np value:{3}\n".format(
self.refutation_type, self.estimated_effect, self.new_effect, self.refutation_result["p_value"]
)
| andresmor-ms | 133e7b9a4ed32aae8ab5f39a01eb02b3a4d1c0ba | e1652ec3c6606b1bb2dfe91ef830e4b4b566712d | found more instances of this typo, fixed them all | andresmor-ms | 305 |
py-why/dowhy | 672 | Functional api/refute estimate | * Refactor refuters into functions
* Rename functional_api notebook for clarity
* Add return types to identify_estimate
* Update `__init__.py` for imports
* Add joblib for bootstrap refuter
* Create `refute_estimate` function
* Add types for refuter parameters & return types | null | 2022-10-04 16:18:49+00:00 | 2022-10-07 04:30:22+00:00 | dowhy/causal_refuters/add_unobserved_common_cause.py | import copy
import logging
import math
import numpy as np
import pandas as pd
import scipy.stats
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from tqdm.auto import tqdm
import dowhy.causal_estimators.econml
from dowhy.causal_estimator import CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_estimators.regression_estimator import RegressionEstimator
from dowhy.causal_refuter import CausalRefutation, CausalRefuter
from dowhy.causal_refuters.evalue_sensitivity_analyzer import EValueSensitivityAnalyzer
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer
from dowhy.causal_refuters.non_parametric_sensitivity_analyzer import NonParametricSensitivityAnalyzer
from dowhy.causal_refuters.partial_linear_sensitivity_analyzer import PartialLinearSensitivityAnalyzer
class AddUnobservedCommonCause(CausalRefuter):
"""Add an unobserved confounder for refutation.
    AddUnobservedCommonCause class supports four methods:
    1) Simulation of an unobserved confounder
    2) Linear partial R2 : Sensitivity Analysis for linear models.
    3) Non-Parametric partial R2 based : Sensitivity Analysis for non-parametric models.
    4) E-value : Sensitivity Analysis based on the E-value of the estimate.
Supports additional parameters that can be specified in the refute_estimate() method.
"""
def __init__(self, *args, **kwargs):
"""
Initialize the parameters required for the refuter.
For direct_simulation, if effect_strength_on_treatment or effect_strength_on_outcome is not
given, it is calculated automatically as a range between the
minimum and maximum effect strength of observed confounders on treatment
and outcome respectively.
:param simulation_method: The method to use for simulating effect of unobserved confounder. Possible values are ["direct-simulation", "linear-partial-R2", "non-parametric-partial-R2", "e-value"].
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param effect_strength_on_treatment: float, numpy.ndarray: [Used when simulation_method="direct-simulation"] Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param effect_strength_on_outcome: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param partial_r2_confounder_treatment: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param partial_r2_confounder_outcome: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
        :param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
            If percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome in order to reduce the estimate by 100%, i.e., bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = False). (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param significance_level: confidence interval for statistical inference(default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param num_splits: number of splits for cross validation. (default = 5). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_data : shuffle data or not before splitting into folds (default = False). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_random_seed: seed for randomly shuffling data. (relevant only for non-parametric-partial-R2 simulation method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
"""
super().__init__(*args, **kwargs)
self.simulation_method = kwargs["simulation_method"] if "simulation_method" in kwargs else "direct-simulation"
self.effect_on_t = (
kwargs["confounders_effect_on_treatment"] if "confounders_effect_on_treatment" in kwargs else "binary_flip"
)
self.effect_on_y = (
kwargs["confounders_effect_on_outcome"] if "confounders_effect_on_outcome" in kwargs else "linear"
)
if self.simulation_method == "direct-simulation":
self.kappa_t = kwargs["effect_strength_on_treatment"] if "effect_strength_on_treatment" in kwargs else None
self.kappa_y = kwargs["effect_strength_on_outcome"] if "effect_strength_on_outcome" in kwargs else None
elif self.simulation_method in ["linear-partial-R2", "non-parametric-partial-R2"]:
self.kappa_t = (
kwargs["partial_r2_confounder_treatment"] if "partial_r2_confounder_treatment" in kwargs else None
)
self.kappa_y = (
kwargs["partial_r2_confounder_outcome"] if "partial_r2_confounder_outcome" in kwargs else None
)
elif self.simulation_method == "e-value":
pass
else:
raise ValueError(
"simulation method is not supported. Try direct-simulation, linear-partial-R2, non-parametric-partial-R2, or e-value"
)
self.frac_strength_treatment = (
kwargs["effect_fraction_on_treatment"] if "effect_fraction_on_treatment" in kwargs else 1
)
self.frac_strength_outcome = (
kwargs["effect_fraction_on_outcome"] if "effect_fraction_on_outcome" in kwargs else 1
)
self.plotmethod = kwargs["plotmethod"] if "plotmethod" in kwargs else "colormesh"
self.percent_change_estimate = kwargs["percent_change_estimate"] if "percent_change_estimate" in kwargs else 1.0
self.significance_level = kwargs["significance_level"] if "significance_level" in kwargs else 0.05
self.confounder_increases_estimate = (
kwargs["confounder_increases_estimate"] if "confounder_increases_estimate" in kwargs else False
)
self.benchmark_common_causes = (
kwargs["benchmark_common_causes"] if "benchmark_common_causes" in kwargs else None
)
self.null_hypothesis_effect = kwargs["null_hypothesis_effect"] if "null_hypothesis_effect" in kwargs else 0
self.plot_estimate = kwargs["plot_estimate"] if "plot_estimate" in kwargs else True
self.num_splits = kwargs["num_splits"] if "num_splits" in kwargs else 5
self.shuffle_data = kwargs["shuffle_data"] if "shuffle_data" in kwargs else False
self.shuffle_random_seed = kwargs["shuffle_random_seed"] if "shuffle_random_seed" in kwargs else None
self.alpha_s_estimator_param_list = (
kwargs["alpha_s_estimator_param_list"] if "alpha_s_estimator_param_list" in kwargs else None
)
self.alpha_s_estimator_list = kwargs["alpha_s_estimator_list"] if "alpha_s_estimator_list" in kwargs else None
self.g_s_estimator_list = kwargs["g_s_estimator_list"] if "g_s_estimator_list" in kwargs else None
self.g_s_estimator_param_list = (
kwargs["g_s_estimator_param_list"] if "g_s_estimator_param_list" in kwargs else None
)
self.plugin_reisz = kwargs["plugin_reisz"] if "plugin_reisz" in kwargs else False
self.logger = logging.getLogger(__name__)
def infer_default_kappa_t(self, len_kappa_t=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_t."
+ " Provide a value for kappa_t"
)
t = self._data[self._treatment_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_t == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
tmodel = LogisticRegression().fit(observed_common_causes, t)
tpred = tmodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
tcap = tmodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(tcap - tpred)) / tpred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_t == "linear":
# Estimating the regression coefficient from standardized features to t
corrcoef_var_t = np.corrcoef(observed_common_causes, t, rowvar=False)[-1, :-1]
std_dev_t = np.std(t)[0]
max_coeff = max(corrcoef_var_t) * std_dev_t
min_coeff = min(corrcoef_var_t) * std_dev_t
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_treatment)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_t
self.logger.info("(Min, Max) kappa_t for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _compute_min_max_coeff(self, min_coeff, max_coeff, effect_strength_fraction):
max_coeff = effect_strength_fraction * max_coeff
min_coeff = effect_strength_fraction * min_coeff
return min_coeff, max_coeff
def infer_default_kappa_y(self, len_kappa_y=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_y."
+ " Provide a value for kappa_y"
)
y = self._data[self._outcome_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_y == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
ymodel = LogisticRegression().fit(observed_common_causes, y)
ypred = ymodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
ycap = ymodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(ycap - ypred)) / ypred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_y == "linear":
corrcoef_var_y = np.corrcoef(observed_common_causes, y, rowvar=False)[-1, :-1]
std_dev_y = np.std(y)[0]
max_coeff = max(corrcoef_var_y) * std_dev_y
min_coeff = min(corrcoef_var_y) * std_dev_y
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_outcome)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_y
self.logger.info("(Min, Max) kappa_y for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def refute_estimate(self, show_progress_bar=False):
"""
        This function attempts to add an unobserved common cause to the outcome and the treatment. At present, the behavior is implemented for
        one-dimensional continuous and binary variables. The function accepts either single-valued inputs or a range of inputs; it then inspects
        the data type of the input and decides on the course of action.
        :return: CausalRefutation: An object that contains the estimated effect and a new effect and the name of the refutation used.
"""
if self.simulation_method == "linear-partial-R2":
if not (isinstance(self._estimate.estimator, LinearRegressionEstimator)):
raise NotImplementedError(
"Currently only LinearRegressionEstimator is supported for Sensitivity Analysis"
)
if len(self._estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
if self.frac_strength_outcome == 1:
self.frac_strength_outcome = self.frac_strength_treatment
analyzer = LinearSensitivityAnalyzer(
estimator=self._estimate.estimator,
data=self._data,
treatment_name=self._treatment_name,
percent_change_estimate=self.percent_change_estimate,
significance_level=self.significance_level,
benchmark_common_causes=self.benchmark_common_causes,
null_hypothesis_effect=self.null_hypothesis_effect,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
common_causes_order=self._estimate.estimator._observed_common_causes.columns,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.simulation_method == "non-parametric-partial-R2":
# If the estimator used is LinearDML, partially linear sensitivity analysis will be automatically chosen
if isinstance(self._estimate.estimator, dowhy.causal_estimators.econml.Econml):
if self._estimate.estimator._econml_methodname == "econml.dml.LinearDML":
analyzer = PartialLinearSensitivityAnalyzer(
estimator=self._estimate._estimator_object,
observed_common_causes=self._estimate.estimator._observed_common_causes,
treatment=self._estimate.estimator._treatment,
outcome=self._estimate.estimator._outcome,
alpha_s_estimator_param_list=self.alpha_s_estimator_param_list,
g_s_estimator_list=self.g_s_estimator_list,
g_s_estimator_param_list=self.g_s_estimator_param_list,
effect_strength_treatment=self.kappa_t,
effect_strength_outcome=self.kappa_y,
benchmark_common_causes=self.benchmark_common_causes,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
analyzer = NonParametricSensitivityAnalyzer(
estimator=self._estimate.estimator,
observed_common_causes=self._estimate.estimator._observed_common_causes,
treatment=self._estimate.estimator._treatment,
outcome=self._estimate.estimator._outcome,
alpha_s_estimator_list=self.alpha_s_estimator_list,
alpha_s_estimator_param_list=self.alpha_s_estimator_param_list,
g_s_estimator_list=self.g_s_estimator_list,
g_s_estimator_param_list=self.g_s_estimator_param_list,
effect_strength_treatment=self.kappa_t,
effect_strength_outcome=self.kappa_y,
benchmark_common_causes=self.benchmark_common_causes,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
theta_s=self._estimate.value,
plugin_reisz=self.plugin_reisz,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.simulation_method == "e-value":
if not isinstance(self._estimate.estimator, RegressionEstimator):
raise NotImplementedError(
"E-Value sensitivity analysis is currently only implemented RegressionEstimator."
)
if len(self._estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
analyzer = EValueSensitivityAnalyzer(
estimate=self._estimate,
estimand=self._target_estimand,
data=self._data,
treatment_name=self._treatment_name[0],
outcome_name=self._outcome_name[0],
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.kappa_t is None:
self.kappa_t = self.infer_default_kappa_t()
if self.kappa_y is None:
self.kappa_y = self.infer_default_kappa_y()
if not isinstance(self.kappa_t, (list, np.ndarray)) and not isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with single value inputs
new_data = copy.deepcopy(self._data)
new_data = self.include_confounders_effect(new_data, self.kappa_t, self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(new_data, self._target_estimand, self._estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
refute.new_effect_array = np.array(new_effect.value)
refute.new_effect = new_effect.value
refute.add_refuter(self)
return refute
else: # Deal with multiple value inputs
if isinstance(self.kappa_t, (list, np.ndarray)) and isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with range inputs
# Get a 2D matrix of values
# x,y = np.meshgrid(self.kappa_t, self.kappa_y) # x,y are both MxN
results_matrix = np.random.rand(
len(self.kappa_t), len(self.kappa_y)
) # Matrix to hold all the results of NxM
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
for j in range(len(self.kappa_y)):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y[j])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value,
new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause",
)
results_matrix[i][j] = refute.new_effect # Populate the results
refute.new_effect_array = results_matrix
refute.new_effect = (np.min(results_matrix), np.max(results_matrix))
# Store the values into the refute object
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
oe = self._estimate.value
contour_levels = [oe / 4.0, oe / 2.0, (3.0 / 4) * oe, oe]
contour_levels.extend([0, np.min(results_matrix), np.max(results_matrix)])
if self.plotmethod == "contour":
cp = plt.contourf(self.kappa_y, self.kappa_t, results_matrix, levels=sorted(contour_levels))
# Adding a label on the contour line for the original estimate
fmt = {}
trueeffect_index = np.where(cp.levels == oe)[0][0]
fmt[cp.levels[trueeffect_index]] = "Estimated Effect"
# Label every other level using strings
plt.clabel(cp, [cp.levels[trueeffect_index]], inline=True, fmt=fmt)
plt.colorbar(cp)
elif self.plotmethod == "colormesh":
cp = plt.pcolormesh(self.kappa_y, self.kappa_t, results_matrix, shading="nearest")
plt.colorbar(cp, ticks=contour_levels)
ax.yaxis.set_ticks(self.kappa_t)
ax.xaxis.set_ticks(self.kappa_y)
plt.xticks(rotation=45)
ax.set_title("Effect of Unobserved Common Cause")
ax.set_ylabel("Value of Linear Constant on Treatment")
ax.set_xlabel("Value of Linear Constant on Outcome")
plt.show()
return refute
elif isinstance(self.kappa_t, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_t))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_t, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Treatment")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
elif isinstance(self.kappa_y, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_y))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_y)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t, self.kappa_y[i])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_y, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Outcome")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
def include_confounders_effect(self, new_data, kappa_t, kappa_y):
"""
This function deals with the change in the value of the data due to the effect of the unobserved confounder.
In the case of a binary flip, we flip only if the random number is greater than the threshold set.
In the case of a linear effect, we use the variable as the linear regression constant.
:param new_data: pandas.DataFrame: The data to be changed due to the effects of the unobserved confounder.
:param kappa_t: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:param kappa_y: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:return: pandas.DataFrame: The DataFrame that includes the effects of the unobserved confounder.
"""
num_rows = self._data.shape[0]
stdnorm = scipy.stats.norm()
w_random = stdnorm.rvs(num_rows)
if self.effect_on_t == "binary_flip":
alpha = 2 * kappa_t - 1 if kappa_t >= 0.5 else 1 - 2 * kappa_t
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_t >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._treatment_name] = (
1 - new_data.loc[rel_interval <= w_random, self._treatment_name]
)
for tname in self._treatment_name:
if pd.api.types.is_bool_dtype(self._data[tname]):
new_data = new_data.astype({tname: "bool"}, copy=False)
elif self.effect_on_t == "linear":
confounder_t_effect = kappa_t * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._treatment_name] = new_data[self._treatment_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_t_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
if self.effect_on_y == "binary_flip":
alpha = 2 * kappa_y - 1 if kappa_y >= 0.5 else 1 - 2 * kappa_y
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_y >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._outcome_name] = (
1 - new_data.loc[rel_interval <= w_random, self._outcome_name]
)
for yname in self._outcome_name:
if pd.api.types.is_bool_dtype(self._data[yname]):
new_data = new_data.astype({yname: "bool"}, copy=False)
elif self.effect_on_y == "linear":
confounder_y_effect = (-1) * kappa_y * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._outcome_name] = new_data[self._outcome_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_y_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
return new_data
def include_simulated_confounder(self, convergence_threshold=0.1, c_star_max=1000):
"""
This function simulates an unobserved confounder based on the data using the following steps:
1. It calculates the "residuals" from the treatment and outcome model
i.) The outcome model has outcome as the dependent variable and all the observed variables including treatment as independent variables
ii.) The treatment model has treatment as the dependent variable and all the observed variables as independent variables.
2. U is an intermediate random variable drawn from the normal distribution with the weighted average of residuals as mean and a unit variance
U ~ N(c1*d_y + c2*d_t, 1)
where
*d_y and d_t are residuals from the treatment and outcome model
*c1 and c2 are coefficients to the residuals
3. The final U, which is the simulated unobserved confounder is obtained by debiasing the intermediate variable U by residualising it with X
Choosing the coefficients c1 and c2:
The coefficients are chosen based on these basic assumptions:
1. There is a hyperbolic relationship satisfying c1*c2 = c_star
2. c_star is chosen from a range of possible values based on the correlation of the obtained simulated variable with outcome and treatment.
3. The product of correlations with treatment and outcome should be at a minimum distance to the maximum correlations with treatment and outcome in any of the observed confounders
4. The ratio of the weights should be such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
:param c_star_max: The maximum possible value for the hyperbolic curve on which the coefficients to the residuals lie. It defaults to 1000 in the code if not specified by the user.
:type int
:param convergence_threshold: The threshold to check the plateauing of the correlation while selecting a c_star. It defaults to 0.1 in the code if not specified by the user
:type float
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
# Obtaining the list of observed variables
required_variables = True
observed_variables = self.choose_variables(required_variables)
observed_variables_with_treatment_and_outcome = observed_variables + self._treatment_name + self._outcome_name
# Taking a subset of the dataframe that has only observed variables
self._data = self._data[observed_variables_with_treatment_and_outcome]
# Residuals from the outcome model obtained by fitting a linear model
y = self._data[self._outcome_name[0]]
observed_variables_with_treatment = observed_variables + self._treatment_name
X = self._data[observed_variables_with_treatment]
model = sm.OLS(y, X.astype("float"))
results = model.fit()
residuals_y = y - results.fittedvalues
d_y = list(pd.Series(residuals_y))
# Residuals from the treatment model obtained by fitting a linear model
t = self._data[self._treatment_name[0]].astype("int64")
X = self._data[observed_variables]
model = sm.OLS(t, X)
results = model.fit()
residuals_t = t - results.fittedvalues
d_t = list(pd.Series(residuals_t))
        # Initialising product_cor_metric_observed with a very low value, since we are searching for a maximum
product_cor_metric_observed = -10000000000
for i in observed_variables:
current_obs_confounder = self._data[i]
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_obs_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_obs_confounder.corr(treatment_values)
product_cor_metric_current = correlation_y * correlation_t
if product_cor_metric_current >= product_cor_metric_observed:
product_cor_metric_observed = product_cor_metric_current
correlation_t_observed = correlation_t
correlation_y_observed = correlation_y
        # The user has an option to give the effect_strength_on_y and effect_strength_on_t, which can then be used instead of the maximum correlation with treatment and outcome in the observed variables, as it specifies the desired effect.
if self.kappa_t is not None:
correlation_t_observed = self.kappa_t
if self.kappa_y is not None:
correlation_y_observed = self.kappa_y
# Choosing a c_star based on the data.
        # The correlations stop increasing after a certain value of c_star, that is, they plateau, and we choose c_star to be the value at which they plateau.
correlation_y_list = []
correlation_t_list = []
product_cor_metric_simulated_list = []
x_list = []
step = int(c_star_max / 10)
for i in range(0, int(c_star_max), step):
c1 = math.sqrt(i)
c2 = c1
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
correlation_y_list.append(correlation_y)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
correlation_t_list.append(correlation_t)
product_cor_metric_simulated = correlation_y * correlation_t
product_cor_metric_simulated_list.append(product_cor_metric_simulated)
x_list.append(i)
index = 1
while index < len(correlation_y_list):
if (correlation_y_list[index] - correlation_y_list[index - 1]) <= convergence_threshold:
c_star = x_list[index]
break
index = index + 1
        # Once c_star is chosen, c1 and c2 are chosen based on the hyperbolic relationship by going over various combinations of c1 and c2 values and picking the combination
        # that minimizes the distance between the product of correlations of the simulated variable and the product of the maximum correlations of one of the observed variables,
        # while additionally checking that the ratio of the weights maintains the ratio of the maximum possible observed coefficients within some confidence interval
# c1_final and c2_final are initialised to the values on the hyperbolic curve such that c1_final = c2_final and c1_final*c2_final = c_star
c1_final = math.sqrt(c_star)
c2_final = math.sqrt(c_star)
# initialising min_distance_between_product_cor_metrics to be a value greater than 1
min_distance_between_product_cor_metrics = 1.5
i = 0.05
threshold = c_star / 0.05
while i <= threshold:
c2 = i
c1 = c_star / c2
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
product_cor_metric_simulated = correlation_y * correlation_t
if min_distance_between_product_cor_metrics >= abs(
product_cor_metric_simulated - product_cor_metric_observed
):
min_distance_between_product_cor_metrics = abs(
product_cor_metric_simulated - product_cor_metric_observed
)
additional_condition = correlation_y_observed / correlation_t_observed
if ((c1 / c2) <= (additional_condition + 0.3 * additional_condition)) and (
(c1 / c2) >= (additional_condition - 0.3 * additional_condition)
): # choose minimum positive value
c1_final = c1
c2_final = c2
i = i * 1.5
"""#closed form solution
print("c_star_max before closed form", c_star_max)
if max_correlation_with_t == -1000:
c2 = 0
c1 = c_star_max
else:
additional_condition = abs(max_correlation_with_y/max_correlation_with_t)
print("additional_condition", additional_condition)
c2 = math.sqrt(c_star_max/additional_condition)
c1 = c_star_max/c2"""
final_U = self.generate_confounder_from_residuals(c1_final, c2_final, d_y, d_t, X)
return final_U
def generate_confounder_from_residuals(self, c1, c2, d_y, d_t, X):
"""
        This function takes the residuals from the treatment and outcome models and their coefficients, and simulates the intermediate random variable U by drawing,
        for each row, from a normal distribution whose mean is the weighted sum of the residuals; it then debiases the intermediate variable to get the final variable.
:param c1: coefficient to the residual from the outcome model
:type float
:param c2: coefficient to the residual from the treatment model
:type float
:param d_y: residuals from the outcome model
:type list
:param d_t: residuals from the treatment model
:type list
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
U = []
for j in range(len(d_t)):
simulated_variable_mean = c1 * d_y[j] + c2 * d_t[j]
simulated_variable_stddev = 1
U.append(np.random.normal(simulated_variable_mean, simulated_variable_stddev, 1))
U = np.array(U)
model = sm.OLS(U, X)
results = model.fit()
U = U.reshape(
-1,
)
final_U = U - results.fittedvalues.values
        final_U = pd.Series(final_U)
return final_U
| import copy
import logging
import math
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import scipy.stats
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from tqdm.auto import tqdm
import dowhy.causal_estimators.econml
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_estimators.regression_estimator import RegressionEstimator
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.causal_refuter import CausalRefutation, CausalRefuter, choose_variables
from dowhy.causal_refuters.evalue_sensitivity_analyzer import EValueSensitivityAnalyzer
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer
from dowhy.causal_refuters.non_parametric_sensitivity_analyzer import NonParametricSensitivityAnalyzer
from dowhy.causal_refuters.partial_linear_sensitivity_analyzer import PartialLinearSensitivityAnalyzer
logger = logging.getLogger(__name__)
class AddUnobservedCommonCause(CausalRefuter):
"""Add an unobserved confounder for refutation.
    AddUnobservedCommonCause class supports four methods:
    1) Simulation of an unobserved confounder
    2) Linear partial R2 : Sensitivity Analysis for linear models.
    3) Non-Parametric partial R2 based : Sensitivity Analysis for non-parametric models.
    4) E-value : Sensitivity Analysis based on the E-value of the estimate.
Supports additional parameters that can be specified in the refute_estimate() method.
"""
def __init__(self, *args, **kwargs):
"""
Initialize the parameters required for the refuter.
For direct_simulation, if effect_strength_on_treatment or effect_strength_on_outcome is not
given, it is calculated automatically as a range between the
minimum and maximum effect strength of observed confounders on treatment
and outcome respectively.
:param simulation_method: The method to use for simulating effect of unobserved confounder. Possible values are ["direct-simulation", "linear-partial-R2", "non-parametric-partial-R2", "e-value"].
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param effect_strength_on_treatment: float, numpy.ndarray: [Used when simulation_method="direct-simulation"] Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param effect_strength_on_outcome: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param partial_r2_confounder_treatment: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param partial_r2_confounder_outcome: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
        :param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
            If percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome in order to reduce the estimate by 100%, i.e., bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = False). (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param significance_level: confidence interval for statistical inference(default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param num_splits: number of splits for cross validation. (default = 5). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_data : shuffle data or not before splitting into folds (default = False). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_random_seed: seed for randomly shuffling data. (relevant only for non-parametric-partial-R2 simulation method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
"""
super().__init__(*args, **kwargs)
self.simulation_method = kwargs["simulation_method"] if "simulation_method" in kwargs else "direct-simulation"
self.effect_on_t = (
kwargs["confounders_effect_on_treatment"] if "confounders_effect_on_treatment" in kwargs else "binary_flip"
)
self.effect_on_y = (
kwargs["confounders_effect_on_outcome"] if "confounders_effect_on_outcome" in kwargs else "linear"
)
if self.simulation_method == "direct-simulation":
self.kappa_t = kwargs["effect_strength_on_treatment"] if "effect_strength_on_treatment" in kwargs else None
self.kappa_y = kwargs["effect_strength_on_outcome"] if "effect_strength_on_outcome" in kwargs else None
elif self.simulation_method in ["linear-partial-R2", "non-parametric-partial-R2"]:
self.kappa_t = (
kwargs["partial_r2_confounder_treatment"] if "partial_r2_confounder_treatment" in kwargs else None
)
self.kappa_y = (
kwargs["partial_r2_confounder_outcome"] if "partial_r2_confounder_outcome" in kwargs else None
)
elif self.simulation_method == "e-value":
pass
else:
raise ValueError(
"simulation method is not supported. Try direct-simulation, linear-partial-R2, non-parametric-partial-R2, or e-value"
)
self.frac_strength_treatment = (
kwargs["effect_fraction_on_treatment"] if "effect_fraction_on_treatment" in kwargs else 1
)
self.frac_strength_outcome = (
kwargs["effect_fraction_on_outcome"] if "effect_fraction_on_outcome" in kwargs else 1
)
self.plotmethod = kwargs["plotmethod"] if "plotmethod" in kwargs else "colormesh"
self.percent_change_estimate = kwargs["percent_change_estimate"] if "percent_change_estimate" in kwargs else 1.0
self.significance_level = kwargs["significance_level"] if "significance_level" in kwargs else 0.05
self.confounder_increases_estimate = (
kwargs["confounder_increases_estimate"] if "confounder_increases_estimate" in kwargs else False
)
self.benchmark_common_causes = (
kwargs["benchmark_common_causes"] if "benchmark_common_causes" in kwargs else None
)
self.null_hypothesis_effect = kwargs["null_hypothesis_effect"] if "null_hypothesis_effect" in kwargs else 0
self.plot_estimate = kwargs["plot_estimate"] if "plot_estimate" in kwargs else True
self.num_splits = kwargs["num_splits"] if "num_splits" in kwargs else 5
self.shuffle_data = kwargs["shuffle_data"] if "shuffle_data" in kwargs else False
self.shuffle_random_seed = kwargs["shuffle_random_seed"] if "shuffle_random_seed" in kwargs else None
self.alpha_s_estimator_param_list = (
kwargs["alpha_s_estimator_param_list"] if "alpha_s_estimator_param_list" in kwargs else None
)
self.alpha_s_estimator_list = kwargs["alpha_s_estimator_list"] if "alpha_s_estimator_list" in kwargs else None
self.g_s_estimator_list = kwargs["g_s_estimator_list"] if "g_s_estimator_list" in kwargs else None
self.g_s_estimator_param_list = (
kwargs["g_s_estimator_param_list"] if "g_s_estimator_param_list" in kwargs else None
)
self.plugin_reisz = kwargs["plugin_reisz"] if "plugin_reisz" in kwargs else False
self.logger = logging.getLogger(__name__)
def refute_estimate(self, show_progress_bar=False):
if self.simulation_method == "linear-partial-R2":
return sensitivity_linear_partial_r2(
self._data,
self._estimate,
self._treatment_name,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.percent_change_estimate,
self.benchmark_common_causes,
self.significance_level,
self.null_hypothesis_effect,
self.plot_estimate,
)
elif self.simulation_method == "non-parametric-partial-R2":
return sensitivity_non_parametric_partial_r2(
self._estimate,
self.kappa_t,
self.kappa_y,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.benchmark_common_causes,
self.plot_estimate,
self.alpha_s_estimator_list,
self.alpha_s_estimator_param_list,
self.g_s_estimator_list,
self.g_s_estimator_param_list,
self.plugin_reisz,
)
elif self.simulation_method == "e-value":
return sensitivity_e_value(
self._data,
self._target_estimand,
self._estimate,
self._treatment_name,
self._outcome_name,
self.plot_estimate,
)
elif self.simulation_method == "direct-simulation":
refute = sensitivity_simulation(
self._data,
self._target_estimand,
self._estimate,
self._treatment_name,
self._outcome_name,
self.kappa_t,
self.kappa_y,
self.effect_on_t,
self.effect_on_y,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.plotmethod,
show_progress_bar,
)
refute.add_refuter(self)
return refute
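# The refute_estimate method above is a thin dispatcher; with the functional refactoring described in
# the PR, the module-level helpers it calls can also be used directly. An illustrative positional call
# mirroring the dispatch above, with placeholder arguments:
#
#     refute = sensitivity_simulation(data, target_estimand, estimate,
#                                     treatment_name, outcome_name, kappa_t, kappa_y)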
def _infer_default_kappa_t(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
treatment_name: List[str],
effect_on_t: str,
frac_strength_treatment: float,
len_kappa_t: int = 10,
):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_t."
+ " Provide a value for kappa_t"
)
t = data[treatment_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if effect_on_t == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
tmodel = LogisticRegression().fit(observed_common_causes, t)
tpred = tmodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
tcap = tmodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(tcap - tpred)) / tpred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif effect_on_t == "linear":
# Estimating the regression coefficient from standardized features to t
corrcoef_var_t = np.corrcoef(observed_common_causes, t, rowvar=False)[-1, :-1]
std_dev_t = np.std(t)[0]
max_coeff = max(corrcoef_var_t) * std_dev_t
min_coeff = min(corrcoef_var_t) * std_dev_t
else:
raise NotImplementedError("'" + effect_on_t + "' method not supported for confounders' effect on treatment")
min_coeff, max_coeff = _compute_min_max_coeff(min_coeff, max_coeff, frac_strength_treatment)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_t
logger.info("(Min, Max) kappa_t for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _compute_min_max_coeff(min_coeff: float, max_coeff: float, effect_strength_fraction: np.ndarray):
max_coeff = effect_strength_fraction * max_coeff
min_coeff = effect_strength_fraction * min_coeff
return min_coeff, max_coeff
def _infer_default_kappa_y(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
outcome_name: List[str],
effect_on_y: str,
frac_strength_outcome: float,
len_kappa_y: int = 10,
):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_y."
+ " Provide a value for kappa_y"
)
y = data[outcome_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if effect_on_y == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
ymodel = LogisticRegression().fit(observed_common_causes, y)
ypred = ymodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
ycap = ymodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(ycap - ypred)) / ypred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif effect_on_y == "linear":
corrcoef_var_y = np.corrcoef(observed_common_causes, y, rowvar=False)[-1, :-1]
std_dev_y = np.std(y)[0]
max_coeff = max(corrcoef_var_y) * std_dev_y
min_coeff = min(corrcoef_var_y) * std_dev_y
else:
raise NotImplementedError("'" + effect_on_y + "' method not supported for confounders' effect on outcome")
min_coeff, max_coeff = _compute_min_max_coeff(min_coeff, max_coeff, frac_strength_outcome)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_y
logger.info("(Min, Max) kappa_y for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _include_confounders_effect(
data: pd.DataFrame,
new_data: pd.DataFrame,
effect_on_t: str,
treatment_name: str,
kappa_t: float,
effect_on_y: str,
outcome_name: str,
kappa_y: float,
):
"""
This function deals with the change in the value of the data due to the effect of the unobserved confounder.
In the case of a binary flip, the value is flipped only for rows whose standard-normal draw exceeds the threshold implied by kappa.
In the case of a linear effect, kappa is used as the regression coefficient of the simulated confounder when adding its contribution to the variable.
:param new_data: pandas.DataFrame: The data to be changed due to the effects of the unobserved confounder.
:param kappa_t: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:param kappa_y: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:return: pandas.DataFrame: The DataFrame that includes the effects of the unobserved confounder.
"""
num_rows = data.shape[0]
stdnorm = scipy.stats.norm()
w_random = stdnorm.rvs(num_rows)
if effect_on_t == "binary_flip":
alpha = 2 * kappa_t - 1 if kappa_t >= 0.5 else 1 - 2 * kappa_t
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_t >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, treatment_name] = (
1 - new_data.loc[rel_interval <= w_random, treatment_name]
)
for tname in treatment_name:
if pd.api.types.is_bool_dtype(data[tname]):
new_data = new_data.astype({tname: "bool"}, copy=False)
elif effect_on_t == "linear":
confounder_t_effect = kappa_t * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[treatment_name] = new_data[treatment_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_t_effect
)
else:
raise NotImplementedError("'" + effect_on_t + "' method not supported for confounders' effect on treatment")
if effect_on_y == "binary_flip":
alpha = 2 * kappa_y - 1 if kappa_y >= 0.5 else 1 - 2 * kappa_y
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_y >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, outcome_name] = 1 - new_data.loc[rel_interval <= w_random, outcome_name]
for yname in outcome_name:
if pd.api.types.is_bool_dtype(data[yname]):
new_data = new_data.astype({yname: "bool"}, copy=False)
elif effect_on_y == "linear":
confounder_y_effect = (-1) * kappa_y * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[outcome_name] = new_data[outcome_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_y_effect
)
else:
raise NotImplementedError("'" + effect_on_y + "' method not supported for confounders' effect on outcome")
return new_data
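# Illustrative sketch (not part of the library API): the two confounding
# mechanisms handled by _include_confounders_effect, applied to toy data.
# With "binary_flip", rows whose standard-normal draw exceeds the kappa-dependent
# cutoff have their value inverted; with "linear", kappa scales the draw and the
# result is added to the treatment (or subtracted from the outcome). All names
# and data are invented.
def _example_confounder_injection_sketch():
    import numpy as np
    import pandas as pd
    import scipy.stats

    rng = np.random.default_rng(0)
    n = 1000
    df = pd.DataFrame({"t": rng.integers(0, 2, size=n), "y": rng.normal(size=n)})
    stdnorm = scipy.stats.norm()
    w_random = stdnorm.rvs(n, random_state=1)
    kappa_t = 0.2  # flip roughly 20% of the treatment values
    cutoff = stdnorm.interval(1 - 2 * kappa_t)[1]
    df.loc[w_random >= cutoff, "t"] = 1 - df.loc[w_random >= cutoff, "t"]
    kappa_y = 0.5  # linear effect on the outcome, subtracted to induce negative correlation
    df["y"] = df["y"] - kappa_y * w_random
    return df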
def include_simulated_confounder(
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
kappa_t: float,
kappa_y: float,
convergence_threshold: float = 0.1,
c_star_max: int = 1000,
):
"""
This function simulates an unobserved confounder based on the data using the following steps:
1. It calculates the "residuals" from the treatment and outcome model
i.) The outcome model has outcome as the dependent variable and all the observed variables including treatment as independent variables
ii.) The treatment model has treatment as the dependent variable and all the observed variables as independent variables.
2. U is an intermediate random variable drawn from the normal distribution with the weighted average of residuals as mean and a unit variance
U ~ N(c1*d_y + c2*d_t, 1)
where
*d_y and d_t are residuals from the treatment and outcome model
*c1 and c2 are coefficients to the residuals
3. The final U, which is the simulated unobserved confounder is obtained by debiasing the intermediate variable U by residualising it with X
Choosing the coefficients c1 and c2:
The coefficients are chosen based on these basic assumptions:
1. There is a hyperbolic relationship satisfying c1*c2 = c_star
2. c_star is chosen from a range of possible values based on the correlation of the obtained simulated variable with outcome and treatment.
3. The product of correlations with treatment and outcome should be at a minimum distance to the maximum correlations with treatment and outcome in any of the observed confounders
4. The ratio of the weights should be such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
:param c_star_max: The maximum possible value for the hyperbolic curve on which the coefficients to the residuals lie. It defaults to 1000 in the code if not specified by the user.
:type int
:param convergence_threshold: The threshold to check the plateauing of the correlation while selecting a c_star. It defaults to 0.1 in the code if not specified by the user
:type float
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
# Obtaining the list of observed variables
required_variables = True
observed_variables = choose_variables(required_variables)
observed_variables_with_treatment_and_outcome = observed_variables + treatment_name + outcome_name
# Taking a subset of the dataframe that has only observed variables
data = data[observed_variables_with_treatment_and_outcome]
# Residuals from the outcome model obtained by fitting a linear model
y = data[outcome_name[0]]
observed_variables_with_treatment = observed_variables + treatment_name
X = data[observed_variables_with_treatment]
model = sm.OLS(y, X.astype("float"))
results = model.fit()
residuals_y = y - results.fittedvalues
d_y = list(pd.Series(residuals_y))
# Residuals from the treatment model obtained by fitting a linear model
t = data[treatment_name[0]].astype("int64")
X = data[observed_variables]
model = sm.OLS(t, X)
results = model.fit()
residuals_t = t - results.fittedvalues
d_t = list(pd.Series(residuals_t))
# Initialising product_cor_metric_observed with a really low value as finding maximum
product_cor_metric_observed = -10000000000
for i in observed_variables:
current_obs_confounder = data[i]
outcome_values = data[outcome_name[0]]
correlation_y = current_obs_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_obs_confounder.corr(treatment_values)
product_cor_metric_current = correlation_y * correlation_t
if product_cor_metric_current >= product_cor_metric_observed:
product_cor_metric_observed = product_cor_metric_current
correlation_t_observed = correlation_t
correlation_y_observed = correlation_y
# The user has an option to give the effect_strength_on_y and effect_strength_on_t, which can then be used instead of the maximum correlation with treatment and outcome in the observed variables, as they specify the desired effect.
if kappa_t is not None:
correlation_t_observed = kappa_t
if kappa_y is not None:
correlation_y_observed = kappa_y
# Choosing a c_star based on the data.
# The correlations stop increasing upon increasing c_star after a certain value, that is, they plateau, and we choose c_star to be the value at which they plateau.
correlation_y_list = []
correlation_t_list = []
product_cor_metric_simulated_list = []
x_list = []
step = int(c_star_max / 10)
for i in range(0, int(c_star_max), step):
c1 = math.sqrt(i)
c2 = c1
final_U = _generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = data[outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
correlation_y_list.append(correlation_y)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
correlation_t_list.append(correlation_t)
product_cor_metric_simulated = correlation_y * correlation_t
product_cor_metric_simulated_list.append(product_cor_metric_simulated)
x_list.append(i)
index = 1
while index < len(correlation_y_list):
if (correlation_y_list[index] - correlation_y_list[index - 1]) <= convergence_threshold:
c_star = x_list[index]
break
index = index + 1
# Once c_star is chosen, c1 and c2 are picked from the hyperbolic relationship by going over various (c1, c2) combinations and choosing the combination
# that minimises the distance between the product of correlations of the simulated variable and the product of maximum correlations of one of the observed variables,
# while additionally checking that the ratio of the weights maintains the ratio of the maximum possible observed coefficients within some confidence interval
# c1_final and c2_final are initialised to the values on the hyperbolic curve such that c1_final = c2_final and c1_final*c2_final = c_star
c1_final = math.sqrt(c_star)
c2_final = math.sqrt(c_star)
# initialising min_distance_between_product_cor_metrics to be a value greater than 1
min_distance_between_product_cor_metrics = 1.5
i = 0.05
threshold = c_star / 0.05
while i <= threshold:
c2 = i
c1 = c_star / c2
final_U = _generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = data[outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
product_cor_metric_simulated = correlation_y * correlation_t
if min_distance_between_product_cor_metrics >= abs(product_cor_metric_simulated - product_cor_metric_observed):
min_distance_between_product_cor_metrics = abs(product_cor_metric_simulated - product_cor_metric_observed)
additional_condition = correlation_y_observed / correlation_t_observed
if ((c1 / c2) <= (additional_condition + 0.3 * additional_condition)) and (
(c1 / c2) >= (additional_condition - 0.3 * additional_condition)
): # choose minimum positive value
c1_final = c1
c2_final = c2
i = i * 1.5
"""#closed form solution
print("c_star_max before closed form", c_star_max)
if max_correlation_with_t == -1000:
c2 = 0
c1 = c_star_max
else:
additional_condition = abs(max_correlation_with_y/max_correlation_with_t)
print("additional_condition", additional_condition)
c2 = math.sqrt(c_star_max/additional_condition)
c1 = c_star_max/c2"""
final_U = _generate_confounder_from_residuals(c1_final, c2_final, d_y, d_t, X)
return final_U
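# Illustrative sketch (not part of the library API): the coefficient search in
# include_simulated_confounder in miniature. For a fixed c_star, candidate
# (c1, c2) pairs lie on the hyperbola c1 * c2 = c_star, and the pair whose
# simulated confounder gives a corr(U, y) * corr(U, t) product closest to an
# assumed target (here 0.05, a made-up stand-in for the strongest observed
# confounder) is kept. All argument names are placeholders; the helper called
# below is the one defined next in this module.
def _example_hyperbola_sweep_sketch(d_y, d_t, X, data, treatment_col, outcome_col, c_star=100.0, target=0.05):
    best_pair, best_gap = None, float("inf")
    c2 = 0.05
    while c2 <= c_star / 0.05:
        c1 = c_star / c2
        u = _generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
        gap = abs(u.corr(data[outcome_col]) * u.corr(data[treatment_col]) - target)
        if gap < best_gap:
            best_pair, best_gap = (c1, c2), gap
        c2 *= 1.5
    return best_pair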
def _generate_confounder_from_residuals(c1, c2, d_y, d_t, X):
"""
This function takes the residuals from the treatment and outcome model and their coefficients and simulates the intermediate random variable U by taking
the row wise normal distribution corresponding to each residual value and then debiasing the intermediate variable to get the final variable.
:param c1: coefficient to the residual from the outcome model
:type float
:param c2: coefficient to the residual from the treatment model
:type float
:param d_y: residuals from the outcome model
:type list
:param d_t: residuals from the treatment model
:type list
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
U = []
for j in range(len(d_t)):
simulated_variable_mean = c1 * d_y[j] + c2 * d_t[j]
simulated_variable_stddev = 1
U.append(np.random.normal(simulated_variable_mean, simulated_variable_stddev, 1))
U = np.array(U)
model = sm.OLS(U, X)
results = model.fit()
U = U.reshape(
-1,
)
final_U = U - results.fittedvalues.values
final_U = pd.Series(final_U)
return final_U
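# Illustrative sketch (not part of the library API): the same two-step
# construction as _generate_confounder_from_residuals on tiny synthetic inputs —
# draw U_j ~ N(c1 * d_y[j] + c2 * d_t[j], 1), then residualise U on the observed
# covariates with OLS to debias it. All names and data are invented.
def _example_generate_confounder_sketch():
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200
    X = pd.DataFrame({"w0": rng.normal(size=n), "w1": rng.normal(size=n)})
    d_y = rng.normal(size=n)  # stand-in for outcome-model residuals
    d_t = rng.normal(size=n)  # stand-in for treatment-model residuals
    c1, c2 = 2.0, 0.5
    u = rng.normal(loc=c1 * d_y + c2 * d_t, scale=1.0)
    fitted = sm.OLS(u, X).fit().fittedvalues
    return pd.Series(u - np.asarray(fitted))  # debiased simulated confounder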
def sensitivity_linear_partial_r2(
data: pd.DataFrame,
estimate: CausalEstimate,
treatment_name: str,
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
percent_change_estimate: float = 1.0,
benchmark_common_causes: Optional[List[str]] = None,
significance_level: Optional[float] = None,
null_hypothesis_effect: Optional[float] = None,
plot_estimate: bool = True,
) -> LinearSensitivityAnalyzer:
"""Add an unobserved confounder for refutation using Linear partial R2 methond (Sensitivity Analysis for linear models).
:param data: pd.DataFrame: Data to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param treatment_name: str: Name of the treatment
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
If percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome needed to reduce the estimate by 100%, i.e., bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param significance_level: confidence interval for statistical inference(default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
"""
if not (isinstance(estimate.estimator, LinearRegressionEstimator)):
raise NotImplementedError("Currently only LinearRegressionEstimator is supported for Sensitivity Analysis")
if len(estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
if frac_strength_outcome == 1:
frac_strength_outcome = frac_strength_treatment
analyzer = LinearSensitivityAnalyzer(
estimator=estimate.estimator,
data=data,
treatment_name=treatment_name,
percent_change_estimate=percent_change_estimate,
significance_level=significance_level,
benchmark_common_causes=benchmark_common_causes,
null_hypothesis_effect=null_hypothesis_effect,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
common_causes_order=estimate.estimator._observed_common_causes.columns,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
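# Hedged usage sketch: sensitivity_linear_partial_r2 expects an estimate produced
# by a linear regression estimator and is usually reached through
# CausalModel.refute_estimate with simulation_method="linear-partial-R2". The
# model / identified_estimand / estimate objects are assumed to exist already and
# "W3" is a made-up benchmark confounder name.
def _example_linear_partial_r2_usage(model, identified_estimand, estimate):
    return model.refute_estimate(
        identified_estimand,
        estimate,
        method_name="add_unobserved_common_cause",
        simulation_method="linear-partial-R2",
        benchmark_common_causes=["W3"],
        effect_fraction_on_treatment=[1, 2, 3],
    )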
def sensitivity_non_parametric_partial_r2(
estimate: CausalEstimate,
kappa_t: Optional[Union[float, np.ndarray]] = None,
kappa_y: Optional[Union[float, np.ndarray]] = None,
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
benchmark_common_causes: Optional[List[str]] = None,
plot_estimate: bool = True,
alpha_s_estimator_list: Optional[List] = None,
alpha_s_estimator_param_list: Optional[List[Dict]] = None,
g_s_estimator_list: Optional[List] = None,
g_s_estimator_param_list: Optional[List[Dict]] = None,
plugin_reisz: bool = False,
) -> Union[PartialLinearSensitivityAnalyzer, NonParametricSensitivityAnalyzer]:
"""Add an unobserved confounder for refutation using Non-parametric partial R2 methond (Sensitivity Analysis for non-parametric models).
:param estimate: CausalEstimate: Estimate to run the refutation
:param kappa_t: float, numpy.ndarray: Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param kappa_y: float, numpy.ndarray: Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param alpha_s_estimator_list: list of estimator objects for estimating alpha_s. These objects should have fit() and predict() methods (relevant only for non-parametric-partial-R2 method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
:plugin_reisz: bool: Flag on whether to use the plugin estimator or the nonparametric estimator for reisz representer function (alpha_s).
"""
# If the estimator used is LinearDML, partially linear sensitivity analysis will be automatically chosen
if isinstance(estimate.estimator, dowhy.causal_estimators.econml.Econml):
if estimate.estimator._econml_methodname == "econml.dml.LinearDML":
analyzer = PartialLinearSensitivityAnalyzer(
estimator=estimate._estimator_object,
observed_common_causes=estimate.estimator._observed_common_causes,
treatment=estimate.estimator._treatment,
outcome=estimate.estimator._outcome,
alpha_s_estimator_param_list=alpha_s_estimator_param_list,
g_s_estimator_list=g_s_estimator_list,
g_s_estimator_param_list=g_s_estimator_param_list,
effect_strength_treatment=kappa_t,
effect_strength_outcome=kappa_y,
benchmark_common_causes=benchmark_common_causes,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
analyzer = NonParametricSensitivityAnalyzer(
estimator=estimate.estimator,
observed_common_causes=estimate.estimator._observed_common_causes,
treatment=estimate.estimator._treatment,
outcome=estimate.estimator._outcome,
alpha_s_estimator_list=alpha_s_estimator_list,
alpha_s_estimator_param_list=alpha_s_estimator_param_list,
g_s_estimator_list=g_s_estimator_list,
g_s_estimator_param_list=g_s_estimator_param_list,
effect_strength_treatment=kappa_t,
effect_strength_outcome=kappa_y,
benchmark_common_causes=benchmark_common_causes,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
theta_s=estimate.value,
plugin_reisz=plugin_reisz,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
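# Hedged usage sketch: the non-parametric partial-R2 analysis is typically driven
# through CausalModel.refute_estimate on an estimate produced by an EconML DML
# estimator; a LinearDML-based estimate is routed to the partially linear
# analyzer above. The objects passed in and the "W1" benchmark name are assumptions.
def _example_non_parametric_partial_r2_usage(model, identified_estimand, dml_estimate):
    return model.refute_estimate(
        identified_estimand,
        dml_estimate,
        method_name="add_unobserved_common_cause",
        simulation_method="non-parametric-partial-R2",
        benchmark_common_causes=["W1"],
        effect_fraction_on_treatment=[1, 2, 3],
    )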
def sensitivity_e_value(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_name: List[str],
outcome_name: List[str],
plot_estimate: bool = True,
) -> EValueSensitivityAnalyzer:
if not isinstance(estimate.estimator, RegressionEstimator):
raise NotImplementedError("E-Value sensitivity analysis is currently only implemented RegressionEstimator.")
if len(estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
analyzer = EValueSensitivityAnalyzer(
estimate=estimate,
estimand=target_estimand,
data=data,
treatment_name=treatment_name[0],
outcome_name=outcome_name[0],
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
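# Hedged usage sketch: E-value sensitivity analysis needs no extra strength
# parameters and applies to estimates from regression-based estimators. The
# model / identified_estimand / regression_estimate objects are assumed to exist.
def _example_e_value_usage(model, identified_estimand, regression_estimate):
    return model.refute_estimate(
        identified_estimand,
        regression_estimate,
        method_name="add_unobserved_common_cause",
        simulation_method="e-value",
    )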
def sensitivity_simulation(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_name: str,
outcome_name: str,
kappa_t: Optional[Union[float, np.ndarray]] = None,
kappa_y: Optional[Union[float, np.ndarray]] = None,
confounders_effect_on_treatment: str = "binary_flip",
confounders_effect_on_outcome: str = "linear",
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
plotmethod: Optional[str] = None,
show_progress_bar=False,
**_,
) -> CausalRefutation:
"""
This function attempts to add an unobserved common cause to the outcome and the treatment. At present, the behavior is implemented for one-dimensional, continuous or binary
variables. This function can take either single-valued inputs or a range of inputs; it then looks at the data type of the input and decides on the course of
action.
:param data: pd.DataFrame: Data to run the refutation
:param target_estimand: IdentifiedEstimand: Identified estimand to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param treatment_name: str: Name of the treatment
:param outcome_name: str: Name of the outcome
:param kappa_t: float, numpy.ndarray: Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param kappa_y: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour" and "colormesh" when more than one value is provided for both confounder effect value parameters, and "line" when a range is provided for only one of them.
:return: CausalRefutation: An object that contains the estimated effect, the new effect, and the name of the refutation used.
"""
if kappa_t is None:
kappa_t = _infer_default_kappa_t(
data, target_estimand, treatment_name, confounders_effect_on_treatment, frac_strength_treatment
)
if kappa_y is None:
kappa_y = _infer_default_kappa_y(
data, target_estimand, outcome_name, confounders_effect_on_outcome, frac_strength_outcome
)
if not isinstance(kappa_t, (list, np.ndarray)) and not isinstance(
kappa_y, (list, np.ndarray)
): # Deal with single value inputs
new_data = copy.deepcopy(data)
new_data = _include_confounders_effect(
data,
new_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t,
confounders_effect_on_outcome,
outcome_name,
kappa_y,
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
refute.new_effect_array = np.array(new_effect.value)
refute.new_effect = new_effect.value
return refute
else: # Deal with multiple value inputs
if isinstance(kappa_t, (list, np.ndarray)) and isinstance(
kappa_y, (list, np.ndarray)
): # Deal with range inputs
# Get a 2D matrix of values
# x,y = np.meshgrid(self.kappa_t, self.kappa_y) # x,y are both MxN
results_matrix = np.random.rand(len(kappa_t), len(kappa_y)) # Matrix to hold all the results of NxM
orig_data = copy.deepcopy(data)
for i in tqdm(
range(len(kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
for j in range(len(kappa_y)):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t[i],
confounders_effect_on_outcome,
outcome_name,
kappa_y[j],
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value,
new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause",
)
results_matrix[i][j] = refute.new_effect # Populate the results
refute.new_effect_array = results_matrix
refute.new_effect = (np.min(results_matrix), np.max(results_matrix))
# Store the values into the refute object
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
oe = estimate.value
contour_levels = [oe / 4.0, oe / 2.0, (3.0 / 4) * oe, oe]
contour_levels.extend([0, np.min(results_matrix), np.max(results_matrix)])
if plotmethod == "contour":
cp = plt.contourf(kappa_y, kappa_t, results_matrix, levels=sorted(contour_levels))
# Adding a label on the contour line for the original estimate
fmt = {}
trueeffect_index = np.where(cp.levels == oe)[0][0]
fmt[cp.levels[trueeffect_index]] = "Estimated Effect"
# Label every other level using strings
plt.clabel(cp, [cp.levels[trueeffect_index]], inline=True, fmt=fmt)
plt.colorbar(cp)
elif plotmethod == "colormesh":
cp = plt.pcolormesh(kappa_y, kappa_t, results_matrix, shading="nearest")
plt.colorbar(cp, ticks=contour_levels)
ax.yaxis.set_ticks(kappa_t)
ax.xaxis.set_ticks(kappa_y)
plt.xticks(rotation=45)
ax.set_title("Effect of Unobserved Common Cause")
ax.set_ylabel("Value of Linear Constant on Treatment")
ax.set_xlabel("Value of Linear Constant on Outcome")
plt.show()
return refute
elif isinstance(kappa_t, (list, np.ndarray)):
outcomes = np.random.rand(len(kappa_t))
orig_data = copy.deepcopy(data)
for i in tqdm(
range(0, len(kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t[i],
confounders_effect_on_outcome,
outcome_name,
kappa_y,
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(kappa_t, outcomes)
plt.axhline(estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Treatment")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
elif isinstance(kappa_y, (list, np.ndarray)):
outcomes = np.random.rand(len(kappa_y))
orig_data = copy.deepcopy(data)
for i in tqdm(
range(0, len(kappa_y)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t,
confounders_effect_on_outcome,
outcome_name,
kappa_y[i],
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(kappa_y, outcomes)
plt.axhline(estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Outcome")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
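# Hedged end-to-end sketch of the direct-simulation refuter on a synthetic
# dataset; the dataset parameters, estimator choice and kappa grid below are
# illustrative only. With an array-valued strength the refuter sweeps every
# value, re-estimates the effect, and can plot the result.
def _example_direct_simulation_usage():
    import numpy as np
    import dowhy.datasets
    from dowhy import CausalModel

    sim = dowhy.datasets.linear_dataset(
        beta=10, num_common_causes=3, num_samples=1000, treatment_is_binary=True
    )
    model = CausalModel(
        data=sim["df"],
        treatment=sim["treatment_name"],
        outcome=sim["outcome_name"],
        graph=sim["gml_graph"],
    )
    identified_estimand = model.identify_effect(proceed_when_unidentifiable=True)
    estimate = model.estimate_effect(identified_estimand, method_name="backdoor.linear_regression")
    return model.refute_estimate(
        identified_estimand,
        estimate,
        method_name="add_unobserved_common_cause",
        confounders_effect_on_treatment="binary_flip",
        confounders_effect_on_outcome="linear",
        effect_strength_on_treatment=np.arange(0.0, 0.05, 0.01),
        effect_strength_on_outcome=0.02,
    )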
| andresmor-ms | 133e7b9a4ed32aae8ab5f39a01eb02b3a4d1c0ba | e1652ec3c6606b1bb2dfe91ef830e4b4b566712d | what would be a good naming convention? shall we keep the "_" prefix for non-user facing functions? Or is there a more pythonic way of doing this? | amit-sharma | 306 |
py-why/dowhy | 672 | Functional api/refute estimate | * Refactor refuters into functions
* Rename functional_api notebook for clarity
* Add return types to identify_estimate
* Update `__init__.py` for imports
* Add joblib for bootstrap refuter
* Create `refute_estimate` function
* Add types for refuter parameters & return types | null | 2022-10-04 16:18:49+00:00 | 2022-10-07 04:30:22+00:00 | dowhy/causal_refuters/add_unobserved_common_cause.py |
import copy
import logging
import math
import numpy as np
import pandas as pd
import scipy.stats
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from tqdm.auto import tqdm
import dowhy.causal_estimators.econml
from dowhy.causal_estimator import CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_estimators.regression_estimator import RegressionEstimator
from dowhy.causal_refuter import CausalRefutation, CausalRefuter
from dowhy.causal_refuters.evalue_sensitivity_analyzer import EValueSensitivityAnalyzer
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer
from dowhy.causal_refuters.non_parametric_sensitivity_analyzer import NonParametricSensitivityAnalyzer
from dowhy.causal_refuters.partial_linear_sensitivity_analyzer import PartialLinearSensitivityAnalyzer
class AddUnobservedCommonCause(CausalRefuter):
"""Add an unobserved confounder for refutation.
AddUnobservedCommonCause class supports three methods:
1) Simulation of an unobserved confounder
2) Linear partial R2: Sensitivity Analysis for linear models.
3) Non-Parametric partial R2 based: Sensitivity Analysis for non-parametric models.
Supports additional parameters that can be specified in the refute_estimate() method.
"""
def __init__(self, *args, **kwargs):
"""
Initialize the parameters required for the refuter.
For direct_simulation, if effect_strength_on_treatment or effect_strength_on_outcome is not
given, it is calculated automatically as a range between the
minimum and maximum effect strength of observed confounders on treatment
and outcome respectively.
:param simulation_method: The method to use for simulating effect of unobserved confounder. Possible values are ["direct-simulation", "linear-partial-R2", "non-parametric-partial-R2", "e-value"].
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param effect_strength_on_treatment: float, numpy.ndarray: [Used when simulation_method="direct-simulation"] Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param effect_strength_on_outcome: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param partial_r2_confounder_treatment: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param partial_r2_confounder_outcome: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour" and "colormesh" when more than one value is provided for both confounder effect value parameters, and "line" when a range is provided for only one of them.
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
If percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome needed to reduce the estimate by 100%, i.e., bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = False). (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param significance_level: confidence interval for statistical inference(default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param num_splits: number of splits for cross validation. (default = 5). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_data : shuffle data or not before splitting into folds (default = False). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_random_seed: seed for randomly shuffling data. (relevant only for non-parametric-partial-R2 simulation method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
"""
super().__init__(*args, **kwargs)
self.simulation_method = kwargs["simulation_method"] if "simulation_method" in kwargs else "direct-simulation"
self.effect_on_t = (
kwargs["confounders_effect_on_treatment"] if "confounders_effect_on_treatment" in kwargs else "binary_flip"
)
self.effect_on_y = (
kwargs["confounders_effect_on_outcome"] if "confounders_effect_on_outcome" in kwargs else "linear"
)
if self.simulation_method == "direct-simulation":
self.kappa_t = kwargs["effect_strength_on_treatment"] if "effect_strength_on_treatment" in kwargs else None
self.kappa_y = kwargs["effect_strength_on_outcome"] if "effect_strength_on_outcome" in kwargs else None
elif self.simulation_method in ["linear-partial-R2", "non-parametric-partial-R2"]:
self.kappa_t = (
kwargs["partial_r2_confounder_treatment"] if "partial_r2_confounder_treatment" in kwargs else None
)
self.kappa_y = (
kwargs["partial_r2_confounder_outcome"] if "partial_r2_confounder_outcome" in kwargs else None
)
elif self.simulation_method == "e-value":
pass
else:
raise ValueError(
"simulation method is not supported. Try direct-simulation, linear-partial-R2, non-parametric-partial-R2, or e-value"
)
self.frac_strength_treatment = (
kwargs["effect_fraction_on_treatment"] if "effect_fraction_on_treatment" in kwargs else 1
)
self.frac_strength_outcome = (
kwargs["effect_fraction_on_outcome"] if "effect_fraction_on_outcome" in kwargs else 1
)
self.plotmethod = kwargs["plotmethod"] if "plotmethod" in kwargs else "colormesh"
self.percent_change_estimate = kwargs["percent_change_estimate"] if "percent_change_estimate" in kwargs else 1.0
self.significance_level = kwargs["significance_level"] if "significance_level" in kwargs else 0.05
self.confounder_increases_estimate = (
kwargs["confounder_increases_estimate"] if "confounder_increases_estimate" in kwargs else False
)
self.benchmark_common_causes = (
kwargs["benchmark_common_causes"] if "benchmark_common_causes" in kwargs else None
)
self.null_hypothesis_effect = kwargs["null_hypothesis_effect"] if "null_hypothesis_effect" in kwargs else 0
self.plot_estimate = kwargs["plot_estimate"] if "plot_estimate" in kwargs else True
self.num_splits = kwargs["num_splits"] if "num_splits" in kwargs else 5
self.shuffle_data = kwargs["shuffle_data"] if "shuffle_data" in kwargs else False
self.shuffle_random_seed = kwargs["shuffle_random_seed"] if "shuffle_random_seed" in kwargs else None
self.alpha_s_estimator_param_list = (
kwargs["alpha_s_estimator_param_list"] if "alpha_s_estimator_param_list" in kwargs else None
)
self.alpha_s_estimator_list = kwargs["alpha_s_estimator_list"] if "alpha_s_estimator_list" in kwargs else None
self.g_s_estimator_list = kwargs["g_s_estimator_list"] if "g_s_estimator_list" in kwargs else None
self.g_s_estimator_param_list = (
kwargs["g_s_estimator_param_list"] if "g_s_estimator_param_list" in kwargs else None
)
self.plugin_reisz = kwargs["plugin_reisz"] if "plugin_reisz" in kwargs else False
self.logger = logging.getLogger(__name__)
def infer_default_kappa_t(self, len_kappa_t=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_t."
+ " Provide a value for kappa_t"
)
t = self._data[self._treatment_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_t == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
tmodel = LogisticRegression().fit(observed_common_causes, t)
tpred = tmodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
tcap = tmodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(tcap - tpred)) / tpred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_t == "linear":
# Estimating the regression coefficient from standardized features to t
corrcoef_var_t = np.corrcoef(observed_common_causes, t, rowvar=False)[-1, :-1]
std_dev_t = np.std(t)[0]
max_coeff = max(corrcoef_var_t) * std_dev_t
min_coeff = min(corrcoef_var_t) * std_dev_t
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_treatment)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_t
self.logger.info("(Min, Max) kappa_t for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _compute_min_max_coeff(self, min_coeff, max_coeff, effect_strength_fraction):
max_coeff = effect_strength_fraction * max_coeff
min_coeff = effect_strength_fraction * min_coeff
return min_coeff, max_coeff
def infer_default_kappa_y(self, len_kappa_y=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_y."
+ " Provide a value for kappa_y"
)
y = self._data[self._outcome_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_y == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
ymodel = LogisticRegression().fit(observed_common_causes, y)
ypred = ymodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
ycap = ymodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(ycap - ypred)) / ypred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_y == "linear":
corrcoef_var_y = np.corrcoef(observed_common_causes, y, rowvar=False)[-1, :-1]
std_dev_y = np.std(y)[0]
max_coeff = max(corrcoef_var_y) * std_dev_y
min_coeff = min(corrcoef_var_y) * std_dev_y
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_outcome)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_y
self.logger.info("(Min, Max) kappa_y for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def refute_estimate(self, show_progress_bar=False):
"""
This function attempts to add an unobserved common cause to the outcome and the treatment. At present, the behavior is implemented for one-dimensional, continuous or binary
variables. This function can take either single-valued inputs or a range of inputs; it then looks at the data type of the input and decides on the course of
action.
:return: CausalRefuter: An object that contains the estimated effect and a new effect and the name of the refutation used.
"""
if self.simulation_method == "linear-partial-R2":
if not (isinstance(self._estimate.estimator, LinearRegressionEstimator)):
raise NotImplementedError(
"Currently only LinearRegressionEstimator is supported for Sensitivity Analysis"
)
if len(self._estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
if self.frac_strength_outcome == 1:
self.frac_strength_outcome = self.frac_strength_treatment
analyzer = LinearSensitivityAnalyzer(
estimator=self._estimate.estimator,
data=self._data,
treatment_name=self._treatment_name,
percent_change_estimate=self.percent_change_estimate,
significance_level=self.significance_level,
benchmark_common_causes=self.benchmark_common_causes,
null_hypothesis_effect=self.null_hypothesis_effect,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
common_causes_order=self._estimate.estimator._observed_common_causes.columns,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.simulation_method == "non-parametric-partial-R2":
# If the estimator used is LinearDML, partially linear sensitivity analysis will be automatically chosen
if isinstance(self._estimate.estimator, dowhy.causal_estimators.econml.Econml):
if self._estimate.estimator._econml_methodname == "econml.dml.LinearDML":
analyzer = PartialLinearSensitivityAnalyzer(
estimator=self._estimate._estimator_object,
observed_common_causes=self._estimate.estimator._observed_common_causes,
treatment=self._estimate.estimator._treatment,
outcome=self._estimate.estimator._outcome,
alpha_s_estimator_param_list=self.alpha_s_estimator_param_list,
g_s_estimator_list=self.g_s_estimator_list,
g_s_estimator_param_list=self.g_s_estimator_param_list,
effect_strength_treatment=self.kappa_t,
effect_strength_outcome=self.kappa_y,
benchmark_common_causes=self.benchmark_common_causes,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
analyzer = NonParametricSensitivityAnalyzer(
estimator=self._estimate.estimator,
observed_common_causes=self._estimate.estimator._observed_common_causes,
treatment=self._estimate.estimator._treatment,
outcome=self._estimate.estimator._outcome,
alpha_s_estimator_list=self.alpha_s_estimator_list,
alpha_s_estimator_param_list=self.alpha_s_estimator_param_list,
g_s_estimator_list=self.g_s_estimator_list,
g_s_estimator_param_list=self.g_s_estimator_param_list,
effect_strength_treatment=self.kappa_t,
effect_strength_outcome=self.kappa_y,
benchmark_common_causes=self.benchmark_common_causes,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
theta_s=self._estimate.value,
plugin_reisz=self.plugin_reisz,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.simulation_method == "e-value":
if not isinstance(self._estimate.estimator, RegressionEstimator):
raise NotImplementedError(
"E-Value sensitivity analysis is currently only implemented RegressionEstimator."
)
if len(self._estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
analyzer = EValueSensitivityAnalyzer(
estimate=self._estimate,
estimand=self._target_estimand,
data=self._data,
treatment_name=self._treatment_name[0],
outcome_name=self._outcome_name[0],
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.kappa_t is None:
self.kappa_t = self.infer_default_kappa_t()
if self.kappa_y is None:
self.kappa_y = self.infer_default_kappa_y()
if not isinstance(self.kappa_t, (list, np.ndarray)) and not isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with single value inputs
new_data = copy.deepcopy(self._data)
new_data = self.include_confounders_effect(new_data, self.kappa_t, self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(new_data, self._target_estimand, self._estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
refute.new_effect_array = np.array(new_effect.value)
refute.new_effect = new_effect.value
refute.add_refuter(self)
return refute
else: # Deal with multiple value inputs
if isinstance(self.kappa_t, (list, np.ndarray)) and isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with range inputs
# Get a 2D matrix of values
# x,y = np.meshgrid(self.kappa_t, self.kappa_y) # x,y are both MxN
results_matrix = np.random.rand(
len(self.kappa_t), len(self.kappa_y)
) # Matrix to hold all the results of NxM
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
for j in range(len(self.kappa_y)):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y[j])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value,
new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause",
)
results_matrix[i][j] = refute.new_effect # Populate the results
refute.new_effect_array = results_matrix
refute.new_effect = (np.min(results_matrix), np.max(results_matrix))
# Store the values into the refute object
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
oe = self._estimate.value
contour_levels = [oe / 4.0, oe / 2.0, (3.0 / 4) * oe, oe]
contour_levels.extend([0, np.min(results_matrix), np.max(results_matrix)])
if self.plotmethod == "contour":
cp = plt.contourf(self.kappa_y, self.kappa_t, results_matrix, levels=sorted(contour_levels))
# Adding a label on the contour line for the original estimate
fmt = {}
trueeffect_index = np.where(cp.levels == oe)[0][0]
fmt[cp.levels[trueeffect_index]] = "Estimated Effect"
# Label every other level using strings
plt.clabel(cp, [cp.levels[trueeffect_index]], inline=True, fmt=fmt)
plt.colorbar(cp)
elif self.plotmethod == "colormesh":
cp = plt.pcolormesh(self.kappa_y, self.kappa_t, results_matrix, shading="nearest")
plt.colorbar(cp, ticks=contour_levels)
ax.yaxis.set_ticks(self.kappa_t)
ax.xaxis.set_ticks(self.kappa_y)
plt.xticks(rotation=45)
ax.set_title("Effect of Unobserved Common Cause")
ax.set_ylabel("Value of Linear Constant on Treatment")
ax.set_xlabel("Value of Linear Constant on Outcome")
plt.show()
return refute
elif isinstance(self.kappa_t, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_t))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_t, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Treatment")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
elif isinstance(self.kappa_y, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_y))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_y)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t, self.kappa_y[i])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_y, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Outcome")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
def include_confounders_effect(self, new_data, kappa_t, kappa_y):
"""
This function deals with the change in the value of the data due to the effect of the unobserved confounder.
In the case of a binary flip, the value is flipped only for rows whose standard-normal draw exceeds the threshold implied by kappa.
In the case of a linear effect, kappa is used as the regression coefficient of the simulated confounder when adding its contribution to the variable.
:param new_data: pandas.DataFrame: The data to be changed due to the effects of the unobserved confounder.
:param kappa_t: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:param kappa_y: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:return: pandas.DataFrame: The DataFrame that includes the effects of the unobserved confounder.
"""
num_rows = self._data.shape[0]
stdnorm = scipy.stats.norm()
w_random = stdnorm.rvs(num_rows)
if self.effect_on_t == "binary_flip":
alpha = 2 * kappa_t - 1 if kappa_t >= 0.5 else 1 - 2 * kappa_t
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_t >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._treatment_name] = (
1 - new_data.loc[rel_interval <= w_random, self._treatment_name]
)
for tname in self._treatment_name:
if pd.api.types.is_bool_dtype(self._data[tname]):
new_data = new_data.astype({tname: "bool"}, copy=False)
elif self.effect_on_t == "linear":
confounder_t_effect = kappa_t * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._treatment_name] = new_data[self._treatment_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_t_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
if self.effect_on_y == "binary_flip":
alpha = 2 * kappa_y - 1 if kappa_y >= 0.5 else 1 - 2 * kappa_y
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_y >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._outcome_name] = (
1 - new_data.loc[rel_interval <= w_random, self._outcome_name]
)
for yname in self._outcome_name:
if pd.api.types.is_bool_dtype(self._data[yname]):
new_data = new_data.astype({yname: "bool"}, copy=False)
elif self.effect_on_y == "linear":
confounder_y_effect = (-1) * kappa_y * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._outcome_name] = new_data[self._outcome_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_y_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
return new_data
def include_simulated_confounder(self, convergence_threshold=0.1, c_star_max=1000):
"""
This function simulates an unobserved confounder based on the data using the following steps:
1. It calculates the "residuals" from the treatment and outcome model
i.) The outcome model has outcome as the dependent variable and all the observed variables including treatment as independent variables
ii.) The treatment model has treatment as the dependent variable and all the observed variables as independent variables.
2. U is an intermediate random variable drawn from the normal distribution with the weighted average of residuals as mean and a unit variance
U ~ N(c1*d_y + c2*d_t, 1)
where
*d_y and d_t are residuals from the treatment and outcome model
*c1 and c2 are coefficients to the residuals
3. The final U, which is the simulated unobserved confounder is obtained by debiasing the intermediate variable U by residualising it with X
Choosing the coefficients c1 and c2:
The coefficients are chosen based on these basic assumptions:
1. There is a hyperbolic relationship satisfying c1*c2 = c_star
2. c_star is chosen from a range of possible values based on the correlation of the obtained simulated variable with outcome and treatment.
3. The product of correlations with treatment and outcome should be at a minimum distance to the maximum correlations with treatment and outcome in any of the observed confounders
4. The ratio of the weights should be such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
        :param c_star_max: The maximum possible value for the hyperbolic curve on which the coefficients to the residuals lie. It defaults to 1000 in the code if not specified by the user.
        :type c_star_max: int
        :param convergence_threshold: The threshold to check the plateauing of the correlation while selecting a c_star. It defaults to 0.1 in the code if not specified by the user.
        :type convergence_threshold: float
        :returns: The simulated values of the unobserved confounder based on the data
        :rtype: pandas.core.series.Series
"""
# Obtaining the list of observed variables
required_variables = True
observed_variables = self.choose_variables(required_variables)
observed_variables_with_treatment_and_outcome = observed_variables + self._treatment_name + self._outcome_name
# Taking a subset of the dataframe that has only observed variables
self._data = self._data[observed_variables_with_treatment_and_outcome]
# Residuals from the outcome model obtained by fitting a linear model
y = self._data[self._outcome_name[0]]
observed_variables_with_treatment = observed_variables + self._treatment_name
X = self._data[observed_variables_with_treatment]
model = sm.OLS(y, X.astype("float"))
results = model.fit()
residuals_y = y - results.fittedvalues
d_y = list(pd.Series(residuals_y))
# Residuals from the treatment model obtained by fitting a linear model
t = self._data[self._treatment_name[0]].astype("int64")
X = self._data[observed_variables]
model = sm.OLS(t, X)
results = model.fit()
residuals_t = t - results.fittedvalues
d_t = list(pd.Series(residuals_t))
# Initialising product_cor_metric_observed with a really low value as finding maximum
product_cor_metric_observed = -10000000000
for i in observed_variables:
current_obs_confounder = self._data[i]
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_obs_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_obs_confounder.corr(treatment_values)
product_cor_metric_current = correlation_y * correlation_t
if product_cor_metric_current >= product_cor_metric_observed:
product_cor_metric_observed = product_cor_metric_current
correlation_t_observed = correlation_t
correlation_y_observed = correlation_y
        # The user has the option to provide effect_strength_on_y and effect_strength_on_t, which are then used instead of the maximum correlation with treatment and outcome among the observed variables, since they specify the desired effect strength.
if self.kappa_t is not None:
correlation_t_observed = self.kappa_t
if self.kappa_y is not None:
correlation_y_observed = self.kappa_y
# Choosing a c_star based on the data.
        # The correlations stop increasing beyond a certain value of c_star, i.e. they plateau, and we choose c_star to be the value at which the plateau begins.
correlation_y_list = []
correlation_t_list = []
product_cor_metric_simulated_list = []
x_list = []
step = int(c_star_max / 10)
for i in range(0, int(c_star_max), step):
c1 = math.sqrt(i)
c2 = c1
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
correlation_y_list.append(correlation_y)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
correlation_t_list.append(correlation_t)
product_cor_metric_simulated = correlation_y * correlation_t
product_cor_metric_simulated_list.append(product_cor_metric_simulated)
x_list.append(i)
index = 1
while index < len(correlation_y_list):
if (correlation_y_list[index] - correlation_y_list[index - 1]) <= convergence_threshold:
c_star = x_list[index]
break
index = index + 1
        # Once c_star is chosen, c1 and c2 are selected on the hyperbolic curve by going over various combinations of c1 and c2 values and choosing the combination
        # that maintains the minimum distance between the product of correlations of the simulated variable and the product of maximum correlations of one of the observed variables,
        # and additionally checks that the ratio of the weights is such that it maintains the ratio of the maximum possible observed coefficients within some confidence interval
# c1_final and c2_final are initialised to the values on the hyperbolic curve such that c1_final = c2_final and c1_final*c2_final = c_star
c1_final = math.sqrt(c_star)
c2_final = math.sqrt(c_star)
# initialising min_distance_between_product_cor_metrics to be a value greater than 1
min_distance_between_product_cor_metrics = 1.5
i = 0.05
threshold = c_star / 0.05
while i <= threshold:
c2 = i
c1 = c_star / c2
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
product_cor_metric_simulated = correlation_y * correlation_t
if min_distance_between_product_cor_metrics >= abs(
product_cor_metric_simulated - product_cor_metric_observed
):
min_distance_between_product_cor_metrics = abs(
product_cor_metric_simulated - product_cor_metric_observed
)
additional_condition = correlation_y_observed / correlation_t_observed
if ((c1 / c2) <= (additional_condition + 0.3 * additional_condition)) and (
(c1 / c2) >= (additional_condition - 0.3 * additional_condition)
): # choose minimum positive value
c1_final = c1
c2_final = c2
i = i * 1.5
"""#closed form solution
print("c_star_max before closed form", c_star_max)
if max_correlation_with_t == -1000:
c2 = 0
c1 = c_star_max
else:
additional_condition = abs(max_correlation_with_y/max_correlation_with_t)
print("additional_condition", additional_condition)
c2 = math.sqrt(c_star_max/additional_condition)
c1 = c_star_max/c2"""
final_U = self.generate_confounder_from_residuals(c1_final, c2_final, d_y, d_t, X)
return final_U
def generate_confounder_from_residuals(self, c1, c2, d_y, d_t, X):
"""
        This function takes the residuals from the treatment and outcome models together with their coefficients, and simulates the intermediate random variable U by drawing,
        for each row, from a normal distribution centred at the weighted sum of the residuals; the intermediate variable is then debiased to obtain the final confounder.
        :param c1: coefficient to the residual from the outcome model
        :type c1: float
        :param c2: coefficient to the residual from the treatment model
        :type c2: float
        :param d_y: residuals from the outcome model
        :type d_y: list
        :param d_t: residuals from the treatment model
        :type d_t: list
        :returns: The simulated values of the unobserved confounder based on the data
        :rtype: pandas.core.series.Series
"""
U = []
for j in range(len(d_t)):
simulated_variable_mean = c1 * d_y[j] + c2 * d_t[j]
simulated_variable_stddev = 1
U.append(np.random.normal(simulated_variable_mean, simulated_variable_stddev, 1))
U = np.array(U)
model = sm.OLS(U, X)
results = model.fit()
U = U.reshape(
-1,
)
final_U = U - results.fittedvalues.values
final_U = pd.Series(U)
return final_U
| import copy
import logging
import math
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import scipy.stats
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from tqdm.auto import tqdm
import dowhy.causal_estimators.econml
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_estimators.regression_estimator import RegressionEstimator
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.causal_refuter import CausalRefutation, CausalRefuter, choose_variables
from dowhy.causal_refuters.evalue_sensitivity_analyzer import EValueSensitivityAnalyzer
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer
from dowhy.causal_refuters.non_parametric_sensitivity_analyzer import NonParametricSensitivityAnalyzer
from dowhy.causal_refuters.partial_linear_sensitivity_analyzer import PartialLinearSensitivityAnalyzer
logger = logging.getLogger(__name__)
class AddUnobservedCommonCause(CausalRefuter):
"""Add an unobserved confounder for refutation.
AddUnobservedCommonCause class supports three methods:
1) Simulation of an unobserved confounder
2) Linear partial R2 : Sensitivity Analysis for linear models.
    3) Non-Parametric partial R2 based : Sensitivity Analysis for non-parametric models.
Supports additional parameters that can be specified in the refute_estimate() method.
"""
def __init__(self, *args, **kwargs):
"""
Initialize the parameters required for the refuter.
For direct_simulation, if effect_strength_on_treatment or effect_strength_on_outcome is not
given, it is calculated automatically as a range between the
minimum and maximum effect strength of observed confounders on treatment
and outcome respectively.
:param simulation_method: The method to use for simulating effect of unobserved confounder. Possible values are ["direct-simulation", "linear-partial-R2", "non-parametric-partial-R2", "e-value"].
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param effect_strength_on_treatment: float, numpy.ndarray: [Used when simulation_method="direct-simulation"] Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param effect_strength_on_outcome: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param partial_r2_confounder_treatment: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param partial_r2_confounder_outcome: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
        :param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
        if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome needed to reduce the estimate by 100%, i.e. bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = False). (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
        :param significance_level: confidence interval for statistical inference (default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param num_splits: number of splits for cross validation. (default = 5). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_data : shuffle data or not before splitting into folds (default = False). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_random_seed: seed for randomly shuffling data. (relevant only for non-parametric-partial-R2 simulation method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
"""
super().__init__(*args, **kwargs)
self.simulation_method = kwargs["simulation_method"] if "simulation_method" in kwargs else "direct-simulation"
self.effect_on_t = (
kwargs["confounders_effect_on_treatment"] if "confounders_effect_on_treatment" in kwargs else "binary_flip"
)
self.effect_on_y = (
kwargs["confounders_effect_on_outcome"] if "confounders_effect_on_outcome" in kwargs else "linear"
)
if self.simulation_method == "direct-simulation":
self.kappa_t = kwargs["effect_strength_on_treatment"] if "effect_strength_on_treatment" in kwargs else None
self.kappa_y = kwargs["effect_strength_on_outcome"] if "effect_strength_on_outcome" in kwargs else None
elif self.simulation_method in ["linear-partial-R2", "non-parametric-partial-R2"]:
self.kappa_t = (
kwargs["partial_r2_confounder_treatment"] if "partial_r2_confounder_treatment" in kwargs else None
)
self.kappa_y = (
kwargs["partial_r2_confounder_outcome"] if "partial_r2_confounder_outcome" in kwargs else None
)
elif self.simulation_method == "e-value":
pass
else:
raise ValueError(
"simulation method is not supported. Try direct-simulation, linear-partial-R2, non-parametric-partial-R2, or e-value"
)
self.frac_strength_treatment = (
kwargs["effect_fraction_on_treatment"] if "effect_fraction_on_treatment" in kwargs else 1
)
self.frac_strength_outcome = (
kwargs["effect_fraction_on_outcome"] if "effect_fraction_on_outcome" in kwargs else 1
)
self.plotmethod = kwargs["plotmethod"] if "plotmethod" in kwargs else "colormesh"
self.percent_change_estimate = kwargs["percent_change_estimate"] if "percent_change_estimate" in kwargs else 1.0
self.significance_level = kwargs["significance_level"] if "significance_level" in kwargs else 0.05
self.confounder_increases_estimate = (
kwargs["confounder_increases_estimate"] if "confounder_increases_estimate" in kwargs else False
)
self.benchmark_common_causes = (
kwargs["benchmark_common_causes"] if "benchmark_common_causes" in kwargs else None
)
self.null_hypothesis_effect = kwargs["null_hypothesis_effect"] if "null_hypothesis_effect" in kwargs else 0
self.plot_estimate = kwargs["plot_estimate"] if "plot_estimate" in kwargs else True
self.num_splits = kwargs["num_splits"] if "num_splits" in kwargs else 5
self.shuffle_data = kwargs["shuffle_data"] if "shuffle_data" in kwargs else False
self.shuffle_random_seed = kwargs["shuffle_random_seed"] if "shuffle_random_seed" in kwargs else None
self.alpha_s_estimator_param_list = (
kwargs["alpha_s_estimator_param_list"] if "alpha_s_estimator_param_list" in kwargs else None
)
self.alpha_s_estimator_list = kwargs["alpha_s_estimator_list"] if "alpha_s_estimator_list" in kwargs else None
self.g_s_estimator_list = kwargs["g_s_estimator_list"] if "g_s_estimator_list" in kwargs else None
self.g_s_estimator_param_list = (
kwargs["g_s_estimator_param_list"] if "g_s_estimator_param_list" in kwargs else None
)
self.plugin_reisz = kwargs["plugin_reisz"] if "plugin_reisz" in kwargs else False
self.logger = logging.getLogger(__name__)
def refute_estimate(self, show_progress_bar=False):
if self.simulation_method == "linear-partial-R2":
return sensitivity_linear_partial_r2(
self._data,
self._estimate,
self._treatment_name,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.percent_change_estimate,
self.benchmark_common_causes,
self.significance_level,
self.null_hypothesis_effect,
self.plot_estimate,
)
elif self.simulation_method == "non-parametric-partial-R2":
return sensitivity_non_parametric_partial_r2(
self._estimate,
self.kappa_t,
self.kappa_y,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.benchmark_common_causes,
self.plot_estimate,
self.alpha_s_estimator_list,
self.alpha_s_estimator_param_list,
self.g_s_estimator_list,
self.g_s_estimator_param_list,
self.plugin_reisz,
)
elif self.simulation_method == "e-value":
return sensitivity_e_value(
self._data,
self._target_estimand,
self._estimate,
self._treatment_name,
self._outcome_name,
self.plot_estimate,
)
elif self.simulation_method == "direct-simulation":
refute = sensitivity_simulation(
self._data,
self._target_estimand,
self._estimate,
self._treatment_name,
self._outcome_name,
self.kappa_t,
self.kappa_y,
self.effect_on_t,
self.effect_on_y,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.plotmethod,
show_progress_bar,
)
refute.add_refuter(self)
return refute
def _infer_default_kappa_t(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
treatment_name: List[str],
effect_on_t: str,
frac_strength_treatment: float,
len_kappa_t: int = 10,
):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_t."
+ " Provide a value for kappa_t"
)
t = data[treatment_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if effect_on_t == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
tmodel = LogisticRegression().fit(observed_common_causes, t)
tpred = tmodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
tcap = tmodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(tcap - tpred)) / tpred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif effect_on_t == "linear":
# Estimating the regression coefficient from standardized features to t
corrcoef_var_t = np.corrcoef(observed_common_causes, t, rowvar=False)[-1, :-1]
std_dev_t = np.std(t)[0]
max_coeff = max(corrcoef_var_t) * std_dev_t
min_coeff = min(corrcoef_var_t) * std_dev_t
else:
raise NotImplementedError("'" + effect_on_t + "' method not supported for confounders' effect on treatment")
min_coeff, max_coeff = _compute_min_max_coeff(min_coeff, max_coeff, frac_strength_treatment)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_t
logger.info("(Min, Max) kappa_t for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _compute_min_max_coeff(min_coeff: float, max_coeff: float, effect_strength_fraction: np.ndarray):
max_coeff = effect_strength_fraction * max_coeff
min_coeff = effect_strength_fraction * min_coeff
return min_coeff, max_coeff
def _infer_default_kappa_y(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
outcome_name: List[str],
effect_on_y: str,
frac_strength_outcome: float,
len_kappa_y: int = 10,
):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_y."
+ " Provide a value for kappa_y"
)
y = data[outcome_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if effect_on_y == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
ymodel = LogisticRegression().fit(observed_common_causes, y)
ypred = ymodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
ycap = ymodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(ycap - ypred)) / ypred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif effect_on_y == "linear":
corrcoef_var_y = np.corrcoef(observed_common_causes, y, rowvar=False)[-1, :-1]
std_dev_y = np.std(y)[0]
max_coeff = max(corrcoef_var_y) * std_dev_y
min_coeff = min(corrcoef_var_y) * std_dev_y
else:
raise NotImplementedError("'" + effect_on_y + "' method not supported for confounders' effect on outcome")
min_coeff, max_coeff = _compute_min_max_coeff(min_coeff, max_coeff, frac_strength_outcome)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_y
logger.info("(Min, Max) kappa_y for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _include_confounders_effect(
data: pd.DataFrame,
new_data: pd.DataFrame,
effect_on_t: str,
treatment_name: str,
kappa_t: float,
effect_on_y: str,
outcome_name: str,
kappa_y: float,
):
"""
This function deals with the change in the value of the data due to the effect of the unobserved confounder.
In the case of a binary flip, we flip only if the random number is greater than the threshold set.
In the case of a linear effect, we use the variable as the linear regression constant.
:param new_data: pandas.DataFrame: The data to be changed due to the effects of the unobserved confounder.
:param kappa_t: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:param kappa_y: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:return: pandas.DataFrame: The DataFrame that includes the effects of the unobserved confounder.
"""
num_rows = data.shape[0]
stdnorm = scipy.stats.norm()
w_random = stdnorm.rvs(num_rows)
if effect_on_t == "binary_flip":
alpha = 2 * kappa_t - 1 if kappa_t >= 0.5 else 1 - 2 * kappa_t
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_t >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, treatment_name] = (
1 - new_data.loc[rel_interval <= w_random, treatment_name]
)
for tname in treatment_name:
if pd.api.types.is_bool_dtype(data[tname]):
new_data = new_data.astype({tname: "bool"}, copy=False)
elif effect_on_t == "linear":
confounder_t_effect = kappa_t * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[treatment_name] = new_data[treatment_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_t_effect
)
else:
raise NotImplementedError("'" + effect_on_t + "' method not supported for confounders' effect on treatment")
if effect_on_y == "binary_flip":
alpha = 2 * kappa_y - 1 if kappa_y >= 0.5 else 1 - 2 * kappa_y
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_y >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, outcome_name] = 1 - new_data.loc[rel_interval <= w_random, outcome_name]
for yname in outcome_name:
if pd.api.types.is_bool_dtype(data[yname]):
new_data = new_data.astype({yname: "bool"}, copy=False)
elif effect_on_y == "linear":
confounder_y_effect = (-1) * kappa_y * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[outcome_name] = new_data[outcome_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_y_effect
)
else:
raise NotImplementedError("'" + effect_on_y + "' method not supported for confounders' effect on outcome")
return new_data
def include_simulated_confounder(
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
kappa_t: float,
kappa_y: float,
convergence_threshold: float = 0.1,
c_star_max: int = 1000,
):
"""
This function simulates an unobserved confounder based on the data using the following steps:
1. It calculates the "residuals" from the treatment and outcome model
i.) The outcome model has outcome as the dependent variable and all the observed variables including treatment as independent variables
ii.) The treatment model has treatment as the dependent variable and all the observed variables as independent variables.
2. U is an intermediate random variable drawn from the normal distribution with the weighted average of residuals as mean and a unit variance
U ~ N(c1*d_y + c2*d_t, 1)
where
*d_y and d_t are residuals from the treatment and outcome model
*c1 and c2 are coefficients to the residuals
3. The final U, which is the simulated unobserved confounder is obtained by debiasing the intermediate variable U by residualising it with X
Choosing the coefficients c1 and c2:
The coefficients are chosen based on these basic assumptions:
1. There is a hyperbolic relationship satisfying c1*c2 = c_star
2. c_star is chosen from a range of possible values based on the correlation of the obtained simulated variable with outcome and treatment.
3. The product of correlations with treatment and outcome should be at a minimum distance to the maximum correlations with treatment and outcome in any of the observed confounders
4. The ratio of the weights should be such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
    :param c_star_max: The maximum possible value for the hyperbolic curve on which the coefficients to the residuals lie. It defaults to 1000 in the code if not specified by the user.
    :type c_star_max: int
    :param convergence_threshold: The threshold to check the plateauing of the correlation while selecting a c_star. It defaults to 0.1 in the code if not specified by the user.
    :type convergence_threshold: float
    :returns: The simulated values of the unobserved confounder based on the data
    :rtype: pandas.core.series.Series
"""
# Obtaining the list of observed variables
required_variables = True
observed_variables = choose_variables(required_variables)
observed_variables_with_treatment_and_outcome = observed_variables + treatment_name + outcome_name
# Taking a subset of the dataframe that has only observed variables
data = data[observed_variables_with_treatment_and_outcome]
# Residuals from the outcome model obtained by fitting a linear model
y = data[outcome_name[0]]
observed_variables_with_treatment = observed_variables + treatment_name
X = data[observed_variables_with_treatment]
model = sm.OLS(y, X.astype("float"))
results = model.fit()
residuals_y = y - results.fittedvalues
d_y = list(pd.Series(residuals_y))
# Residuals from the treatment model obtained by fitting a linear model
t = data[treatment_name[0]].astype("int64")
X = data[observed_variables]
model = sm.OLS(t, X)
results = model.fit()
residuals_t = t - results.fittedvalues
d_t = list(pd.Series(residuals_t))
# Initialising product_cor_metric_observed with a really low value as finding maximum
product_cor_metric_observed = -10000000000
for i in observed_variables:
current_obs_confounder = data[i]
outcome_values = data[outcome_name[0]]
correlation_y = current_obs_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_obs_confounder.corr(treatment_values)
product_cor_metric_current = correlation_y * correlation_t
if product_cor_metric_current >= product_cor_metric_observed:
product_cor_metric_observed = product_cor_metric_current
correlation_t_observed = correlation_t
correlation_y_observed = correlation_y
    # The user has the option to provide effect_strength_on_y and effect_strength_on_t, which are then used instead of the maximum correlation with treatment and outcome among the observed variables, since they specify the desired effect strength.
if kappa_t is not None:
correlation_t_observed = kappa_t
if kappa_y is not None:
correlation_y_observed = kappa_y
# Choosing a c_star based on the data.
    # The correlations stop increasing beyond a certain value of c_star, i.e. they plateau, and we choose c_star to be the value at which the plateau begins.
correlation_y_list = []
correlation_t_list = []
product_cor_metric_simulated_list = []
x_list = []
step = int(c_star_max / 10)
for i in range(0, int(c_star_max), step):
c1 = math.sqrt(i)
c2 = c1
final_U = _generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = data[outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
correlation_y_list.append(correlation_y)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
correlation_t_list.append(correlation_t)
product_cor_metric_simulated = correlation_y * correlation_t
product_cor_metric_simulated_list.append(product_cor_metric_simulated)
x_list.append(i)
index = 1
while index < len(correlation_y_list):
if (correlation_y_list[index] - correlation_y_list[index - 1]) <= convergence_threshold:
c_star = x_list[index]
break
index = index + 1
    # Once c_star is chosen, c1 and c2 are selected on the hyperbolic curve by going over various combinations of c1 and c2 values and choosing the combination
    # that maintains the minimum distance between the product of correlations of the simulated variable and the product of maximum correlations of one of the observed variables,
    # and additionally checks that the ratio of the weights is such that it maintains the ratio of the maximum possible observed coefficients within some confidence interval
# c1_final and c2_final are initialised to the values on the hyperbolic curve such that c1_final = c2_final and c1_final*c2_final = c_star
c1_final = math.sqrt(c_star)
c2_final = math.sqrt(c_star)
# initialising min_distance_between_product_cor_metrics to be a value greater than 1
min_distance_between_product_cor_metrics = 1.5
i = 0.05
threshold = c_star / 0.05
while i <= threshold:
c2 = i
c1 = c_star / c2
final_U = _generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = data[outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
product_cor_metric_simulated = correlation_y * correlation_t
if min_distance_between_product_cor_metrics >= abs(product_cor_metric_simulated - product_cor_metric_observed):
min_distance_between_product_cor_metrics = abs(product_cor_metric_simulated - product_cor_metric_observed)
additional_condition = correlation_y_observed / correlation_t_observed
if ((c1 / c2) <= (additional_condition + 0.3 * additional_condition)) and (
(c1 / c2) >= (additional_condition - 0.3 * additional_condition)
): # choose minimum positive value
c1_final = c1
c2_final = c2
i = i * 1.5
"""#closed form solution
print("c_star_max before closed form", c_star_max)
if max_correlation_with_t == -1000:
c2 = 0
c1 = c_star_max
else:
additional_condition = abs(max_correlation_with_y/max_correlation_with_t)
print("additional_condition", additional_condition)
c2 = math.sqrt(c_star_max/additional_condition)
c1 = c_star_max/c2"""
final_U = _generate_confounder_from_residuals(c1_final, c2_final, d_y, d_t, X)
return final_U
def _generate_confounder_from_residuals(c1, c2, d_y, d_t, X):
"""
    This function takes the residuals from the treatment and outcome models together with their coefficients, and simulates the intermediate random variable U by drawing,
    for each row, from a normal distribution centred at the weighted sum of the residuals; the intermediate variable is then debiased to obtain the final confounder.
    :param c1: coefficient to the residual from the outcome model
    :type c1: float
    :param c2: coefficient to the residual from the treatment model
    :type c2: float
    :param d_y: residuals from the outcome model
    :type d_y: list
    :param d_t: residuals from the treatment model
    :type d_t: list
    :returns: The simulated values of the unobserved confounder based on the data
    :rtype: pandas.core.series.Series
"""
U = []
for j in range(len(d_t)):
simulated_variable_mean = c1 * d_y[j] + c2 * d_t[j]
simulated_variable_stddev = 1
U.append(np.random.normal(simulated_variable_mean, simulated_variable_stddev, 1))
U = np.array(U)
model = sm.OLS(U, X)
results = model.fit()
U = U.reshape(
-1,
)
final_U = U - results.fittedvalues.values
final_U = pd.Series(U)
return final_U
def sensitivity_linear_partial_r2(
data: pd.DataFrame,
estimate: CausalEstimate,
treatment_name: str,
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
percent_change_estimate: float = 1.0,
benchmark_common_causes: Optional[List[str]] = None,
significance_level: Optional[float] = None,
null_hypothesis_effect: Optional[float] = None,
plot_estimate: bool = True,
) -> LinearSensitivityAnalyzer:
"""Add an unobserved confounder for refutation using Linear partial R2 methond (Sensitivity Analysis for linear models).
:param data: pd.DataFrame: Data to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param treatment_name: str: Name of the treatment
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
    if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome needed to reduce the estimate by 100%, i.e. bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
    :param significance_level: confidence interval for statistical inference (default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
"""
if not (isinstance(estimate.estimator, LinearRegressionEstimator)):
raise NotImplementedError("Currently only LinearRegressionEstimator is supported for Sensitivity Analysis")
if len(estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
if frac_strength_outcome == 1:
frac_strength_outcome = frac_strength_treatment
analyzer = LinearSensitivityAnalyzer(
estimator=estimate.estimator,
data=data,
treatment_name=treatment_name,
percent_change_estimate=percent_change_estimate,
significance_level=significance_level,
benchmark_common_causes=benchmark_common_causes,
null_hypothesis_effect=null_hypothesis_effect,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
common_causes_order=estimate.estimator._observed_common_causes.columns,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
def sensitivity_non_parametric_partial_r2(
estimate: CausalEstimate,
kappa_t: Optional[Union[float, np.ndarray]] = None,
kappa_y: Optional[Union[float, np.ndarray]] = None,
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
benchmark_common_causes: Optional[List[str]] = None,
plot_estimate: bool = True,
alpha_s_estimator_list: Optional[List] = None,
alpha_s_estimator_param_list: Optional[List[Dict]] = None,
g_s_estimator_list: Optional[List] = None,
g_s_estimator_param_list: Optional[List[Dict]] = None,
plugin_reisz: bool = False,
) -> Union[PartialLinearSensitivityAnalyzer, NonParametricSensitivityAnalyzer]:
"""Add an unobserved confounder for refutation using Non-parametric partial R2 methond (Sensitivity Analysis for non-parametric models).
:param estimate: CausalEstimate: Estimate to run the refutation
:param kappa_t: float, numpy.ndarray: Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param kappa_y: float, numpy.ndarray: Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param alpha_s_estimator_list: list of estimator objects for estimating alpha_s. These objects should have fit() and predict() methods (relevant only for non-parametric-partial-R2 method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
    :param plugin_reisz: bool: Flag on whether to use the plugin estimator or the nonparametric estimator for the reisz representer function (alpha_s).
"""
# If the estimator used is LinearDML, partially linear sensitivity analysis will be automatically chosen
if isinstance(estimate.estimator, dowhy.causal_estimators.econml.Econml):
if estimate.estimator._econml_methodname == "econml.dml.LinearDML":
analyzer = PartialLinearSensitivityAnalyzer(
estimator=estimate._estimator_object,
observed_common_causes=estimate.estimator._observed_common_causes,
treatment=estimate.estimator._treatment,
outcome=estimate.estimator._outcome,
alpha_s_estimator_param_list=alpha_s_estimator_param_list,
g_s_estimator_list=g_s_estimator_list,
g_s_estimator_param_list=g_s_estimator_param_list,
effect_strength_treatment=kappa_t,
effect_strength_outcome=kappa_y,
benchmark_common_causes=benchmark_common_causes,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
analyzer = NonParametricSensitivityAnalyzer(
estimator=estimate.estimator,
observed_common_causes=estimate.estimator._observed_common_causes,
treatment=estimate.estimator._treatment,
outcome=estimate.estimator._outcome,
alpha_s_estimator_list=alpha_s_estimator_list,
alpha_s_estimator_param_list=alpha_s_estimator_param_list,
g_s_estimator_list=g_s_estimator_list,
g_s_estimator_param_list=g_s_estimator_param_list,
effect_strength_treatment=kappa_t,
effect_strength_outcome=kappa_y,
benchmark_common_causes=benchmark_common_causes,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
theta_s=estimate.value,
plugin_reisz=plugin_reisz,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
def sensitivity_e_value(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_name: List[str],
outcome_name: List[str],
plot_estimate: bool = True,
) -> EValueSensitivityAnalyzer:
if not isinstance(estimate.estimator, RegressionEstimator):
raise NotImplementedError("E-Value sensitivity analysis is currently only implemented RegressionEstimator.")
if len(estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
analyzer = EValueSensitivityAnalyzer(
estimate=estimate,
estimand=target_estimand,
data=data,
treatment_name=treatment_name[0],
outcome_name=outcome_name[0],
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
def sensitivity_simulation(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_name: str,
outcome_name: str,
kappa_t: Optional[Union[float, np.ndarray]] = None,
kappa_y: Optional[Union[float, np.ndarray]] = None,
confounders_effect_on_treatment: str = "binary_flip",
confounders_effect_on_outcome: str = "linear",
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
plotmethod: Optional[str] = None,
show_progress_bar=False,
**_,
) -> CausalRefutation:
"""
    This function attempts to add an unobserved common cause to the outcome and the treatment. At present, the behavior is implemented for one-dimensional continuous
    and binary variables. The function can take either single-valued inputs or a range of inputs; it then looks at the data type of the input and decides on the course of
    action.
:param data: pd.DataFrame: Data to run the refutation
:param target_estimand: IdentifiedEstimand: Identified estimand to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param treatment_name: str: Name of the treatment
:param outcome_name: str: Name of the outcome
:param kappa_t: float, numpy.ndarray: Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param kappa_y: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
    :param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
    :return: CausalRefutation: An object that contains the original estimated effect, the new effect after including the unobserved common cause, and the name of the refutation used.
"""
if kappa_t is None:
kappa_t = _infer_default_kappa_t(
data, target_estimand, treatment_name, confounders_effect_on_treatment, frac_strength_treatment
)
if kappa_y is None:
kappa_y = _infer_default_kappa_y(
data, target_estimand, outcome_name, confounders_effect_on_outcome, frac_strength_outcome
)
if not isinstance(kappa_t, (list, np.ndarray)) and not isinstance(
kappa_y, (list, np.ndarray)
): # Deal with single value inputs
new_data = copy.deepcopy(data)
new_data = _include_confounders_effect(
data,
new_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t,
confounders_effect_on_outcome,
outcome_name,
kappa_y,
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
refute.new_effect_array = np.array(new_effect.value)
refute.new_effect = new_effect.value
return refute
else: # Deal with multiple value inputs
if isinstance(kappa_t, (list, np.ndarray)) and isinstance(
kappa_y, (list, np.ndarray)
): # Deal with range inputs
# Get a 2D matrix of values
# x,y = np.meshgrid(self.kappa_t, self.kappa_y) # x,y are both MxN
results_matrix = np.random.rand(len(kappa_t), len(kappa_y)) # Matrix to hold all the results of NxM
orig_data = copy.deepcopy(data)
for i in tqdm(
range(len(kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
for j in range(len(kappa_y)):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t[i],
confounders_effect_on_outcome,
outcome_name,
kappa_y[j],
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value,
new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause",
)
results_matrix[i][j] = refute.new_effect # Populate the results
refute.new_effect_array = results_matrix
refute.new_effect = (np.min(results_matrix), np.max(results_matrix))
# Store the values into the refute object
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
oe = estimate.value
contour_levels = [oe / 4.0, oe / 2.0, (3.0 / 4) * oe, oe]
contour_levels.extend([0, np.min(results_matrix), np.max(results_matrix)])
if plotmethod == "contour":
cp = plt.contourf(kappa_y, kappa_t, results_matrix, levels=sorted(contour_levels))
# Adding a label on the contour line for the original estimate
fmt = {}
trueeffect_index = np.where(cp.levels == oe)[0][0]
fmt[cp.levels[trueeffect_index]] = "Estimated Effect"
# Label every other level using strings
plt.clabel(cp, [cp.levels[trueeffect_index]], inline=True, fmt=fmt)
plt.colorbar(cp)
elif plotmethod == "colormesh":
cp = plt.pcolormesh(kappa_y, kappa_t, results_matrix, shading="nearest")
plt.colorbar(cp, ticks=contour_levels)
ax.yaxis.set_ticks(kappa_t)
ax.xaxis.set_ticks(kappa_y)
plt.xticks(rotation=45)
ax.set_title("Effect of Unobserved Common Cause")
ax.set_ylabel("Value of Linear Constant on Treatment")
ax.set_xlabel("Value of Linear Constant on Outcome")
plt.show()
return refute
elif isinstance(kappa_t, (list, np.ndarray)):
outcomes = np.random.rand(len(kappa_t))
orig_data = copy.deepcopy(data)
for i in tqdm(
range(0, len(kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t[i],
confounders_effect_on_outcome,
outcome_name,
kappa_y,
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(kappa_t, outcomes)
plt.axhline(estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Treatment")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
elif isinstance(kappa_y, (list, np.ndarray)):
outcomes = np.random.rand(len(kappa_y))
orig_data = copy.deepcopy(data)
for i in tqdm(
range(0, len(kappa_y)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t,
confounders_effect_on_outcome,
outcome_name,
kappa_y[i],
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(kappa_y, outcomes)
plt.axhline(estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Outcome")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
| andresmor-ms | 133e7b9a4ed32aae8ab5f39a01eb02b3a4d1c0ba | e1652ec3c6606b1bb2dfe91ef830e4b4b566712d | this is the list of estimator objects for estimating alpha_s. These objects should have fit() and predict() methods (relevant only for non-parametric-partial-R2 method) | amit-sharma | 307 |
py-why/dowhy | 672 | Functional api/refute estimate | * Refactor refuters into functions
* Rename functional_api notebook for clarity
* Add return types to identify_estimate
* Update `__init__.py` for imports
* Add joblib for bootstrap refuter
* Create `refute_estimate` function
* Add types for refuter parameters & return types | null | 2022-10-04 16:18:49+00:00 | 2022-10-07 04:30:22+00:00 | dowhy/causal_refuters/add_unobserved_common_cause.py | import copy
import logging
import math
import numpy as np
import pandas as pd
import scipy.stats
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from tqdm.auto import tqdm
import dowhy.causal_estimators.econml
from dowhy.causal_estimator import CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_estimators.regression_estimator import RegressionEstimator
from dowhy.causal_refuter import CausalRefutation, CausalRefuter
from dowhy.causal_refuters.evalue_sensitivity_analyzer import EValueSensitivityAnalyzer
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer
from dowhy.causal_refuters.non_parametric_sensitivity_analyzer import NonParametricSensitivityAnalyzer
from dowhy.causal_refuters.partial_linear_sensitivity_analyzer import PartialLinearSensitivityAnalyzer
class AddUnobservedCommonCause(CausalRefuter):
"""Add an unobserved confounder for refutation.
AddUnobservedCommonCause class supports three methods:
1) Simulation of an unobserved confounder
2) Linear partial R2 : Sensitivity Analysis for linear models.
        3) Non-Parametric partial R2 based : Sensitivity Analysis for non-parametric models.
Supports additional parameters that can be specified in the refute_estimate() method.
"""
def __init__(self, *args, **kwargs):
"""
Initialize the parameters required for the refuter.
        For direct-simulation, if effect_strength_on_treatment or effect_strength_on_outcome is not
given, it is calculated automatically as a range between the
minimum and maximum effect strength of observed confounders on treatment
and outcome respectively.
:param simulation_method: The method to use for simulating effect of unobserved confounder. Possible values are ["direct-simulation", "linear-partial-R2", "non-parametric-partial-R2", "e-value"].
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param effect_strength_on_treatment: float, numpy.ndarray: [Used when simulation_method="direct-simulation"] Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param effect_strength_on_outcome: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
        :param partial_r2_confounder_treatment: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the Riesz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of the Riesz representer, alpha^2, based on observed confounders and that based on all confounders.
:param partial_r2_confounder_outcome: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
        :param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome in order to reduce the estimate by 100% i.e bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = False). (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
        :param significance_level: confidence interval for statistical inference (default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param num_splits: number of splits for cross validation. (default = 5). (relevant only for non-parametric-partial-R2 simulation method)
        :param shuffle_data: whether to shuffle the data before splitting into folds (default = False). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_random_seed: seed for randomly shuffling data. (relevant only for non-parametric-partial-R2 simulation method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
"""
super().__init__(*args, **kwargs)
self.simulation_method = kwargs["simulation_method"] if "simulation_method" in kwargs else "direct-simulation"
self.effect_on_t = (
kwargs["confounders_effect_on_treatment"] if "confounders_effect_on_treatment" in kwargs else "binary_flip"
)
self.effect_on_y = (
kwargs["confounders_effect_on_outcome"] if "confounders_effect_on_outcome" in kwargs else "linear"
)
if self.simulation_method == "direct-simulation":
self.kappa_t = kwargs["effect_strength_on_treatment"] if "effect_strength_on_treatment" in kwargs else None
self.kappa_y = kwargs["effect_strength_on_outcome"] if "effect_strength_on_outcome" in kwargs else None
elif self.simulation_method in ["linear-partial-R2", "non-parametric-partial-R2"]:
self.kappa_t = (
kwargs["partial_r2_confounder_treatment"] if "partial_r2_confounder_treatment" in kwargs else None
)
self.kappa_y = (
kwargs["partial_r2_confounder_outcome"] if "partial_r2_confounder_outcome" in kwargs else None
)
elif self.simulation_method == "e-value":
pass
else:
raise ValueError(
"simulation method is not supported. Try direct-simulation, linear-partial-R2, non-parametric-partial-R2, or e-value"
)
self.frac_strength_treatment = (
kwargs["effect_fraction_on_treatment"] if "effect_fraction_on_treatment" in kwargs else 1
)
self.frac_strength_outcome = (
kwargs["effect_fraction_on_outcome"] if "effect_fraction_on_outcome" in kwargs else 1
)
self.plotmethod = kwargs["plotmethod"] if "plotmethod" in kwargs else "colormesh"
self.percent_change_estimate = kwargs["percent_change_estimate"] if "percent_change_estimate" in kwargs else 1.0
self.significance_level = kwargs["significance_level"] if "significance_level" in kwargs else 0.05
self.confounder_increases_estimate = (
kwargs["confounder_increases_estimate"] if "confounder_increases_estimate" in kwargs else False
)
self.benchmark_common_causes = (
kwargs["benchmark_common_causes"] if "benchmark_common_causes" in kwargs else None
)
self.null_hypothesis_effect = kwargs["null_hypothesis_effect"] if "null_hypothesis_effect" in kwargs else 0
self.plot_estimate = kwargs["plot_estimate"] if "plot_estimate" in kwargs else True
self.num_splits = kwargs["num_splits"] if "num_splits" in kwargs else 5
self.shuffle_data = kwargs["shuffle_data"] if "shuffle_data" in kwargs else False
self.shuffle_random_seed = kwargs["shuffle_random_seed"] if "shuffle_random_seed" in kwargs else None
self.alpha_s_estimator_param_list = (
kwargs["alpha_s_estimator_param_list"] if "alpha_s_estimator_param_list" in kwargs else None
)
self.alpha_s_estimator_list = kwargs["alpha_s_estimator_list"] if "alpha_s_estimator_list" in kwargs else None
self.g_s_estimator_list = kwargs["g_s_estimator_list"] if "g_s_estimator_list" in kwargs else None
self.g_s_estimator_param_list = (
kwargs["g_s_estimator_param_list"] if "g_s_estimator_param_list" in kwargs else None
)
self.plugin_reisz = kwargs["plugin_reisz"] if "plugin_reisz" in kwargs else False
self.logger = logging.getLogger(__name__)
def infer_default_kappa_t(self, len_kappa_t=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_t."
+ " Provide a value for kappa_t"
)
t = self._data[self._treatment_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_t == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
tmodel = LogisticRegression().fit(observed_common_causes, t)
tpred = tmodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
tcap = tmodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(tcap - tpred)) / tpred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_t == "linear":
# Estimating the regression coefficient from standardized features to t
corrcoef_var_t = np.corrcoef(observed_common_causes, t, rowvar=False)[-1, :-1]
std_dev_t = np.std(t)[0]
max_coeff = max(corrcoef_var_t) * std_dev_t
min_coeff = min(corrcoef_var_t) * std_dev_t
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_treatment)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_t
self.logger.info("(Min, Max) kappa_t for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _compute_min_max_coeff(self, min_coeff, max_coeff, effect_strength_fraction):
max_coeff = effect_strength_fraction * max_coeff
min_coeff = effect_strength_fraction * min_coeff
return min_coeff, max_coeff
def infer_default_kappa_y(self, len_kappa_y=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_y."
+ " Provide a value for kappa_y"
)
y = self._data[self._outcome_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_y == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
ymodel = LogisticRegression().fit(observed_common_causes, y)
ypred = ymodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
ycap = ymodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(ycap - ypred)) / ypred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_y == "linear":
corrcoef_var_y = np.corrcoef(observed_common_causes, y, rowvar=False)[-1, :-1]
std_dev_y = np.std(y)[0]
max_coeff = max(corrcoef_var_y) * std_dev_y
min_coeff = min(corrcoef_var_y) * std_dev_y
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_outcome)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_y
self.logger.info("(Min, Max) kappa_y for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def refute_estimate(self, show_progress_bar=False):
"""
        This function attempts to add an unobserved common cause to the outcome and the treatment. At present, the behavior is implemented for one-dimensional continuous
        and binary variables. The function accepts either single-valued inputs or a range of inputs for the effect strengths; the data type of the input then determines how
        the refutation is run and plotted.
        :return: CausalRefutation: An object that contains the estimated effect, the new effect after adding the unobserved common cause, and the name of the refutation used. For the partial-R2 and e-value simulation methods, the corresponding sensitivity analyzer object is returned instead.
"""
if self.simulation_method == "linear-partial-R2":
if not (isinstance(self._estimate.estimator, LinearRegressionEstimator)):
raise NotImplementedError(
"Currently only LinearRegressionEstimator is supported for Sensitivity Analysis"
)
if len(self._estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
if self.frac_strength_outcome == 1:
self.frac_strength_outcome = self.frac_strength_treatment
analyzer = LinearSensitivityAnalyzer(
estimator=self._estimate.estimator,
data=self._data,
treatment_name=self._treatment_name,
percent_change_estimate=self.percent_change_estimate,
significance_level=self.significance_level,
benchmark_common_causes=self.benchmark_common_causes,
null_hypothesis_effect=self.null_hypothesis_effect,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
common_causes_order=self._estimate.estimator._observed_common_causes.columns,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.simulation_method == "non-parametric-partial-R2":
# If the estimator used is LinearDML, partially linear sensitivity analysis will be automatically chosen
if isinstance(self._estimate.estimator, dowhy.causal_estimators.econml.Econml):
if self._estimate.estimator._econml_methodname == "econml.dml.LinearDML":
analyzer = PartialLinearSensitivityAnalyzer(
estimator=self._estimate._estimator_object,
observed_common_causes=self._estimate.estimator._observed_common_causes,
treatment=self._estimate.estimator._treatment,
outcome=self._estimate.estimator._outcome,
alpha_s_estimator_param_list=self.alpha_s_estimator_param_list,
g_s_estimator_list=self.g_s_estimator_list,
g_s_estimator_param_list=self.g_s_estimator_param_list,
effect_strength_treatment=self.kappa_t,
effect_strength_outcome=self.kappa_y,
benchmark_common_causes=self.benchmark_common_causes,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
analyzer = NonParametricSensitivityAnalyzer(
estimator=self._estimate.estimator,
observed_common_causes=self._estimate.estimator._observed_common_causes,
treatment=self._estimate.estimator._treatment,
outcome=self._estimate.estimator._outcome,
alpha_s_estimator_list=self.alpha_s_estimator_list,
alpha_s_estimator_param_list=self.alpha_s_estimator_param_list,
g_s_estimator_list=self.g_s_estimator_list,
g_s_estimator_param_list=self.g_s_estimator_param_list,
effect_strength_treatment=self.kappa_t,
effect_strength_outcome=self.kappa_y,
benchmark_common_causes=self.benchmark_common_causes,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
theta_s=self._estimate.value,
plugin_reisz=self.plugin_reisz,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.simulation_method == "e-value":
if not isinstance(self._estimate.estimator, RegressionEstimator):
raise NotImplementedError(
"E-Value sensitivity analysis is currently only implemented RegressionEstimator."
)
if len(self._estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
analyzer = EValueSensitivityAnalyzer(
estimate=self._estimate,
estimand=self._target_estimand,
data=self._data,
treatment_name=self._treatment_name[0],
outcome_name=self._outcome_name[0],
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.kappa_t is None:
self.kappa_t = self.infer_default_kappa_t()
if self.kappa_y is None:
self.kappa_y = self.infer_default_kappa_y()
if not isinstance(self.kappa_t, (list, np.ndarray)) and not isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with single value inputs
new_data = copy.deepcopy(self._data)
new_data = self.include_confounders_effect(new_data, self.kappa_t, self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(new_data, self._target_estimand, self._estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
refute.new_effect_array = np.array(new_effect.value)
refute.new_effect = new_effect.value
refute.add_refuter(self)
return refute
else: # Deal with multiple value inputs
if isinstance(self.kappa_t, (list, np.ndarray)) and isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with range inputs
# Get a 2D matrix of values
# x,y = np.meshgrid(self.kappa_t, self.kappa_y) # x,y are both MxN
results_matrix = np.random.rand(
len(self.kappa_t), len(self.kappa_y)
) # Matrix to hold all the results of NxM
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
for j in range(len(self.kappa_y)):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y[j])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value,
new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause",
)
results_matrix[i][j] = refute.new_effect # Populate the results
refute.new_effect_array = results_matrix
refute.new_effect = (np.min(results_matrix), np.max(results_matrix))
# Store the values into the refute object
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
oe = self._estimate.value
contour_levels = [oe / 4.0, oe / 2.0, (3.0 / 4) * oe, oe]
contour_levels.extend([0, np.min(results_matrix), np.max(results_matrix)])
if self.plotmethod == "contour":
cp = plt.contourf(self.kappa_y, self.kappa_t, results_matrix, levels=sorted(contour_levels))
# Adding a label on the contour line for the original estimate
fmt = {}
trueeffect_index = np.where(cp.levels == oe)[0][0]
fmt[cp.levels[trueeffect_index]] = "Estimated Effect"
# Label every other level using strings
plt.clabel(cp, [cp.levels[trueeffect_index]], inline=True, fmt=fmt)
plt.colorbar(cp)
elif self.plotmethod == "colormesh":
cp = plt.pcolormesh(self.kappa_y, self.kappa_t, results_matrix, shading="nearest")
plt.colorbar(cp, ticks=contour_levels)
ax.yaxis.set_ticks(self.kappa_t)
ax.xaxis.set_ticks(self.kappa_y)
plt.xticks(rotation=45)
ax.set_title("Effect of Unobserved Common Cause")
ax.set_ylabel("Value of Linear Constant on Treatment")
ax.set_xlabel("Value of Linear Constant on Outcome")
plt.show()
return refute
elif isinstance(self.kappa_t, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_t))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_t, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Treatment")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
elif isinstance(self.kappa_y, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_y))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_y)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t, self.kappa_y[i])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_y, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Outcome")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
def include_confounders_effect(self, new_data, kappa_t, kappa_y):
"""
This function deals with the change in the value of the data due to the effect of the unobserved confounder.
In the case of a binary flip, we flip only if the random number is greater than the threshold set.
In the case of a linear effect, we use the variable as the linear regression constant.
:param new_data: pandas.DataFrame: The data to be changed due to the effects of the unobserved confounder.
:param kappa_t: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:param kappa_y: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:return: pandas.DataFrame: The DataFrame that includes the effects of the unobserved confounder.
"""
num_rows = self._data.shape[0]
stdnorm = scipy.stats.norm()
w_random = stdnorm.rvs(num_rows)
if self.effect_on_t == "binary_flip":
alpha = 2 * kappa_t - 1 if kappa_t >= 0.5 else 1 - 2 * kappa_t
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_t >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._treatment_name] = (
1 - new_data.loc[rel_interval <= w_random, self._treatment_name]
)
for tname in self._treatment_name:
if pd.api.types.is_bool_dtype(self._data[tname]):
new_data = new_data.astype({tname: "bool"}, copy=False)
elif self.effect_on_t == "linear":
confounder_t_effect = kappa_t * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._treatment_name] = new_data[self._treatment_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_t_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
if self.effect_on_y == "binary_flip":
alpha = 2 * kappa_y - 1 if kappa_y >= 0.5 else 1 - 2 * kappa_y
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_y >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._outcome_name] = (
1 - new_data.loc[rel_interval <= w_random, self._outcome_name]
)
for yname in self._outcome_name:
if pd.api.types.is_bool_dtype(self._data[yname]):
new_data = new_data.astype({yname: "bool"}, copy=False)
elif self.effect_on_y == "linear":
confounder_y_effect = (-1) * kappa_y * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._outcome_name] = new_data[self._outcome_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_y_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
return new_data
def include_simulated_confounder(self, convergence_threshold=0.1, c_star_max=1000):
"""
This function simulates an unobserved confounder based on the data using the following steps:
1. It calculates the "residuals" from the treatment and outcome model
i.) The outcome model has outcome as the dependent variable and all the observed variables including treatment as independent variables
ii.) The treatment model has treatment as the dependent variable and all the observed variables as independent variables.
2. U is an intermediate random variable drawn from the normal distribution with the weighted average of residuals as mean and a unit variance
U ~ N(c1*d_y + c2*d_t, 1)
where
*d_y and d_t are residuals from the treatment and outcome model
*c1 and c2 are coefficients to the residuals
3. The final U, which is the simulated unobserved confounder is obtained by debiasing the intermediate variable U by residualising it with X
Choosing the coefficients c1 and c2:
The coefficients are chosen based on these basic assumptions:
1. There is a hyperbolic relationship satisfying c1*c2 = c_star
2. c_star is chosen from a range of possible values based on the correlation of the obtained simulated variable with outcome and treatment.
3. The product of correlations with treatment and outcome should be at a minimum distance to the maximum correlations with treatment and outcome in any of the observed confounders
4. The ratio of the weights should be such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
:param c_star_max: The maximum possible value for the hyperbolic curve on which the coefficients to the residuals lie. It defaults to 1000 in the code if not specified by the user.
        :type c_star_max: int
        :param convergence_threshold: The threshold to check the plateauing of the correlation while selecting a c_star. It defaults to 0.1 in the code if not specified by the user
        :type convergence_threshold: float
        :returns: The simulated values of the unobserved confounder based on the data
        :rtype: pandas.core.series.Series
"""
# Obtaining the list of observed variables
required_variables = True
observed_variables = self.choose_variables(required_variables)
observed_variables_with_treatment_and_outcome = observed_variables + self._treatment_name + self._outcome_name
# Taking a subset of the dataframe that has only observed variables
self._data = self._data[observed_variables_with_treatment_and_outcome]
# Residuals from the outcome model obtained by fitting a linear model
y = self._data[self._outcome_name[0]]
observed_variables_with_treatment = observed_variables + self._treatment_name
X = self._data[observed_variables_with_treatment]
model = sm.OLS(y, X.astype("float"))
results = model.fit()
residuals_y = y - results.fittedvalues
d_y = list(pd.Series(residuals_y))
# Residuals from the treatment model obtained by fitting a linear model
t = self._data[self._treatment_name[0]].astype("int64")
X = self._data[observed_variables]
model = sm.OLS(t, X)
results = model.fit()
residuals_t = t - results.fittedvalues
d_t = list(pd.Series(residuals_t))
        # Initialising product_cor_metric_observed with a very low value, since we are searching for the maximum
product_cor_metric_observed = -10000000000
for i in observed_variables:
current_obs_confounder = self._data[i]
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_obs_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_obs_confounder.corr(treatment_values)
product_cor_metric_current = correlation_y * correlation_t
if product_cor_metric_current >= product_cor_metric_observed:
product_cor_metric_observed = product_cor_metric_current
correlation_t_observed = correlation_t
correlation_y_observed = correlation_y
        # The user has an option to give the effect_strength_on_y and effect_strength_on_t, which can then be used instead of the maximum correlation with treatment and outcome in the observed variables, as it specifies the desired effect.
if self.kappa_t is not None:
correlation_t_observed = self.kappa_t
if self.kappa_y is not None:
correlation_y_observed = self.kappa_y
# Choosing a c_star based on the data.
        # The correlations stop increasing once c_star grows past a certain value, that is, they plateau, and we choose c_star to be the value at which they plateau.
correlation_y_list = []
correlation_t_list = []
product_cor_metric_simulated_list = []
x_list = []
step = int(c_star_max / 10)
for i in range(0, int(c_star_max), step):
c1 = math.sqrt(i)
c2 = c1
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
correlation_y_list.append(correlation_y)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
correlation_t_list.append(correlation_t)
product_cor_metric_simulated = correlation_y * correlation_t
product_cor_metric_simulated_list.append(product_cor_metric_simulated)
x_list.append(i)
index = 1
while index < len(correlation_y_list):
if (correlation_y_list[index] - correlation_y_list[index - 1]) <= convergence_threshold:
c_star = x_list[index]
break
index = index + 1
        # Once c_star is chosen, c1 and c2 are selected on the hyperbolic relationship by going over various combinations of c1 and c2 values and choosing the combination
        # that keeps the product of correlations of the simulated variable at a minimum distance from the product of maximum correlations of one of the observed variables,
        # while additionally checking that the ratio of the weights maintains the ratio of the maximum possible observed coefficients within some confidence interval
# c1_final and c2_final are initialised to the values on the hyperbolic curve such that c1_final = c2_final and c1_final*c2_final = c_star
c1_final = math.sqrt(c_star)
c2_final = math.sqrt(c_star)
# initialising min_distance_between_product_cor_metrics to be a value greater than 1
min_distance_between_product_cor_metrics = 1.5
i = 0.05
threshold = c_star / 0.05
while i <= threshold:
c2 = i
c1 = c_star / c2
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
product_cor_metric_simulated = correlation_y * correlation_t
if min_distance_between_product_cor_metrics >= abs(
product_cor_metric_simulated - product_cor_metric_observed
):
min_distance_between_product_cor_metrics = abs(
product_cor_metric_simulated - product_cor_metric_observed
)
additional_condition = correlation_y_observed / correlation_t_observed
if ((c1 / c2) <= (additional_condition + 0.3 * additional_condition)) and (
(c1 / c2) >= (additional_condition - 0.3 * additional_condition)
): # choose minimum positive value
c1_final = c1
c2_final = c2
i = i * 1.5
"""#closed form solution
print("c_star_max before closed form", c_star_max)
if max_correlation_with_t == -1000:
c2 = 0
c1 = c_star_max
else:
additional_condition = abs(max_correlation_with_y/max_correlation_with_t)
print("additional_condition", additional_condition)
c2 = math.sqrt(c_star_max/additional_condition)
c1 = c_star_max/c2"""
final_U = self.generate_confounder_from_residuals(c1_final, c2_final, d_y, d_t, X)
return final_U
def generate_confounder_from_residuals(self, c1, c2, d_y, d_t, X):
"""
        This function takes the residuals from the treatment and outcome models and their coefficients and simulates the intermediate random variable U by drawing,
        for each row, from a normal distribution centred at the weighted residuals, and then debiases the intermediate variable to obtain the final variable.
:param c1: coefficient to the residual from the outcome model
        :type c1: float
        :param c2: coefficient to the residual from the treatment model
        :type c2: float
        :param d_y: residuals from the outcome model
        :type d_y: list
        :param d_t: residuals from the treatment model
        :type d_t: list
        :returns: The simulated values of the unobserved confounder based on the data
        :rtype: pandas.core.series.Series
"""
U = []
for j in range(len(d_t)):
simulated_variable_mean = c1 * d_y[j] + c2 * d_t[j]
simulated_variable_stddev = 1
U.append(np.random.normal(simulated_variable_mean, simulated_variable_stddev, 1))
U = np.array(U)
model = sm.OLS(U, X)
results = model.fit()
U = U.reshape(
-1,
)
final_U = U - results.fittedvalues.values
        final_U = pd.Series(final_U)  # wrap the debiased values computed above (not the raw draws) as a Series
return final_U
| import copy
import logging
import math
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import scipy.stats
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from tqdm.auto import tqdm
import dowhy.causal_estimators.econml
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_estimators.regression_estimator import RegressionEstimator
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.causal_refuter import CausalRefutation, CausalRefuter, choose_variables
from dowhy.causal_refuters.evalue_sensitivity_analyzer import EValueSensitivityAnalyzer
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer
from dowhy.causal_refuters.non_parametric_sensitivity_analyzer import NonParametricSensitivityAnalyzer
from dowhy.causal_refuters.partial_linear_sensitivity_analyzer import PartialLinearSensitivityAnalyzer
logger = logging.getLogger(__name__)
class AddUnobservedCommonCause(CausalRefuter):
"""Add an unobserved confounder for refutation.
AddUnobservedCommonCause class supports three methods:
1) Simulation of an unobserved confounder
2) Linear partial R2 : Sensitivity Analysis for linear models.
        3) Non-Parametric partial R2 based : Sensitivity Analysis for non-parametric models.
Supports additional parameters that can be specified in the refute_estimate() method.
"""
def __init__(self, *args, **kwargs):
"""
Initialize the parameters required for the refuter.
        For direct-simulation, if effect_strength_on_treatment or effect_strength_on_outcome is not
given, it is calculated automatically as a range between the
minimum and maximum effect strength of observed confounders on treatment
and outcome respectively.
:param simulation_method: The method to use for simulating effect of unobserved confounder. Possible values are ["direct-simulation", "linear-partial-R2", "non-parametric-partial-R2", "e-value"].
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param effect_strength_on_treatment: float, numpy.ndarray: [Used when simulation_method="direct-simulation"] Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param effect_strength_on_outcome: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
        :param partial_r2_confounder_treatment: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the Riesz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of the Riesz representer, alpha^2, based on observed confounders and that based on all confounders.
:param partial_r2_confounder_outcome: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
        :param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome in order to reduce the estimate by 100% i.e bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = False). (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
        :param significance_level: confidence interval for statistical inference (default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param num_splits: number of splits for cross validation. (default = 5). (relevant only for non-parametric-partial-R2 simulation method)
        :param shuffle_data: whether to shuffle the data before splitting into folds (default = False). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_random_seed: seed for randomly shuffling data. (relevant only for non-parametric-partial-R2 simulation method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
"""
super().__init__(*args, **kwargs)
self.simulation_method = kwargs["simulation_method"] if "simulation_method" in kwargs else "direct-simulation"
self.effect_on_t = (
kwargs["confounders_effect_on_treatment"] if "confounders_effect_on_treatment" in kwargs else "binary_flip"
)
self.effect_on_y = (
kwargs["confounders_effect_on_outcome"] if "confounders_effect_on_outcome" in kwargs else "linear"
)
if self.simulation_method == "direct-simulation":
self.kappa_t = kwargs["effect_strength_on_treatment"] if "effect_strength_on_treatment" in kwargs else None
self.kappa_y = kwargs["effect_strength_on_outcome"] if "effect_strength_on_outcome" in kwargs else None
elif self.simulation_method in ["linear-partial-R2", "non-parametric-partial-R2"]:
self.kappa_t = (
kwargs["partial_r2_confounder_treatment"] if "partial_r2_confounder_treatment" in kwargs else None
)
self.kappa_y = (
kwargs["partial_r2_confounder_outcome"] if "partial_r2_confounder_outcome" in kwargs else None
)
elif self.simulation_method == "e-value":
pass
else:
raise ValueError(
"simulation method is not supported. Try direct-simulation, linear-partial-R2, non-parametric-partial-R2, or e-value"
)
self.frac_strength_treatment = (
kwargs["effect_fraction_on_treatment"] if "effect_fraction_on_treatment" in kwargs else 1
)
self.frac_strength_outcome = (
kwargs["effect_fraction_on_outcome"] if "effect_fraction_on_outcome" in kwargs else 1
)
self.plotmethod = kwargs["plotmethod"] if "plotmethod" in kwargs else "colormesh"
self.percent_change_estimate = kwargs["percent_change_estimate"] if "percent_change_estimate" in kwargs else 1.0
self.significance_level = kwargs["significance_level"] if "significance_level" in kwargs else 0.05
self.confounder_increases_estimate = (
kwargs["confounder_increases_estimate"] if "confounder_increases_estimate" in kwargs else False
)
self.benchmark_common_causes = (
kwargs["benchmark_common_causes"] if "benchmark_common_causes" in kwargs else None
)
self.null_hypothesis_effect = kwargs["null_hypothesis_effect"] if "null_hypothesis_effect" in kwargs else 0
self.plot_estimate = kwargs["plot_estimate"] if "plot_estimate" in kwargs else True
self.num_splits = kwargs["num_splits"] if "num_splits" in kwargs else 5
self.shuffle_data = kwargs["shuffle_data"] if "shuffle_data" in kwargs else False
self.shuffle_random_seed = kwargs["shuffle_random_seed"] if "shuffle_random_seed" in kwargs else None
self.alpha_s_estimator_param_list = (
kwargs["alpha_s_estimator_param_list"] if "alpha_s_estimator_param_list" in kwargs else None
)
self.alpha_s_estimator_list = kwargs["alpha_s_estimator_list"] if "alpha_s_estimator_list" in kwargs else None
self.g_s_estimator_list = kwargs["g_s_estimator_list"] if "g_s_estimator_list" in kwargs else None
self.g_s_estimator_param_list = (
kwargs["g_s_estimator_param_list"] if "g_s_estimator_param_list" in kwargs else None
)
self.plugin_reisz = kwargs["plugin_reisz"] if "plugin_reisz" in kwargs else False
self.logger = logging.getLogger(__name__)
def refute_estimate(self, show_progress_bar=False):
if self.simulation_method == "linear-partial-R2":
return sensitivity_linear_partial_r2(
self._data,
self._estimate,
self._treatment_name,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.percent_change_estimate,
self.benchmark_common_causes,
self.significance_level,
self.null_hypothesis_effect,
self.plot_estimate,
)
elif self.simulation_method == "non-parametric-partial-R2":
return sensitivity_non_parametric_partial_r2(
self._estimate,
self.kappa_t,
self.kappa_y,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.benchmark_common_causes,
self.plot_estimate,
self.alpha_s_estimator_list,
self.alpha_s_estimator_param_list,
self.g_s_estimator_list,
self.g_s_estimator_param_list,
self.plugin_reisz,
)
elif self.simulation_method == "e-value":
return sensitivity_e_value(
self._data,
self._target_estimand,
self._estimate,
self._treatment_name,
self._outcome_name,
self.plot_estimate,
)
elif self.simulation_method == "direct-simulation":
refute = sensitivity_simulation(
self._data,
self._target_estimand,
self._estimate,
self._treatment_name,
self._outcome_name,
self.kappa_t,
self.kappa_y,
self.effect_on_t,
self.effect_on_y,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.plotmethod,
show_progress_bar,
)
refute.add_refuter(self)
return refute
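# Note: the dispatch above delegates to module-level helpers (sensitivity_linear_partial_r2,
# sensitivity_non_parametric_partial_r2, sensitivity_e_value, sensitivity_simulation) defined later
# in this module, so the same refutations can also be run functionally. A rough sketch of a direct
# call, mirroring the positional argument order used in refute_estimate above (argument values are
# placeholders):
#   refute = sensitivity_simulation(data, target_estimand, estimate, treatment_name, outcome_name,
#                                   kappa_t, kappa_y, "binary_flip", "linear")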
def _infer_default_kappa_t(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
treatment_name: List[str],
effect_on_t: str,
frac_strength_treatment: float,
len_kappa_t: int = 10,
):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_t."
+ " Provide a value for kappa_t"
)
t = data[treatment_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if effect_on_t == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
tmodel = LogisticRegression().fit(observed_common_causes, t)
tpred = tmodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
tcap = tmodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(tcap - tpred)) / tpred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif effect_on_t == "linear":
# Estimating the regression coefficient from standardized features to t
corrcoef_var_t = np.corrcoef(observed_common_causes, t, rowvar=False)[-1, :-1]
std_dev_t = np.std(t)[0]
max_coeff = max(corrcoef_var_t) * std_dev_t
min_coeff = min(corrcoef_var_t) * std_dev_t
else:
raise NotImplementedError("'" + effect_on_t + "' method not supported for confounders' effect on treatment")
min_coeff, max_coeff = _compute_min_max_coeff(min_coeff, max_coeff, frac_strength_treatment)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_t
logger.info("(Min, Max) kappa_t for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _compute_min_max_coeff(min_coeff: float, max_coeff: float, effect_strength_fraction: np.ndarray):
max_coeff = effect_strength_fraction * max_coeff
min_coeff = effect_strength_fraction * min_coeff
return min_coeff, max_coeff
def _infer_default_kappa_y(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
outcome_name: List[str],
effect_on_y: str,
frac_strength_outcome: float,
len_kappa_y: int = 10,
):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_y."
+ " Provide a value for kappa_y"
)
y = data[outcome_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if effect_on_y == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
ymodel = LogisticRegression().fit(observed_common_causes, y)
ypred = ymodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
ycap = ymodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(ycap - ypred)) / ypred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif effect_on_y == "linear":
corrcoef_var_y = np.corrcoef(observed_common_causes, y, rowvar=False)[-1, :-1]
std_dev_y = np.std(y)[0]
max_coeff = max(corrcoef_var_y) * std_dev_y
min_coeff = min(corrcoef_var_y) * std_dev_y
else:
raise NotImplementedError("'" + effect_on_y + "' method not supported for confounders' effect on outcome")
min_coeff, max_coeff = _compute_min_max_coeff(min_coeff, max_coeff, frac_strength_outcome)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_y
logger.info("(Min, Max) kappa_y for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _include_confounders_effect(
data: pd.DataFrame,
new_data: pd.DataFrame,
effect_on_t: str,
treatment_name: str,
kappa_t: float,
effect_on_y: str,
outcome_name: str,
kappa_y: float,
):
"""
This function deals with the change in the value of the data due to the effect of the unobserved confounder.
In the case of a binary flip, we flip only if the random number is greater than the threshold set.
In the case of a linear effect, we use the variable as the linear regression constant.
:param new_data: pandas.DataFrame: The data to be changed due to the effects of the unobserved confounder.
:param kappa_t: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:param kappa_y: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:return: pandas.DataFrame: The DataFrame that includes the effects of the unobserved confounder.
"""
num_rows = data.shape[0]
stdnorm = scipy.stats.norm()
w_random = stdnorm.rvs(num_rows)
if effect_on_t == "binary_flip":
alpha = 2 * kappa_t - 1 if kappa_t >= 0.5 else 1 - 2 * kappa_t
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_t >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, treatment_name] = (
1 - new_data.loc[rel_interval <= w_random, treatment_name]
)
for tname in treatment_name:
if pd.api.types.is_bool_dtype(data[tname]):
new_data = new_data.astype({tname: "bool"}, copy=False)
elif effect_on_t == "linear":
confounder_t_effect = kappa_t * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[treatment_name] = new_data[treatment_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_t_effect
)
else:
raise NotImplementedError("'" + effect_on_t + "' method not supported for confounders' effect on treatment")
if effect_on_y == "binary_flip":
alpha = 2 * kappa_y - 1 if kappa_y >= 0.5 else 1 - 2 * kappa_y
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_y >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, outcome_name] = 1 - new_data.loc[rel_interval <= w_random, outcome_name]
for yname in outcome_name:
if pd.api.types.is_bool_dtype(data[yname]):
new_data = new_data.astype({yname: "bool"}, copy=False)
elif effect_on_y == "linear":
confounder_y_effect = (-1) * kappa_y * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[outcome_name] = new_data[outcome_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_y_effect
)
else:
raise NotImplementedError("'" + effect_on_y + "' method not supported for confounders' effect on outcome")
return new_data
def include_simulated_confounder(
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
kappa_t: float,
kappa_y: float,
convergence_threshold: float = 0.1,
c_star_max: int = 1000,
):
"""
This function simulates an unobserved confounder based on the data using the following steps:
1. It calculates the "residuals" from the treatment and outcome model
i.) The outcome model has outcome as the dependent variable and all the observed variables including treatment as independent variables
ii.) The treatment model has treatment as the dependent variable and all the observed variables as independent variables.
2. U is an intermediate random variable drawn from the normal distribution with the weighted average of residuals as mean and a unit variance
U ~ N(c1*d_y + c2*d_t, 1)
where
*d_y and d_t are residuals from the treatment and outcome model
*c1 and c2 are coefficients to the residuals
3. The final U, which is the simulated unobserved confounder is obtained by debiasing the intermediate variable U by residualising it with X
Choosing the coefficients c1 and c2:
The coefficients are chosen based on these basic assumptions:
1. There is a hyperbolic relationship satisfying c1*c2 = c_star
2. c_star is chosen from a range of possible values based on the correlation of the obtained simulated variable with outcome and treatment.
3. The product of correlations with treatment and outcome should be at a minimum distance to the maximum correlations with treatment and outcome in any of the observed confounders
4. The ratio of the weights should be such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
:param c_star_max: The maximum possible value for the hyperbolic curve on which the coefficients to the residuals lie. It defaults to 1000 in the code if not specified by the user.
:type c_star_max: int
:param convergence_threshold: The threshold to check the plateauing of the correlation while selecting a c_star. It defaults to 0.1 in the code if not specified by the user.
:type convergence_threshold: float
:returns: The simulated values of the unobserved confounder based on the data
:rtype: pandas.core.series.Series
"""
# Obtaining the list of observed variables
required_variables = True
observed_variables = choose_variables(required_variables)
observed_variables_with_treatment_and_outcome = observed_variables + treatment_name + outcome_name
# Taking a subset of the dataframe that has only observed variables
data = data[observed_variables_with_treatment_and_outcome]
# Residuals from the outcome model obtained by fitting a linear model
y = data[outcome_name[0]]
observed_variables_with_treatment = observed_variables + treatment_name
X = data[observed_variables_with_treatment]
model = sm.OLS(y, X.astype("float"))
results = model.fit()
residuals_y = y - results.fittedvalues
d_y = list(pd.Series(residuals_y))
# Residuals from the treatment model obtained by fitting a linear model
t = data[treatment_name[0]].astype("int64")
X = data[observed_variables]
model = sm.OLS(t, X)
results = model.fit()
residuals_t = t - results.fittedvalues
d_t = list(pd.Series(residuals_t))
# Initialising product_cor_metric_observed with a really low value as finding maximum
product_cor_metric_observed = -10000000000
for i in observed_variables:
current_obs_confounder = data[i]
outcome_values = data[outcome_name[0]]
correlation_y = current_obs_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_obs_confounder.corr(treatment_values)
product_cor_metric_current = correlation_y * correlation_t
if product_cor_metric_current >= product_cor_metric_observed:
product_cor_metric_observed = product_cor_metric_current
correlation_t_observed = correlation_t
correlation_y_observed = correlation_y
# The user has an option to give the effect_strength_on_y and effect_strength_on_t, which can then be used instead of the maximum correlation with treatment and outcome in the observed variables, as it specifies the desired effect.
if kappa_t is not None:
correlation_t_observed = kappa_t
if kappa_y is not None:
correlation_y_observed = kappa_y
# Choosing a c_star based on the data.
# The correlations stop increasing upon increasing c_star after a certain value, that is, they plateau, and we choose c_star to be the value at which they plateau.
correlation_y_list = []
correlation_t_list = []
product_cor_metric_simulated_list = []
x_list = []
step = int(c_star_max / 10)
for i in range(0, int(c_star_max), step):
c1 = math.sqrt(i)
c2 = c1
final_U = _generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = data[outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
correlation_y_list.append(correlation_y)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
correlation_t_list.append(correlation_t)
product_cor_metric_simulated = correlation_y * correlation_t
product_cor_metric_simulated_list.append(product_cor_metric_simulated)
x_list.append(i)
index = 1
while index < len(correlation_y_list):
if (correlation_y_list[index] - correlation_y_list[index - 1]) <= convergence_threshold:
c_star = x_list[index]
break
index = index + 1
# Choosing c1 and c2 based on the hyperbolic relationship once c_star is chosen, by going over various combinations of c1 and c2 values and choosing the combination
# which maintains the minimum distance between the product of correlations of the simulated variable and the product of maximum correlations of one of the observed variables,
# and additionally checks if the ratio of the weights is such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
# c1_final and c2_final are initialised to the values on the hyperbolic curve such that c1_final = c2_final and c1_final*c2_final = c_star
c1_final = math.sqrt(c_star)
c2_final = math.sqrt(c_star)
# initialising min_distance_between_product_cor_metrics to be a value greater than 1
min_distance_between_product_cor_metrics = 1.5
i = 0.05
threshold = c_star / 0.05
while i <= threshold:
c2 = i
c1 = c_star / c2
final_U = _generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = data[outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
product_cor_metric_simulated = correlation_y * correlation_t
if min_distance_between_product_cor_metrics >= abs(product_cor_metric_simulated - product_cor_metric_observed):
min_distance_between_product_cor_metrics = abs(product_cor_metric_simulated - product_cor_metric_observed)
additional_condition = correlation_y_observed / correlation_t_observed
if ((c1 / c2) <= (additional_condition + 0.3 * additional_condition)) and (
(c1 / c2) >= (additional_condition - 0.3 * additional_condition)
): # choose minimum positive value
c1_final = c1
c2_final = c2
i = i * 1.5
"""#closed form solution
print("c_star_max before closed form", c_star_max)
if max_correlation_with_t == -1000:
c2 = 0
c1 = c_star_max
else:
additional_condition = abs(max_correlation_with_y/max_correlation_with_t)
print("additional_condition", additional_condition)
c2 = math.sqrt(c_star_max/additional_condition)
c1 = c_star_max/c2"""
final_U = _generate_confounder_from_residuals(c1_final, c2_final, d_y, d_t, X)
return final_U
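# ---------------------------------------------------------------------------
# Illustrative usage sketch (hypothetical column names; defined but never
# called, so it has no effect on import). It shows how the simulated
# confounder returned above is typically appended to the data so that the
# effect can be re-estimated with the confounder treated as observed.
# ---------------------------------------------------------------------------
def _example_include_simulated_confounder(df: pd.DataFrame) -> pd.DataFrame:
    """Sketch: simulate an unobserved confounder for `df` and append it as a column."""
    u = include_simulated_confounder(
        data=df,
        treatment_name=["treatment"],  # hypothetical; the function indexes these as lists
        outcome_name=["outcome"],  # hypothetical
        kappa_t=None,  # fall back to the strongest observed correlation with treatment
        kappa_y=None,  # fall back to the strongest observed correlation with outcome
        convergence_threshold=0.1,
        c_star_max=1000,
    )
    return df.assign(simulated_confounder=u.values)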
def _generate_confounder_from_residuals(c1, c2, d_y, d_t, X):
"""
This function takes the residuals from the treatment and outcome model and their coefficients and simulates the intermediate random variable U by taking
the row wise normal distribution corresponding to each residual value and then debiasing the intermediate variable to get the final variable.
:param c1: coefficient to the residual from the outcome model
:type c1: float
:param c2: coefficient to the residual from the treatment model
:type c2: float
:param d_y: residuals from the outcome model
:type d_y: list
:param d_t: residuals from the treatment model
:type d_t: list
:param X: the observed covariates, used to residualize (debias) the intermediate variable U
:type X: pandas.DataFrame
:returns: The simulated values of the unobserved confounder based on the data
:rtype: pandas.core.series.Series
"""
U = []
for j in range(len(d_t)):
simulated_variable_mean = c1 * d_y[j] + c2 * d_t[j]
simulated_variable_stddev = 1
U.append(np.random.normal(simulated_variable_mean, simulated_variable_stddev, 1))
U = np.array(U)
model = sm.OLS(U, X)
results = model.fit()
U = U.reshape(
-1,
)
# Debias the intermediate variable U by residualizing it with X
final_U = U - results.fittedvalues.values
final_U = pd.Series(final_U)
return final_U
def sensitivity_linear_partial_r2(
data: pd.DataFrame,
estimate: CausalEstimate,
treatment_name: str,
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
percent_change_estimate: float = 1.0,
benchmark_common_causes: Optional[List[str]] = None,
significance_level: Optional[float] = None,
null_hypothesis_effect: Optional[float] = None,
plot_estimate: bool = True,
) -> LinearSensitivityAnalyzer:
"""Add an unobserved confounder for refutation using Linear partial R2 methond (Sensitivity Analysis for linear models).
:param data: pd.DataFrame: Data to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param treatment_name: str: Name of the treatment
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome needed to reduce the estimate by 100%, i.e., bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param significance_level: confidence interval for statistical inference (default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
"""
if not (isinstance(estimate.estimator, LinearRegressionEstimator)):
raise NotImplementedError("Currently only LinearRegressionEstimator is supported for Sensitivity Analysis")
if len(estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
if frac_strength_outcome == 1:
frac_strength_outcome = frac_strength_treatment
analyzer = LinearSensitivityAnalyzer(
estimator=estimate.estimator,
data=data,
treatment_name=treatment_name,
percent_change_estimate=percent_change_estimate,
significance_level=significance_level,
benchmark_common_causes=benchmark_common_causes,
null_hypothesis_effect=null_hypothesis_effect,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
common_causes_order=estimate.estimator._observed_common_causes.columns,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
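# ---------------------------------------------------------------------------
# Illustrative usage sketch (hypothetical names; defined but never called).
# It assumes `estimate` was produced by a linear regression estimator, since
# the function above rejects anything other than LinearRegressionEstimator.
# ---------------------------------------------------------------------------
def _example_sensitivity_linear_partial_r2(df: pd.DataFrame, estimate: CausalEstimate) -> LinearSensitivityAnalyzer:
    return sensitivity_linear_partial_r2(
        data=df,
        estimate=estimate,
        treatment_name="treatment",  # hypothetical treatment column
        benchmark_common_causes=["W3"],  # hypothetical observed confounder used for benchmarking
        significance_level=0.05,
        frac_strength_treatment=1.0,
        frac_strength_outcome=1.0,
        percent_change_estimate=1.0,  # robustness value for driving the estimate to 0
        plot_estimate=False,
    )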
def sensitivity_non_parametric_partial_r2(
estimate: CausalEstimate,
kappa_t: Optional[Union[float, np.ndarray]] = None,
kappa_y: Optional[Union[float, np.ndarray]] = None,
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
benchmark_common_causes: Optional[List[str]] = None,
plot_estimate: bool = True,
alpha_s_estimator_list: Optional[List] = None,
alpha_s_estimator_param_list: Optional[List[Dict]] = None,
g_s_estimator_list: Optional[List] = None,
g_s_estimator_param_list: Optional[List[Dict]] = None,
plugin_reisz: bool = False,
) -> Union[PartialLinearSensitivityAnalyzer, NonParametricSensitivityAnalyzer]:
"""Add an unobserved confounder for refutation using Non-parametric partial R2 methond (Sensitivity Analysis for non-parametric models).
:param estimate: CausalEstimate: Estimate to run the refutation
:param kappa_t: float, numpy.ndarray: Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param kappa_y: float, numpy.ndarray: Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param alpha_s_estimator_list: list of estimator objects for estimating alpha_s. These objects should have fit() and predict() methods (relevant only for non-parametric-partial-R2 method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
:param plugin_reisz: bool: Flag on whether to use the plugin estimator or the nonparametric estimator for reisz representer function (alpha_s).
"""
# If the estimator used is LinearDML, partially linear sensitivity analysis will be automatically chosen
if isinstance(estimate.estimator, dowhy.causal_estimators.econml.Econml):
if estimate.estimator._econml_methodname == "econml.dml.LinearDML":
analyzer = PartialLinearSensitivityAnalyzer(
estimator=estimate._estimator_object,
observed_common_causes=estimate.estimator._observed_common_causes,
treatment=estimate.estimator._treatment,
outcome=estimate.estimator._outcome,
alpha_s_estimator_param_list=alpha_s_estimator_param_list,
g_s_estimator_list=g_s_estimator_list,
g_s_estimator_param_list=g_s_estimator_param_list,
effect_strength_treatment=kappa_t,
effect_strength_outcome=kappa_y,
benchmark_common_causes=benchmark_common_causes,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
analyzer = NonParametricSensitivityAnalyzer(
estimator=estimate.estimator,
observed_common_causes=estimate.estimator._observed_common_causes,
treatment=estimate.estimator._treatment,
outcome=estimate.estimator._outcome,
alpha_s_estimator_list=alpha_s_estimator_list,
alpha_s_estimator_param_list=alpha_s_estimator_param_list,
g_s_estimator_list=g_s_estimator_list,
g_s_estimator_param_list=g_s_estimator_param_list,
effect_strength_treatment=kappa_t,
effect_strength_outcome=kappa_y,
benchmark_common_causes=benchmark_common_causes,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
theta_s=estimate.value,
plugin_reisz=plugin_reisz,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
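# ---------------------------------------------------------------------------
# Illustrative usage sketch (hypothetical values; defined but never called).
# kappa_t / kappa_y here are partial R2 values of a hypothesized unobserved
# confounder, so plausible inputs are small fractions rather than regression
# coefficients.
# ---------------------------------------------------------------------------
def _example_sensitivity_non_parametric_partial_r2(estimate: CausalEstimate):
    return sensitivity_non_parametric_partial_r2(
        estimate=estimate,
        kappa_t=0.03,  # hypothetical partial R2 with the treatment
        kappa_y=0.03,  # hypothetical partial R2 with the outcome
        benchmark_common_causes=["W1"],  # hypothetical observed confounder
        frac_strength_treatment=1.0,
        frac_strength_outcome=1.0,
        plot_estimate=False,
    )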
def sensitivity_e_value(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_name: List[str],
outcome_name: List[str],
plot_estimate: bool = True,
) -> EValueSensitivityAnalyzer:
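"""Add an unobserved confounder for refutation using the E-value method (sensitivity analysis for regression estimators).
The E-value is the minimum strength of association, on the risk-ratio scale, that an unobserved confounder would need to have with both the treatment and the outcome to fully explain away the estimated effect.
:param data: pd.DataFrame: Data to run the refutation
:param target_estimand: IdentifiedEstimand: Identified estimand to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param treatment_name: list: Name of the treatment
:param outcome_name: list: Name of the outcome
:param plot_estimate: Generate a plot while performing sensitivity analysis. (default = True)
"""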
if not isinstance(estimate.estimator, RegressionEstimator):
raise NotImplementedError("E-Value sensitivity analysis is currently only implemented RegressionEstimator.")
if len(estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
analyzer = EValueSensitivityAnalyzer(
estimate=estimate,
estimand=target_estimand,
data=data,
treatment_name=treatment_name[0],
outcome_name=outcome_name[0],
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
def sensitivity_simulation(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_name: str,
outcome_name: str,
kappa_t: Optional[Union[float, np.ndarray]] = None,
kappa_y: Optional[Union[float, np.ndarray]] = None,
confounders_effect_on_treatment: str = "binary_flip",
confounders_effect_on_outcome: str = "linear",
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
plotmethod: Optional[str] = None,
show_progress_bar=False,
**_,
) -> CausalRefutation:
"""
This function attempts to add an unobserved common cause to the outcome and the treatment. At present, the behavior is implemented for one-dimensional continuous
and binary variables. This function can take either single-valued inputs or a range of inputs. The function then looks at the data type of the input and decides on the course of
action.
:param data: pd.DataFrame: Data to run the refutation
:param target_estimand: IdentifiedEstimand: Identified estimand to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param treatment_name: str: Name of the treatment
:param outcome_name: str: Name of the outcome
:param kappa_t: float, numpy.ndarray: Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param kappa_y: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:return: CausalRefutation: An object that contains the estimated effect and a new effect and the name of the refutation used.
"""
if kappa_t is None:
kappa_t = _infer_default_kappa_t(
data, target_estimand, treatment_name, confounders_effect_on_treatment, frac_strength_treatment
)
if kappa_y is None:
kappa_y = _infer_default_kappa_y(
data, target_estimand, outcome_name, confounders_effect_on_outcome, frac_strength_outcome
)
if not isinstance(kappa_t, (list, np.ndarray)) and not isinstance(
kappa_y, (list, np.ndarray)
): # Deal with single value inputs
new_data = copy.deepcopy(data)
new_data = _include_confounders_effect(
data,
new_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t,
confounders_effect_on_outcome,
outcome_name,
kappa_y,
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
refute.new_effect_array = np.array(new_effect.value)
refute.new_effect = new_effect.value
return refute
else: # Deal with multiple value inputs
if isinstance(kappa_t, (list, np.ndarray)) and isinstance(
kappa_y, (list, np.ndarray)
): # Deal with range inputs
# Get a 2D matrix of values
# x,y = np.meshgrid(self.kappa_t, self.kappa_y) # x,y are both MxN
results_matrix = np.random.rand(len(kappa_t), len(kappa_y)) # Matrix to hold all the results of NxM
orig_data = copy.deepcopy(data)
for i in tqdm(
range(len(kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
for j in range(len(kappa_y)):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t[i],
confounders_effect_on_outcome,
outcome_name,
kappa_y[j],
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value,
new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause",
)
results_matrix[i][j] = refute.new_effect # Populate the results
refute.new_effect_array = results_matrix
refute.new_effect = (np.min(results_matrix), np.max(results_matrix))
# Store the values into the refute object
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
oe = estimate.value
contour_levels = [oe / 4.0, oe / 2.0, (3.0 / 4) * oe, oe]
contour_levels.extend([0, np.min(results_matrix), np.max(results_matrix)])
if plotmethod == "contour":
cp = plt.contourf(kappa_y, kappa_t, results_matrix, levels=sorted(contour_levels))
# Adding a label on the contour line for the original estimate
fmt = {}
trueeffect_index = np.where(cp.levels == oe)[0][0]
fmt[cp.levels[trueeffect_index]] = "Estimated Effect"
# Label every other level using strings
plt.clabel(cp, [cp.levels[trueeffect_index]], inline=True, fmt=fmt)
plt.colorbar(cp)
elif plotmethod == "colormesh":
cp = plt.pcolormesh(kappa_y, kappa_t, results_matrix, shading="nearest")
plt.colorbar(cp, ticks=contour_levels)
ax.yaxis.set_ticks(kappa_t)
ax.xaxis.set_ticks(kappa_y)
plt.xticks(rotation=45)
ax.set_title("Effect of Unobserved Common Cause")
ax.set_ylabel("Value of Linear Constant on Treatment")
ax.set_xlabel("Value of Linear Constant on Outcome")
plt.show()
return refute
elif isinstance(kappa_t, (list, np.ndarray)):
outcomes = np.random.rand(len(kappa_t))
orig_data = copy.deepcopy(data)
for i in tqdm(
range(0, len(kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t[i],
confounders_effect_on_outcome,
outcome_name,
kappa_y,
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(kappa_t, outcomes)
plt.axhline(estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Treatment")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
elif isinstance(kappa_y, (list, np.ndarray)):
outcomes = np.random.rand(len(kappa_y))
orig_data = copy.deepcopy(data)
for i in tqdm(
range(0, len(kappa_y)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t,
confounders_effect_on_outcome,
outcome_name,
kappa_y[i],
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(kappa_y, outcomes)
plt.axhline(estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Outcome")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
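# ---------------------------------------------------------------------------
# Illustrative usage sketch (hypothetical names; defined but never called).
# With scalar kappa values the function returns a single refuted estimate;
# with array-valued kappas it re-estimates the effect over a grid and can
# plot the results via `plotmethod`.
# ---------------------------------------------------------------------------
def _example_sensitivity_simulation(df, identified_estimand, estimate):
    return sensitivity_simulation(
        data=df,
        target_estimand=identified_estimand,
        estimate=estimate,
        treatment_name=["treatment"],  # hypothetical column names, passed as lists
        outcome_name=["outcome"],
        kappa_t=0.05,  # flip probability of a binary treatment under "binary_flip"
        kappa_y=0.01,  # linear coefficient of the confounder on the outcome
        confounders_effect_on_treatment="binary_flip",
        confounders_effect_on_outcome="linear",
    )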
| andresmor-ms | 133e7b9a4ed32aae8ab5f39a01eb02b3a4d1c0ba | e1652ec3c6606b1bb2dfe91ef830e4b4b566712d | Flag on whether to use the plugin estimator or the nonparametric estimator for reisz representer function (alpha_s). | amit-sharma | 308 |
py-why/dowhy | 672 | Functional api/refute estimate | * Refactor refuters into functions
* Rename functional_api notebook for clarity
* Add return types to identify_estimate
* Update `__init__.py` for imports
* Add joblib for bootstrap refuter
* Create `refute_estimate` function
* Add types for refuter parameters & return types | null | 2022-10-04 16:18:49+00:00 | 2022-10-07 04:30:22+00:00 | dowhy/causal_refuters/add_unobserved_common_cause.py | import copy
import logging
import math
import numpy as np
import pandas as pd
import scipy.stats
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from tqdm.auto import tqdm
import dowhy.causal_estimators.econml
from dowhy.causal_estimator import CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_estimators.regression_estimator import RegressionEstimator
from dowhy.causal_refuter import CausalRefutation, CausalRefuter
from dowhy.causal_refuters.evalue_sensitivity_analyzer import EValueSensitivityAnalyzer
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer
from dowhy.causal_refuters.non_parametric_sensitivity_analyzer import NonParametricSensitivityAnalyzer
from dowhy.causal_refuters.partial_linear_sensitivity_analyzer import PartialLinearSensitivityAnalyzer
class AddUnobservedCommonCause(CausalRefuter):
"""Add an unobserved confounder for refutation.
AddUnobservedCommonCause class supports three methods:
1) Simulation of an unobserved confounder
2) Linear partial R2 : Sensitivity Analysis for linear models.
3) Non-Parametric partial R2 based : Sensitivity Analysis for non-parametric models.
Supports additional parameters that can be specified in the refute_estimate() method.
"""
def __init__(self, *args, **kwargs):
"""
Initialize the parameters required for the refuter.
For direct_simulation, if effect_strength_on_treatment or effect_strength_on_outcome is not
given, it is calculated automatically as a range between the
minimum and maximum effect strength of observed confounders on treatment
and outcome respectively.
:param simulation_method: The method to use for simulating effect of unobserved confounder. Possible values are ["direct-simulation", "linear-partial-R2", "non-parametric-partial-R2", "e-value"].
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param effect_strength_on_treatment: float, numpy.ndarray: [Used when simulation_method="direct-simulation"] Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param effect_strength_on_outcome: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param partial_r2_confounder_treatment: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param partial_r2_confounder_outcome: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome needed to reduce the estimate by 100%, i.e., bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = False). (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param significance_level: confidence interval for statistical inference(default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param num_splits: number of splits for cross validation. (default = 5). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_data : shuffle data or not before splitting into folds (default = False). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_random_seed: seed for randomly shuffling data. (relevant only for non-parametric-partial-R2 simulation method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
"""
super().__init__(*args, **kwargs)
self.simulation_method = kwargs["simulation_method"] if "simulation_method" in kwargs else "direct-simulation"
self.effect_on_t = (
kwargs["confounders_effect_on_treatment"] if "confounders_effect_on_treatment" in kwargs else "binary_flip"
)
self.effect_on_y = (
kwargs["confounders_effect_on_outcome"] if "confounders_effect_on_outcome" in kwargs else "linear"
)
if self.simulation_method == "direct-simulation":
self.kappa_t = kwargs["effect_strength_on_treatment"] if "effect_strength_on_treatment" in kwargs else None
self.kappa_y = kwargs["effect_strength_on_outcome"] if "effect_strength_on_outcome" in kwargs else None
elif self.simulation_method in ["linear-partial-R2", "non-parametric-partial-R2"]:
self.kappa_t = (
kwargs["partial_r2_confounder_treatment"] if "partial_r2_confounder_treatment" in kwargs else None
)
self.kappa_y = (
kwargs["partial_r2_confounder_outcome"] if "partial_r2_confounder_outcome" in kwargs else None
)
elif self.simulation_method == "e-value":
pass
else:
raise ValueError(
"simulation method is not supported. Try direct-simulation, linear-partial-R2, non-parametric-partial-R2, or e-value"
)
self.frac_strength_treatment = (
kwargs["effect_fraction_on_treatment"] if "effect_fraction_on_treatment" in kwargs else 1
)
self.frac_strength_outcome = (
kwargs["effect_fraction_on_outcome"] if "effect_fraction_on_outcome" in kwargs else 1
)
self.plotmethod = kwargs["plotmethod"] if "plotmethod" in kwargs else "colormesh"
self.percent_change_estimate = kwargs["percent_change_estimate"] if "percent_change_estimate" in kwargs else 1.0
self.significance_level = kwargs["significance_level"] if "significance_level" in kwargs else 0.05
self.confounder_increases_estimate = (
kwargs["confounder_increases_estimate"] if "confounder_increases_estimate" in kwargs else False
)
self.benchmark_common_causes = (
kwargs["benchmark_common_causes"] if "benchmark_common_causes" in kwargs else None
)
self.null_hypothesis_effect = kwargs["null_hypothesis_effect"] if "null_hypothesis_effect" in kwargs else 0
self.plot_estimate = kwargs["plot_estimate"] if "plot_estimate" in kwargs else True
self.num_splits = kwargs["num_splits"] if "num_splits" in kwargs else 5
self.shuffle_data = kwargs["shuffle_data"] if "shuffle_data" in kwargs else False
self.shuffle_random_seed = kwargs["shuffle_random_seed"] if "shuffle_random_seed" in kwargs else None
self.alpha_s_estimator_param_list = (
kwargs["alpha_s_estimator_param_list"] if "alpha_s_estimator_param_list" in kwargs else None
)
self.alpha_s_estimator_list = kwargs["alpha_s_estimator_list"] if "alpha_s_estimator_list" in kwargs else None
self.g_s_estimator_list = kwargs["g_s_estimator_list"] if "g_s_estimator_list" in kwargs else None
self.g_s_estimator_param_list = (
kwargs["g_s_estimator_param_list"] if "g_s_estimator_param_list" in kwargs else None
)
self.plugin_reisz = kwargs["plugin_reisz"] if "plugin_reisz" in kwargs else False
self.logger = logging.getLogger(__name__)
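# Typical invocation (illustrative, hypothetical variable names): this refuter is
# normally reached through CausalModel.refute_estimate, e.g.
#   model.refute_estimate(
#       identified_estimand,
#       estimate,
#       method_name="add_unobserved_common_cause",
#       confounders_effect_on_treatment="binary_flip",
#       confounders_effect_on_outcome="linear",
#       effect_strength_on_treatment=0.05,
#       effect_strength_on_outcome=0.01,
#   )
# which constructs this class with the extra kwargs and then calls refute_estimate() below.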
def infer_default_kappa_t(self, len_kappa_t=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_t."
+ " Provide a value for kappa_t"
)
t = self._data[self._treatment_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_t == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
tmodel = LogisticRegression().fit(observed_common_causes, t)
tpred = tmodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
tcap = tmodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(tcap - tpred)) / tpred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_t == "linear":
# Estimating the regression coefficient from standardized features to t
corrcoef_var_t = np.corrcoef(observed_common_causes, t, rowvar=False)[-1, :-1]
std_dev_t = np.std(t)[0]
max_coeff = max(corrcoef_var_t) * std_dev_t
min_coeff = min(corrcoef_var_t) * std_dev_t
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_treatment)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_t
self.logger.info("(Min, Max) kappa_t for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _compute_min_max_coeff(self, min_coeff, max_coeff, effect_strength_fraction):
max_coeff = effect_strength_fraction * max_coeff
min_coeff = effect_strength_fraction * min_coeff
return min_coeff, max_coeff
def infer_default_kappa_y(self, len_kappa_y=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_y."
+ " Provide a value for kappa_y"
)
y = self._data[self._outcome_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_y == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
ymodel = LogisticRegression().fit(observed_common_causes, y)
ypred = ymodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
ycap = ymodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(ycap - ypred)) / ypred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_y == "linear":
corrcoef_var_y = np.corrcoef(observed_common_causes, y, rowvar=False)[-1, :-1]
std_dev_y = np.std(y)[0]
max_coeff = max(corrcoef_var_y) * std_dev_y
min_coeff = min(corrcoef_var_y) * std_dev_y
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_outcome)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_y
self.logger.info("(Min, Max) kappa_y for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def refute_estimate(self, show_progress_bar=False):
"""
This function attempts to add an unobserved common cause to the outcome and the treatment. At present, the behavior is implemented for one-dimensional continuous
and binary variables. This function can take either single-valued inputs or a range of inputs. The function then looks at the data type of the input and decides on the course of
action.
:return: CausalRefuter: An object that contains the estimated effect and a new effect and the name of the refutation used.
"""
if self.simulation_method == "linear-partial-R2":
if not (isinstance(self._estimate.estimator, LinearRegressionEstimator)):
raise NotImplementedError(
"Currently only LinearRegressionEstimator is supported for Sensitivity Analysis"
)
if len(self._estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
if self.frac_strength_outcome == 1:
self.frac_strength_outcome = self.frac_strength_treatment
analyzer = LinearSensitivityAnalyzer(
estimator=self._estimate.estimator,
data=self._data,
treatment_name=self._treatment_name,
percent_change_estimate=self.percent_change_estimate,
significance_level=self.significance_level,
benchmark_common_causes=self.benchmark_common_causes,
null_hypothesis_effect=self.null_hypothesis_effect,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
common_causes_order=self._estimate.estimator._observed_common_causes.columns,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.simulation_method == "non-parametric-partial-R2":
# If the estimator used is LinearDML, partially linear sensitivity analysis will be automatically chosen
if isinstance(self._estimate.estimator, dowhy.causal_estimators.econml.Econml):
if self._estimate.estimator._econml_methodname == "econml.dml.LinearDML":
analyzer = PartialLinearSensitivityAnalyzer(
estimator=self._estimate._estimator_object,
observed_common_causes=self._estimate.estimator._observed_common_causes,
treatment=self._estimate.estimator._treatment,
outcome=self._estimate.estimator._outcome,
alpha_s_estimator_param_list=self.alpha_s_estimator_param_list,
g_s_estimator_list=self.g_s_estimator_list,
g_s_estimator_param_list=self.g_s_estimator_param_list,
effect_strength_treatment=self.kappa_t,
effect_strength_outcome=self.kappa_y,
benchmark_common_causes=self.benchmark_common_causes,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
analyzer = NonParametricSensitivityAnalyzer(
estimator=self._estimate.estimator,
observed_common_causes=self._estimate.estimator._observed_common_causes,
treatment=self._estimate.estimator._treatment,
outcome=self._estimate.estimator._outcome,
alpha_s_estimator_list=self.alpha_s_estimator_list,
alpha_s_estimator_param_list=self.alpha_s_estimator_param_list,
g_s_estimator_list=self.g_s_estimator_list,
g_s_estimator_param_list=self.g_s_estimator_param_list,
effect_strength_treatment=self.kappa_t,
effect_strength_outcome=self.kappa_y,
benchmark_common_causes=self.benchmark_common_causes,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
theta_s=self._estimate.value,
plugin_reisz=self.plugin_reisz,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.simulation_method == "e-value":
if not isinstance(self._estimate.estimator, RegressionEstimator):
raise NotImplementedError(
"E-Value sensitivity analysis is currently only implemented RegressionEstimator."
)
if len(self._estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
analyzer = EValueSensitivityAnalyzer(
estimate=self._estimate,
estimand=self._target_estimand,
data=self._data,
treatment_name=self._treatment_name[0],
outcome_name=self._outcome_name[0],
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.kappa_t is None:
self.kappa_t = self.infer_default_kappa_t()
if self.kappa_y is None:
self.kappa_y = self.infer_default_kappa_y()
if not isinstance(self.kappa_t, (list, np.ndarray)) and not isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with single value inputs
new_data = copy.deepcopy(self._data)
new_data = self.include_confounders_effect(new_data, self.kappa_t, self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(new_data, self._target_estimand, self._estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
refute.new_effect_array = np.array(new_effect.value)
refute.new_effect = new_effect.value
refute.add_refuter(self)
return refute
else: # Deal with multiple value inputs
if isinstance(self.kappa_t, (list, np.ndarray)) and isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with range inputs
# Get a 2D matrix of values
# x,y = np.meshgrid(self.kappa_t, self.kappa_y) # x,y are both MxN
results_matrix = np.random.rand(
len(self.kappa_t), len(self.kappa_y)
) # Matrix to hold all the results of NxM
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
for j in range(len(self.kappa_y)):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y[j])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value,
new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause",
)
results_matrix[i][j] = refute.new_effect # Populate the results
refute.new_effect_array = results_matrix
refute.new_effect = (np.min(results_matrix), np.max(results_matrix))
# Store the values into the refute object
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
oe = self._estimate.value
contour_levels = [oe / 4.0, oe / 2.0, (3.0 / 4) * oe, oe]
contour_levels.extend([0, np.min(results_matrix), np.max(results_matrix)])
if self.plotmethod == "contour":
cp = plt.contourf(self.kappa_y, self.kappa_t, results_matrix, levels=sorted(contour_levels))
# Adding a label on the contour line for the original estimate
fmt = {}
trueeffect_index = np.where(cp.levels == oe)[0][0]
fmt[cp.levels[trueeffect_index]] = "Estimated Effect"
# Label every other level using strings
plt.clabel(cp, [cp.levels[trueeffect_index]], inline=True, fmt=fmt)
plt.colorbar(cp)
elif self.plotmethod == "colormesh":
cp = plt.pcolormesh(self.kappa_y, self.kappa_t, results_matrix, shading="nearest")
plt.colorbar(cp, ticks=contour_levels)
ax.yaxis.set_ticks(self.kappa_t)
ax.xaxis.set_ticks(self.kappa_y)
plt.xticks(rotation=45)
ax.set_title("Effect of Unobserved Common Cause")
ax.set_ylabel("Value of Linear Constant on Treatment")
ax.set_xlabel("Value of Linear Constant on Outcome")
plt.show()
return refute
elif isinstance(self.kappa_t, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_t))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_t, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Treatment")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
elif isinstance(self.kappa_y, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_y))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_y)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t, self.kappa_y[i])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_y, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Outcome")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
def include_confounders_effect(self, new_data, kappa_t, kappa_y):
"""
This function deals with the change in the value of the data due to the effect of the unobserved confounder.
In the case of a binary flip, we flip only if the random number is greater than the threshold set.
In the case of a linear effect, we use the variable as the linear regression constant.
:param new_data: pandas.DataFrame: The data to be changed due to the effects of the unobserved confounder.
:param kappa_t: numpy.float64: For the confounder's effect on treatment, the flip probability for binary_flip or the regression coefficient for a linear effect.
:param kappa_y: numpy.float64: For the confounder's effect on outcome, the flip probability for binary_flip or the regression coefficient for a linear effect.
:return: pandas.DataFrame: The DataFrame that includes the effects of the unobserved confounder.
"""
num_rows = self._data.shape[0]
stdnorm = scipy.stats.norm()
w_random = stdnorm.rvs(num_rows)
if self.effect_on_t == "binary_flip":
alpha = 2 * kappa_t - 1 if kappa_t >= 0.5 else 1 - 2 * kappa_t
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_t >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._treatment_name] = (
1 - new_data.loc[rel_interval <= w_random, self._treatment_name]
)
for tname in self._treatment_name:
if pd.api.types.is_bool_dtype(self._data[tname]):
new_data = new_data.astype({tname: "bool"}, copy=False)
elif self.effect_on_t == "linear":
confounder_t_effect = kappa_t * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._treatment_name] = new_data[self._treatment_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_t_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
if self.effect_on_y == "binary_flip":
alpha = 2 * kappa_y - 1 if kappa_y >= 0.5 else 1 - 2 * kappa_y
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_y >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._outcome_name] = (
1 - new_data.loc[rel_interval <= w_random, self._outcome_name]
)
for yname in self._outcome_name:
if pd.api.types.is_bool_dtype(self._data[yname]):
new_data = new_data.astype({yname: "bool"}, copy=False)
elif self.effect_on_y == "linear":
confounder_y_effect = (-1) * kappa_y * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._outcome_name] = new_data[self._outcome_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_y_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
return new_data
def include_simulated_confounder(self, convergence_threshold=0.1, c_star_max=1000):
"""
This function simulates an unobserved confounder based on the data using the following steps:
1. It calculates the "residuals" from the treatment and outcome model
i.) The outcome model has outcome as the dependent variable and all the observed variables including treatment as independent variables
ii.) The treatment model has treatment as the dependent variable and all the observed variables as independent variables.
2. U is an intermediate random variable drawn from the normal distribution with the weighted average of residuals as mean and a unit variance
U ~ N(c1*d_y + c2*d_t, 1)
where
*d_y and d_t are residuals from the treatment and outcome model
*c1 and c2 are coefficients to the residuals
3. The final U, which is the simulated unobserved confounder is obtained by debiasing the intermediate variable U by residualising it with X
Choosing the coefficients c1 and c2:
The coefficients are chosen based on these basic assumptions:
1. There is a hyperbolic relationship satisfying c1*c2 = c_star
2. c_star is chosen from a range of possible values based on the correlation of the obtained simulated variable with outcome and treatment.
3. The product of correlations with treatment and outcome should be at a minimum distance to the maximum correlations with treatment and outcome in any of the observed confounders
4. The ratio of the weights should be such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
:param c_star_max: The maximum possible value for the hyperbolic curve on which the coefficients to the residuals lie. It defaults to 1000 in the code if not specified by the user.
:type c_star_max: int
:param convergence_threshold: The threshold to check the plateauing of the correlation while selecting a c_star. It defaults to 0.1 in the code if not specified by the user.
:type convergence_threshold: float
:returns: The simulated values of the unobserved confounder based on the data
:rtype: pandas.core.series.Series
"""
# Obtaining the list of observed variables
required_variables = True
observed_variables = self.choose_variables(required_variables)
observed_variables_with_treatment_and_outcome = observed_variables + self._treatment_name + self._outcome_name
# Taking a subset of the dataframe that has only observed variables
self._data = self._data[observed_variables_with_treatment_and_outcome]
# Residuals from the outcome model obtained by fitting a linear model
y = self._data[self._outcome_name[0]]
observed_variables_with_treatment = observed_variables + self._treatment_name
X = self._data[observed_variables_with_treatment]
model = sm.OLS(y, X.astype("float"))
results = model.fit()
residuals_y = y - results.fittedvalues
d_y = list(pd.Series(residuals_y))
# Residuals from the treatment model obtained by fitting a linear model
t = self._data[self._treatment_name[0]].astype("int64")
X = self._data[observed_variables]
model = sm.OLS(t, X)
results = model.fit()
residuals_t = t - results.fittedvalues
d_t = list(pd.Series(residuals_t))
# Initialising product_cor_metric_observed with a really low value as finding maximum
product_cor_metric_observed = -10000000000
for i in observed_variables:
current_obs_confounder = self._data[i]
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_obs_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_obs_confounder.corr(treatment_values)
product_cor_metric_current = correlation_y * correlation_t
if product_cor_metric_current >= product_cor_metric_observed:
product_cor_metric_observed = product_cor_metric_current
correlation_t_observed = correlation_t
correlation_y_observed = correlation_y
        # The user has an option to give the effect_strength_on_y and effect_strength_on_t, which can then be used instead of the maximum correlation with treatment and outcome in the observed variables, as they specify the desired effect.
if self.kappa_t is not None:
correlation_t_observed = self.kappa_t
if self.kappa_y is not None:
correlation_y_observed = self.kappa_y
# Choosing a c_star based on the data.
        # The correlations stop increasing upon increasing c_star after a certain value, that is, they plateau, and we choose c_star to be the value at which they plateau.
correlation_y_list = []
correlation_t_list = []
product_cor_metric_simulated_list = []
x_list = []
step = int(c_star_max / 10)
for i in range(0, int(c_star_max), step):
c1 = math.sqrt(i)
c2 = c1
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
correlation_y_list.append(correlation_y)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
correlation_t_list.append(correlation_t)
product_cor_metric_simulated = correlation_y * correlation_t
product_cor_metric_simulated_list.append(product_cor_metric_simulated)
x_list.append(i)
index = 1
while index < len(correlation_y_list):
if (correlation_y_list[index] - correlation_y_list[index - 1]) <= convergence_threshold:
c_star = x_list[index]
break
index = index + 1
        # Choosing c1 and c2 based on the hyperbolic relationship once c_star is chosen, by going over various combinations of c1 and c2 values and choosing the combination
        # which maintains the minimum distance between the product of correlations of the simulated variable and the product of maximum correlations of one of the observed variables,
        # and additionally checking that the ratio of the weights maintains the ratio of the maximum possible observed coefficients within some confidence interval
# c1_final and c2_final are initialised to the values on the hyperbolic curve such that c1_final = c2_final and c1_final*c2_final = c_star
c1_final = math.sqrt(c_star)
c2_final = math.sqrt(c_star)
# initialising min_distance_between_product_cor_metrics to be a value greater than 1
min_distance_between_product_cor_metrics = 1.5
i = 0.05
threshold = c_star / 0.05
while i <= threshold:
c2 = i
c1 = c_star / c2
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
product_cor_metric_simulated = correlation_y * correlation_t
if min_distance_between_product_cor_metrics >= abs(
product_cor_metric_simulated - product_cor_metric_observed
):
min_distance_between_product_cor_metrics = abs(
product_cor_metric_simulated - product_cor_metric_observed
)
additional_condition = correlation_y_observed / correlation_t_observed
if ((c1 / c2) <= (additional_condition + 0.3 * additional_condition)) and (
(c1 / c2) >= (additional_condition - 0.3 * additional_condition)
): # choose minimum positive value
c1_final = c1
c2_final = c2
i = i * 1.5
"""#closed form solution
print("c_star_max before closed form", c_star_max)
if max_correlation_with_t == -1000:
c2 = 0
c1 = c_star_max
else:
additional_condition = abs(max_correlation_with_y/max_correlation_with_t)
print("additional_condition", additional_condition)
c2 = math.sqrt(c_star_max/additional_condition)
c1 = c_star_max/c2"""
final_U = self.generate_confounder_from_residuals(c1_final, c2_final, d_y, d_t, X)
return final_U
def generate_confounder_from_residuals(self, c1, c2, d_y, d_t, X):
"""
        This function takes the residuals from the treatment and outcome models together with their coefficients, and simulates the intermediate random variable U by drawing,
        for each row, from a normal distribution centred at the weighted residuals (with unit variance), and then debiasing the intermediate variable to get the final variable.
:param c1: coefficient to the residual from the outcome model
:type float
:param c2: coefficient to the residual from the treatment model
:type float
:param d_y: residuals from the outcome model
:type list
:param d_t: residuals from the treatment model
:type list
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
U = []
for j in range(len(d_t)):
simulated_variable_mean = c1 * d_y[j] + c2 * d_t[j]
simulated_variable_stddev = 1
U.append(np.random.normal(simulated_variable_mean, simulated_variable_stddev, 1))
U = np.array(U)
model = sm.OLS(U, X)
results = model.fit()
U = U.reshape(
-1,
)
final_U = U - results.fittedvalues.values
        final_U = pd.Series(final_U)
return final_U
| import copy
import logging
import math
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import scipy.stats
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from tqdm.auto import tqdm
import dowhy.causal_estimators.econml
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_estimators.regression_estimator import RegressionEstimator
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.causal_refuter import CausalRefutation, CausalRefuter, choose_variables
from dowhy.causal_refuters.evalue_sensitivity_analyzer import EValueSensitivityAnalyzer
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer
from dowhy.causal_refuters.non_parametric_sensitivity_analyzer import NonParametricSensitivityAnalyzer
from dowhy.causal_refuters.partial_linear_sensitivity_analyzer import PartialLinearSensitivityAnalyzer
logger = logging.getLogger(__name__)
class AddUnobservedCommonCause(CausalRefuter):
"""Add an unobserved confounder for refutation.
AddUnobservedCommonCause class supports three methods:
1) Simulation of an unobserved confounder
2) Linear partial R2 : Sensitivity Analysis for linear models.
    3) Non-Parametric partial R2 based : Sensitivity Analysis for non-parametric models.
Supports additional parameters that can be specified in the refute_estimate() method.
"""
def __init__(self, *args, **kwargs):
"""
Initialize the parameters required for the refuter.
For direct_simulation, if effect_strength_on_treatment or effect_strength_on_outcome is not
given, it is calculated automatically as a range between the
minimum and maximum effect strength of observed confounders on treatment
and outcome respectively.
:param simulation_method: The method to use for simulating effect of unobserved confounder. Possible values are ["direct-simulation", "linear-partial-R2", "non-parametric-partial-R2", "e-value"].
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param effect_strength_on_treatment: float, numpy.ndarray: [Used when simulation_method="direct-simulation"] Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param effect_strength_on_outcome: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param partial_r2_confounder_treatment: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param partial_r2_confounder_outcome: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
        :param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome in order to reduce the estimate by 100% i.e bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = False). (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param significance_level: confidence interval for statistical inference(default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param num_splits: number of splits for cross validation. (default = 5). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_data : shuffle data or not before splitting into folds (default = False). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_random_seed: seed for randomly shuffling data. (relevant only for non-parametric-partial-R2 simulation method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
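        Example (an illustrative sketch; it assumes a fitted ``CausalModel`` named ``model``
        and an ``identified_estimand``/``estimate`` pair already obtained from it)::
            refutation = model.refute_estimate(
                identified_estimand,
                estimate,
                method_name="add_unobserved_common_cause",
                simulation_method="direct-simulation",
                confounders_effect_on_treatment="binary_flip",
                confounders_effect_on_outcome="linear",
                effect_strength_on_treatment=0.05,
                effect_strength_on_outcome=0.02,
            )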
"""
super().__init__(*args, **kwargs)
self.simulation_method = kwargs["simulation_method"] if "simulation_method" in kwargs else "direct-simulation"
self.effect_on_t = (
kwargs["confounders_effect_on_treatment"] if "confounders_effect_on_treatment" in kwargs else "binary_flip"
)
self.effect_on_y = (
kwargs["confounders_effect_on_outcome"] if "confounders_effect_on_outcome" in kwargs else "linear"
)
if self.simulation_method == "direct-simulation":
self.kappa_t = kwargs["effect_strength_on_treatment"] if "effect_strength_on_treatment" in kwargs else None
self.kappa_y = kwargs["effect_strength_on_outcome"] if "effect_strength_on_outcome" in kwargs else None
elif self.simulation_method in ["linear-partial-R2", "non-parametric-partial-R2"]:
self.kappa_t = (
kwargs["partial_r2_confounder_treatment"] if "partial_r2_confounder_treatment" in kwargs else None
)
self.kappa_y = (
kwargs["partial_r2_confounder_outcome"] if "partial_r2_confounder_outcome" in kwargs else None
)
elif self.simulation_method == "e-value":
pass
else:
raise ValueError(
"simulation method is not supported. Try direct-simulation, linear-partial-R2, non-parametric-partial-R2, or e-value"
)
self.frac_strength_treatment = (
kwargs["effect_fraction_on_treatment"] if "effect_fraction_on_treatment" in kwargs else 1
)
self.frac_strength_outcome = (
kwargs["effect_fraction_on_outcome"] if "effect_fraction_on_outcome" in kwargs else 1
)
self.plotmethod = kwargs["plotmethod"] if "plotmethod" in kwargs else "colormesh"
self.percent_change_estimate = kwargs["percent_change_estimate"] if "percent_change_estimate" in kwargs else 1.0
self.significance_level = kwargs["significance_level"] if "significance_level" in kwargs else 0.05
self.confounder_increases_estimate = (
kwargs["confounder_increases_estimate"] if "confounder_increases_estimate" in kwargs else False
)
self.benchmark_common_causes = (
kwargs["benchmark_common_causes"] if "benchmark_common_causes" in kwargs else None
)
self.null_hypothesis_effect = kwargs["null_hypothesis_effect"] if "null_hypothesis_effect" in kwargs else 0
self.plot_estimate = kwargs["plot_estimate"] if "plot_estimate" in kwargs else True
self.num_splits = kwargs["num_splits"] if "num_splits" in kwargs else 5
self.shuffle_data = kwargs["shuffle_data"] if "shuffle_data" in kwargs else False
self.shuffle_random_seed = kwargs["shuffle_random_seed"] if "shuffle_random_seed" in kwargs else None
self.alpha_s_estimator_param_list = (
kwargs["alpha_s_estimator_param_list"] if "alpha_s_estimator_param_list" in kwargs else None
)
self.alpha_s_estimator_list = kwargs["alpha_s_estimator_list"] if "alpha_s_estimator_list" in kwargs else None
self.g_s_estimator_list = kwargs["g_s_estimator_list"] if "g_s_estimator_list" in kwargs else None
self.g_s_estimator_param_list = (
kwargs["g_s_estimator_param_list"] if "g_s_estimator_param_list" in kwargs else None
)
self.plugin_reisz = kwargs["plugin_reisz"] if "plugin_reisz" in kwargs else False
self.logger = logging.getLogger(__name__)
def refute_estimate(self, show_progress_bar=False):
if self.simulation_method == "linear-partial-R2":
return sensitivity_linear_partial_r2(
self._data,
self._estimate,
self._treatment_name,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.percent_change_estimate,
self.benchmark_common_causes,
self.significance_level,
self.null_hypothesis_effect,
self.plot_estimate,
)
elif self.simulation_method == "non-parametric-partial-R2":
return sensitivity_non_parametric_partial_r2(
self._estimate,
self.kappa_t,
self.kappa_y,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.benchmark_common_causes,
self.plot_estimate,
self.alpha_s_estimator_list,
self.alpha_s_estimator_param_list,
self.g_s_estimator_list,
self.g_s_estimator_param_list,
self.plugin_reisz,
)
elif self.simulation_method == "e-value":
return sensitivity_e_value(
self._data,
self._target_estimand,
self._estimate,
self._treatment_name,
self._outcome_name,
self.plot_estimate,
)
elif self.simulation_method == "direct-simulation":
refute = sensitivity_simulation(
self._data,
self._target_estimand,
self._estimate,
self._treatment_name,
self._outcome_name,
self.kappa_t,
self.kappa_y,
self.effect_on_t,
self.effect_on_y,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.plotmethod,
show_progress_bar,
)
refute.add_refuter(self)
return refute
def _infer_default_kappa_t(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
treatment_name: List[str],
effect_on_t: str,
frac_strength_treatment: float,
len_kappa_t: int = 10,
):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_t."
+ " Provide a value for kappa_t"
)
t = data[treatment_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if effect_on_t == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
tmodel = LogisticRegression().fit(observed_common_causes, t)
tpred = tmodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
tcap = tmodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(tcap - tpred)) / tpred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif effect_on_t == "linear":
# Estimating the regression coefficient from standardized features to t
corrcoef_var_t = np.corrcoef(observed_common_causes, t, rowvar=False)[-1, :-1]
std_dev_t = np.std(t)[0]
max_coeff = max(corrcoef_var_t) * std_dev_t
min_coeff = min(corrcoef_var_t) * std_dev_t
else:
raise NotImplementedError("'" + effect_on_t + "' method not supported for confounders' effect on treatment")
min_coeff, max_coeff = _compute_min_max_coeff(min_coeff, max_coeff, frac_strength_treatment)
    # By default, consider 10 values of the effect of the unobserved confounder
    # (these become the 10 points of the downstream sensitivity plot)
step = (max_coeff - min_coeff) / len_kappa_t
logger.info("(Min, Max) kappa_t for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _compute_min_max_coeff(min_coeff: float, max_coeff: float, effect_strength_fraction: np.ndarray):
max_coeff = effect_strength_fraction * max_coeff
min_coeff = effect_strength_fraction * min_coeff
return min_coeff, max_coeff
def _infer_default_kappa_y(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
outcome_name: List[str],
effect_on_y: str,
frac_strength_outcome: float,
len_kappa_y: int = 10,
):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_y."
+ " Provide a value for kappa_y"
)
y = data[outcome_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if effect_on_y == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
ymodel = LogisticRegression().fit(observed_common_causes, y)
ypred = ymodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
ycap = ymodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(ycap - ypred)) / ypred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif effect_on_y == "linear":
corrcoef_var_y = np.corrcoef(observed_common_causes, y, rowvar=False)[-1, :-1]
std_dev_y = np.std(y)[0]
max_coeff = max(corrcoef_var_y) * std_dev_y
min_coeff = min(corrcoef_var_y) * std_dev_y
else:
raise NotImplementedError("'" + effect_on_y + "' method not supported for confounders' effect on outcome")
min_coeff, max_coeff = _compute_min_max_coeff(min_coeff, max_coeff, frac_strength_outcome)
    # By default, consider 10 values of the effect of the unobserved confounder
    # (these become the 10 points of the downstream sensitivity plot)
step = (max_coeff - min_coeff) / len_kappa_y
logger.info("(Min, Max) kappa_y for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _include_confounders_effect(
data: pd.DataFrame,
new_data: pd.DataFrame,
effect_on_t: str,
treatment_name: str,
kappa_t: float,
effect_on_y: str,
outcome_name: str,
kappa_y: float,
):
"""
This function deals with the change in the value of the data due to the effect of the unobserved confounder.
    In the case of a binary flip, we flip the value only if the random number drawn for that row exceeds the threshold implied by the given strength.
    In the case of a linear effect, the confounder's random draw, scaled by the given regression coefficient, is added to the treatment (and subtracted from the outcome).
:param new_data: pandas.DataFrame: The data to be changed due to the effects of the unobserved confounder.
:param kappa_t: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:param kappa_y: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:return: pandas.DataFrame: The DataFrame that includes the effects of the unobserved confounder.
"""
num_rows = data.shape[0]
stdnorm = scipy.stats.norm()
w_random = stdnorm.rvs(num_rows)
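    # For "binary_flip", each row gets a standard-normal draw w_random; the value is flipped
    # for rows whose draw falls beyond a threshold chosen so that the flip probability
    # equals kappa_t (for the treatment) or kappa_y (for the outcome).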
if effect_on_t == "binary_flip":
alpha = 2 * kappa_t - 1 if kappa_t >= 0.5 else 1 - 2 * kappa_t
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_t >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, treatment_name] = (
1 - new_data.loc[rel_interval <= w_random, treatment_name]
)
for tname in treatment_name:
if pd.api.types.is_bool_dtype(data[tname]):
new_data = new_data.astype({tname: "bool"}, copy=False)
elif effect_on_t == "linear":
confounder_t_effect = kappa_t * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[treatment_name] = new_data[treatment_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_t_effect
)
else:
raise NotImplementedError("'" + effect_on_t + "' method not supported for confounders' effect on treatment")
if effect_on_y == "binary_flip":
alpha = 2 * kappa_y - 1 if kappa_y >= 0.5 else 1 - 2 * kappa_y
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_y >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, outcome_name] = 1 - new_data.loc[rel_interval <= w_random, outcome_name]
for yname in outcome_name:
if pd.api.types.is_bool_dtype(data[yname]):
new_data = new_data.astype({yname: "bool"}, copy=False)
elif effect_on_y == "linear":
confounder_y_effect = (-1) * kappa_y * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[outcome_name] = new_data[outcome_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_y_effect
)
else:
raise NotImplementedError("'" + effect_on_y + "' method not supported for confounders' effect on outcome")
return new_data
def include_simulated_confounder(
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
kappa_t: float,
kappa_y: float,
convergence_threshold: float = 0.1,
c_star_max: int = 1000,
):
"""
This function simulates an unobserved confounder based on the data using the following steps:
1. It calculates the "residuals" from the treatment and outcome model
i.) The outcome model has outcome as the dependent variable and all the observed variables including treatment as independent variables
ii.) The treatment model has treatment as the dependent variable and all the observed variables as independent variables.
2. U is an intermediate random variable drawn from the normal distribution with the weighted average of residuals as mean and a unit variance
U ~ N(c1*d_y + c2*d_t, 1)
where
*d_y and d_t are residuals from the treatment and outcome model
*c1 and c2 are coefficients to the residuals
3. The final U, which is the simulated unobserved confounder is obtained by debiasing the intermediate variable U by residualising it with X
Choosing the coefficients c1 and c2:
The coefficients are chosen based on these basic assumptions:
1. There is a hyperbolic relationship satisfying c1*c2 = c_star
2. c_star is chosen from a range of possible values based on the correlation of the obtained simulated variable with outcome and treatment.
3. The product of correlations with treatment and outcome should be at a minimum distance to the maximum correlations with treatment and outcome in any of the observed confounders
4. The ratio of the weights should be such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
:param c_star_max: The maximum possible value for the hyperbolic curve on which the coefficients to the residuals lie. It defaults to 1000 in the code if not specified by the user.
:type int
:param convergence_threshold: The threshold to check the plateauing of the correlation while selecting a c_star. It defaults to 0.1 in the code if not specified by the user
:type float
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
# Obtaining the list of observed variables
required_variables = True
observed_variables = choose_variables(required_variables)
observed_variables_with_treatment_and_outcome = observed_variables + treatment_name + outcome_name
# Taking a subset of the dataframe that has only observed variables
data = data[observed_variables_with_treatment_and_outcome]
# Residuals from the outcome model obtained by fitting a linear model
y = data[outcome_name[0]]
observed_variables_with_treatment = observed_variables + treatment_name
X = data[observed_variables_with_treatment]
model = sm.OLS(y, X.astype("float"))
results = model.fit()
residuals_y = y - results.fittedvalues
d_y = list(pd.Series(residuals_y))
# Residuals from the treatment model obtained by fitting a linear model
t = data[treatment_name[0]].astype("int64")
X = data[observed_variables]
model = sm.OLS(t, X)
results = model.fit()
residuals_t = t - results.fittedvalues
d_t = list(pd.Series(residuals_t))
# Initialising product_cor_metric_observed with a really low value as finding maximum
product_cor_metric_observed = -10000000000
for i in observed_variables:
current_obs_confounder = data[i]
outcome_values = data[outcome_name[0]]
correlation_y = current_obs_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_obs_confounder.corr(treatment_values)
product_cor_metric_current = correlation_y * correlation_t
if product_cor_metric_current >= product_cor_metric_observed:
product_cor_metric_observed = product_cor_metric_current
correlation_t_observed = correlation_t
correlation_y_observed = correlation_y
    # The user has an option to give the effect_strength_on_y and effect_strength_on_t, which can then be used instead of the maximum correlation with treatment and outcome in the observed variables, as they specify the desired effect.
if kappa_t is not None:
correlation_t_observed = kappa_t
if kappa_y is not None:
correlation_y_observed = kappa_y
# Choosing a c_star based on the data.
    # The correlations stop increasing upon increasing c_star after a certain value, that is, they plateau, and we choose c_star to be the value at which they plateau.
correlation_y_list = []
correlation_t_list = []
product_cor_metric_simulated_list = []
x_list = []
step = int(c_star_max / 10)
for i in range(0, int(c_star_max), step):
c1 = math.sqrt(i)
c2 = c1
final_U = _generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = data[outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
correlation_y_list.append(correlation_y)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
correlation_t_list.append(correlation_t)
product_cor_metric_simulated = correlation_y * correlation_t
product_cor_metric_simulated_list.append(product_cor_metric_simulated)
x_list.append(i)
index = 1
while index < len(correlation_y_list):
if (correlation_y_list[index] - correlation_y_list[index - 1]) <= convergence_threshold:
c_star = x_list[index]
break
index = index + 1
    # Choosing c1 and c2 based on the hyperbolic relationship once c_star is chosen, by going over various combinations of c1 and c2 values and choosing the combination
    # which maintains the minimum distance between the product of correlations of the simulated variable and the product of maximum correlations of one of the observed variables,
    # and additionally checking that the ratio of the weights maintains the ratio of the maximum possible observed coefficients within some confidence interval
# c1_final and c2_final are initialised to the values on the hyperbolic curve such that c1_final = c2_final and c1_final*c2_final = c_star
c1_final = math.sqrt(c_star)
c2_final = math.sqrt(c_star)
# initialising min_distance_between_product_cor_metrics to be a value greater than 1
min_distance_between_product_cor_metrics = 1.5
i = 0.05
threshold = c_star / 0.05
while i <= threshold:
c2 = i
c1 = c_star / c2
final_U = _generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = data[outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
product_cor_metric_simulated = correlation_y * correlation_t
if min_distance_between_product_cor_metrics >= abs(product_cor_metric_simulated - product_cor_metric_observed):
min_distance_between_product_cor_metrics = abs(product_cor_metric_simulated - product_cor_metric_observed)
additional_condition = correlation_y_observed / correlation_t_observed
if ((c1 / c2) <= (additional_condition + 0.3 * additional_condition)) and (
(c1 / c2) >= (additional_condition - 0.3 * additional_condition)
): # choose minimum positive value
c1_final = c1
c2_final = c2
i = i * 1.5
"""#closed form solution
print("c_star_max before closed form", c_star_max)
if max_correlation_with_t == -1000:
c2 = 0
c1 = c_star_max
else:
additional_condition = abs(max_correlation_with_y/max_correlation_with_t)
print("additional_condition", additional_condition)
c2 = math.sqrt(c_star_max/additional_condition)
c1 = c_star_max/c2"""
final_U = _generate_confounder_from_residuals(c1_final, c2_final, d_y, d_t, X)
return final_U
def _generate_confounder_from_residuals(c1, c2, d_y, d_t, X):
"""
    This function takes the residuals from the treatment and outcome models together with their coefficients, and simulates the intermediate random variable U by drawing,
    for each row, from a normal distribution centred at the weighted residuals (with unit variance), and then debiasing the intermediate variable to get the final variable.
:param c1: coefficient to the residual from the outcome model
:type float
:param c2: coefficient to the residual from the treatment model
:type float
:param d_y: residuals from the outcome model
:type list
:param d_t: residuals from the treatment model
:type list
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
U = []
for j in range(len(d_t)):
simulated_variable_mean = c1 * d_y[j] + c2 * d_t[j]
simulated_variable_stddev = 1
U.append(np.random.normal(simulated_variable_mean, simulated_variable_stddev, 1))
U = np.array(U)
model = sm.OLS(U, X)
results = model.fit()
U = U.reshape(
-1,
)
final_U = U - results.fittedvalues.values
    final_U = pd.Series(final_U)
return final_U
def sensitivity_linear_partial_r2(
data: pd.DataFrame,
estimate: CausalEstimate,
treatment_name: str,
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
percent_change_estimate: float = 1.0,
benchmark_common_causes: Optional[List[str]] = None,
significance_level: Optional[float] = None,
null_hypothesis_effect: Optional[float] = None,
plot_estimate: bool = True,
) -> LinearSensitivityAnalyzer:
"""Add an unobserved confounder for refutation using Linear partial R2 methond (Sensitivity Analysis for linear models).
:param data: pd.DataFrame: Data to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param treatment_name: str: Name of the treatment
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome in order to reduce the estimate by 100% i.e bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param significance_level: confidence interval for statistical inference(default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
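    A minimal usage sketch (illustrative only; it assumes ``data`` and a ``CausalEstimate``
    ``estimate`` produced by a ``LinearRegressionEstimator``, with hypothetical variable names)::
        analyzer = sensitivity_linear_partial_r2(
            data,
            estimate,
            treatment_name=["v0"],
            benchmark_common_causes=["W3"],
            significance_level=0.05,
        )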
"""
if not (isinstance(estimate.estimator, LinearRegressionEstimator)):
raise NotImplementedError("Currently only LinearRegressionEstimator is supported for Sensitivity Analysis")
if len(estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
if frac_strength_outcome == 1:
frac_strength_outcome = frac_strength_treatment
analyzer = LinearSensitivityAnalyzer(
estimator=estimate.estimator,
data=data,
treatment_name=treatment_name,
percent_change_estimate=percent_change_estimate,
significance_level=significance_level,
benchmark_common_causes=benchmark_common_causes,
null_hypothesis_effect=null_hypothesis_effect,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
common_causes_order=estimate.estimator._observed_common_causes.columns,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
def sensitivity_non_parametric_partial_r2(
estimate: CausalEstimate,
kappa_t: Optional[Union[float, np.ndarray]] = None,
kappa_y: Optional[Union[float, np.ndarray]] = None,
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
benchmark_common_causes: Optional[List[str]] = None,
plot_estimate: bool = True,
alpha_s_estimator_list: Optional[List] = None,
alpha_s_estimator_param_list: Optional[List[Dict]] = None,
g_s_estimator_list: Optional[List] = None,
g_s_estimator_param_list: Optional[List[Dict]] = None,
plugin_reisz: bool = False,
) -> Union[PartialLinearSensitivityAnalyzer, NonParametricSensitivityAnalyzer]:
"""Add an unobserved confounder for refutation using Non-parametric partial R2 methond (Sensitivity Analysis for non-parametric models).
:param estimate: CausalEstimate: Estimate to run the refutation
:param kappa_t: float, numpy.ndarray: Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param kappa_y: float, numpy.ndarray: Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param alpha_s_estimator_list: list of estimator objects for estimating alpha_s. These objects should have fit() and predict() methods (relevant only for non-parametric-partial-R2 method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
    :param plugin_reisz: bool: Flag on whether to use the plugin estimator or the nonparametric estimator for reisz representer function (alpha_s).
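    A minimal usage sketch (illustrative only; it assumes a ``CausalEstimate`` ``estimate``
    from a supported estimator, and a hypothetical benchmark confounder name)::
        analyzer = sensitivity_non_parametric_partial_r2(
            estimate,
            kappa_t=0.05,
            kappa_y=0.05,
            benchmark_common_causes=["W3"],
        )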
"""
# If the estimator used is LinearDML, partially linear sensitivity analysis will be automatically chosen
if isinstance(estimate.estimator, dowhy.causal_estimators.econml.Econml):
if estimate.estimator._econml_methodname == "econml.dml.LinearDML":
analyzer = PartialLinearSensitivityAnalyzer(
estimator=estimate._estimator_object,
observed_common_causes=estimate.estimator._observed_common_causes,
treatment=estimate.estimator._treatment,
outcome=estimate.estimator._outcome,
alpha_s_estimator_param_list=alpha_s_estimator_param_list,
g_s_estimator_list=g_s_estimator_list,
g_s_estimator_param_list=g_s_estimator_param_list,
effect_strength_treatment=kappa_t,
effect_strength_outcome=kappa_y,
benchmark_common_causes=benchmark_common_causes,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
analyzer = NonParametricSensitivityAnalyzer(
estimator=estimate.estimator,
observed_common_causes=estimate.estimator._observed_common_causes,
treatment=estimate.estimator._treatment,
outcome=estimate.estimator._outcome,
alpha_s_estimator_list=alpha_s_estimator_list,
alpha_s_estimator_param_list=alpha_s_estimator_param_list,
g_s_estimator_list=g_s_estimator_list,
g_s_estimator_param_list=g_s_estimator_param_list,
effect_strength_treatment=kappa_t,
effect_strength_outcome=kappa_y,
benchmark_common_causes=benchmark_common_causes,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
theta_s=estimate.value,
plugin_reisz=plugin_reisz,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
def sensitivity_e_value(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_name: List[str],
outcome_name: List[str],
plot_estimate: bool = True,
) -> EValueSensitivityAnalyzer:
if not isinstance(estimate.estimator, RegressionEstimator):
raise NotImplementedError("E-Value sensitivity analysis is currently only implemented RegressionEstimator.")
if len(estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
analyzer = EValueSensitivityAnalyzer(
estimate=estimate,
estimand=target_estimand,
data=data,
treatment_name=treatment_name[0],
outcome_name=outcome_name[0],
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
def sensitivity_simulation(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_name: str,
outcome_name: str,
kappa_t: Optional[Union[float, np.ndarray]] = None,
kappa_y: Optional[Union[float, np.ndarray]] = None,
confounders_effect_on_treatment: str = "binary_flip",
confounders_effect_on_outcome: str = "linear",
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
plotmethod: Optional[str] = None,
show_progress_bar=False,
**_,
) -> CausalRefutation:
"""
    This function attempts to add an unobserved common cause to the outcome and the treatment. At present, this is implemented for one-dimensional continuous
    and binary variables. The function can take either single-valued inputs or a range of inputs; it then looks at the data type of the input and decides on the course of
    action.
:param data: pd.DataFrame: Data to run the refutation
:param target_estimand: IdentifiedEstimand: Identified estimand to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param treatment_name: str: Name of the treatment
:param outcome_name: str: Name of the outcome
:param kappa_t: float, numpy.ndarray: Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param kappa_y: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
    :param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
    :return: CausalRefutation: An object that contains the estimated effect, the new effect, and the name of the refutation used.
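    A minimal usage sketch (illustrative only; it assumes ``data``, an ``IdentifiedEstimand``
    ``target_estimand`` and a ``CausalEstimate`` ``estimate``, with hypothetical treatment and
    outcome names)::
        refutation = sensitivity_simulation(
            data,
            target_estimand,
            estimate,
            treatment_name=["v0"],
            outcome_name=["y"],
            confounders_effect_on_treatment="binary_flip",
            confounders_effect_on_outcome="linear",
            kappa_t=0.05,
            kappa_y=0.02,
        )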
"""
if kappa_t is None:
kappa_t = _infer_default_kappa_t(
data, target_estimand, treatment_name, confounders_effect_on_treatment, frac_strength_treatment
)
if kappa_y is None:
kappa_y = _infer_default_kappa_y(
data, target_estimand, outcome_name, confounders_effect_on_outcome, frac_strength_outcome
)
if not isinstance(kappa_t, (list, np.ndarray)) and not isinstance(
kappa_y, (list, np.ndarray)
): # Deal with single value inputs
new_data = copy.deepcopy(data)
new_data = _include_confounders_effect(
data,
new_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t,
confounders_effect_on_outcome,
outcome_name,
kappa_y,
)
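        # Re-fit the same estimator on the modified data and compare the new effect
        # against the original estimate.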
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
refute.new_effect_array = np.array(new_effect.value)
refute.new_effect = new_effect.value
return refute
else: # Deal with multiple value inputs
if isinstance(kappa_t, (list, np.ndarray)) and isinstance(
kappa_y, (list, np.ndarray)
): # Deal with range inputs
# Get a 2D matrix of values
# x,y = np.meshgrid(self.kappa_t, self.kappa_y) # x,y are both MxN
results_matrix = np.random.rand(len(kappa_t), len(kappa_y)) # Matrix to hold all the results of NxM
orig_data = copy.deepcopy(data)
for i in tqdm(
range(len(kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
for j in range(len(kappa_y)):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t[i],
confounders_effect_on_outcome,
outcome_name,
kappa_y[j],
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value,
new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause",
)
results_matrix[i][j] = refute.new_effect # Populate the results
refute.new_effect_array = results_matrix
refute.new_effect = (np.min(results_matrix), np.max(results_matrix))
# Store the values into the refute object
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
oe = estimate.value
contour_levels = [oe / 4.0, oe / 2.0, (3.0 / 4) * oe, oe]
contour_levels.extend([0, np.min(results_matrix), np.max(results_matrix)])
if plotmethod == "contour":
cp = plt.contourf(kappa_y, kappa_t, results_matrix, levels=sorted(contour_levels))
# Adding a label on the contour line for the original estimate
fmt = {}
trueeffect_index = np.where(cp.levels == oe)[0][0]
fmt[cp.levels[trueeffect_index]] = "Estimated Effect"
# Label every other level using strings
plt.clabel(cp, [cp.levels[trueeffect_index]], inline=True, fmt=fmt)
plt.colorbar(cp)
elif plotmethod == "colormesh":
cp = plt.pcolormesh(kappa_y, kappa_t, results_matrix, shading="nearest")
plt.colorbar(cp, ticks=contour_levels)
ax.yaxis.set_ticks(kappa_t)
ax.xaxis.set_ticks(kappa_y)
plt.xticks(rotation=45)
ax.set_title("Effect of Unobserved Common Cause")
ax.set_ylabel("Value of Linear Constant on Treatment")
ax.set_xlabel("Value of Linear Constant on Outcome")
plt.show()
return refute
elif isinstance(kappa_t, (list, np.ndarray)):
outcomes = np.random.rand(len(kappa_t))
orig_data = copy.deepcopy(data)
for i in tqdm(
range(0, len(kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t[i],
confounders_effect_on_outcome,
outcome_name,
kappa_y,
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(kappa_t, outcomes)
plt.axhline(estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Treatment")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
elif isinstance(kappa_y, (list, np.ndarray)):
outcomes = np.random.rand(len(kappa_y))
orig_data = copy.deepcopy(data)
for i in tqdm(
range(0, len(kappa_y)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t,
confounders_effect_on_outcome,
outcome_name,
kappa_y[i],
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(kappa_y, outcomes)
plt.axhline(estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Outcome")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
| andresmor-ms | 133e7b9a4ed32aae8ab5f39a01eb02b3a4d1c0ba | e1652ec3c6606b1bb2dfe91ef830e4b4b566712d | with the functional API, is there a way to ensure that any future refuter functions follow the same basic signature?
E.g., all refute functions should take `data`, `target_estimand` and `estimate`? I'm thinking of making it easier for new contributors to know the basic protocol that they need to adhere to.
something like `def refute_*(data, target_estimand, estimate, *args, **kwargs)` | amit-sharma | 309 |
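One possible way to pin that down (a sketch of one option, not something this PR implements) is a `typing.Protocol` that type checkers and contributors can check new refuter functions against:
```python
from typing import Protocol
import pandas as pd
from dowhy.causal_estimator import CausalEstimate
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.causal_refuter import CausalRefutation
class RefuterFunction(Protocol):
    """Shared signature every refute_* function is expected to follow."""
    def __call__(
        self,
        data: pd.DataFrame,
        target_estimand: IdentifiedEstimand,
        estimate: CausalEstimate,
        *args,
        **kwargs,
    ) -> CausalRefutation:
        ...
```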
py-why/dowhy | 672 | Functional api/refute estimate | * Refactor refuters into functions
* Rename functional_api notebook for clarity
* Add return types to identify_estimate
* Update `__init__.py` for imports
* Add joblib for bootstrap refuter
* Create `refute_estimate` function
* Add types for refuter parameters & return types | null | 2022-10-04 16:18:49+00:00 | 2022-10-07 04:30:22+00:00 | dowhy/causal_refuters/add_unobserved_common_cause.py | import copy
import logging
import math
import numpy as np
import pandas as pd
import scipy.stats
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from tqdm.auto import tqdm
import dowhy.causal_estimators.econml
from dowhy.causal_estimator import CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_estimators.regression_estimator import RegressionEstimator
from dowhy.causal_refuter import CausalRefutation, CausalRefuter
from dowhy.causal_refuters.evalue_sensitivity_analyzer import EValueSensitivityAnalyzer
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer
from dowhy.causal_refuters.non_parametric_sensitivity_analyzer import NonParametricSensitivityAnalyzer
from dowhy.causal_refuters.partial_linear_sensitivity_analyzer import PartialLinearSensitivityAnalyzer
class AddUnobservedCommonCause(CausalRefuter):
"""Add an unobserved confounder for refutation.
AddUnobservedCommonCause class supports three methods:
1) Simulation of an unobserved confounder
2) Linear partial R2 : Sensitivity Analysis for linear models.
    3) Non-Parametric partial R2 based : Sensitivity Analysis for non-parametric models.
Supports additional parameters that can be specified in the refute_estimate() method.
"""
def __init__(self, *args, **kwargs):
"""
Initialize the parameters required for the refuter.
For direct_simulation, if effect_strength_on_treatment or effect_strength_on_outcome is not
given, it is calculated automatically as a range between the
minimum and maximum effect strength of observed confounders on treatment
and outcome respectively.
:param simulation_method: The method to use for simulating effect of unobserved confounder. Possible values are ["direct-simulation", "linear-partial-R2", "non-parametric-partial-R2", "e-value"].
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param effect_strength_on_treatment: float, numpy.ndarray: [Used when simulation_method="direct-simulation"] Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param effect_strength_on_outcome: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param partial_r2_confounder_treatment: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param partial_r2_confounder_outcome: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
        :param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome in order to reduce the estimate by 100% i.e bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = False). (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
        :param significance_level: significance level for statistical inference (default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param num_splits: number of splits for cross validation. (default = 5). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_data : shuffle data or not before splitting into folds (default = False). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_random_seed: seed for randomly shuffling data. (relevant only for non-parametric-partial-R2 simulation method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
"""
super().__init__(*args, **kwargs)
self.simulation_method = kwargs["simulation_method"] if "simulation_method" in kwargs else "direct-simulation"
self.effect_on_t = (
kwargs["confounders_effect_on_treatment"] if "confounders_effect_on_treatment" in kwargs else "binary_flip"
)
self.effect_on_y = (
kwargs["confounders_effect_on_outcome"] if "confounders_effect_on_outcome" in kwargs else "linear"
)
if self.simulation_method == "direct-simulation":
self.kappa_t = kwargs["effect_strength_on_treatment"] if "effect_strength_on_treatment" in kwargs else None
self.kappa_y = kwargs["effect_strength_on_outcome"] if "effect_strength_on_outcome" in kwargs else None
elif self.simulation_method in ["linear-partial-R2", "non-parametric-partial-R2"]:
self.kappa_t = (
kwargs["partial_r2_confounder_treatment"] if "partial_r2_confounder_treatment" in kwargs else None
)
self.kappa_y = (
kwargs["partial_r2_confounder_outcome"] if "partial_r2_confounder_outcome" in kwargs else None
)
elif self.simulation_method == "e-value":
pass
else:
raise ValueError(
"simulation method is not supported. Try direct-simulation, linear-partial-R2, non-parametric-partial-R2, or e-value"
)
self.frac_strength_treatment = (
kwargs["effect_fraction_on_treatment"] if "effect_fraction_on_treatment" in kwargs else 1
)
self.frac_strength_outcome = (
kwargs["effect_fraction_on_outcome"] if "effect_fraction_on_outcome" in kwargs else 1
)
self.plotmethod = kwargs["plotmethod"] if "plotmethod" in kwargs else "colormesh"
self.percent_change_estimate = kwargs["percent_change_estimate"] if "percent_change_estimate" in kwargs else 1.0
self.significance_level = kwargs["significance_level"] if "significance_level" in kwargs else 0.05
self.confounder_increases_estimate = (
kwargs["confounder_increases_estimate"] if "confounder_increases_estimate" in kwargs else False
)
self.benchmark_common_causes = (
kwargs["benchmark_common_causes"] if "benchmark_common_causes" in kwargs else None
)
self.null_hypothesis_effect = kwargs["null_hypothesis_effect"] if "null_hypothesis_effect" in kwargs else 0
self.plot_estimate = kwargs["plot_estimate"] if "plot_estimate" in kwargs else True
self.num_splits = kwargs["num_splits"] if "num_splits" in kwargs else 5
self.shuffle_data = kwargs["shuffle_data"] if "shuffle_data" in kwargs else False
self.shuffle_random_seed = kwargs["shuffle_random_seed"] if "shuffle_random_seed" in kwargs else None
self.alpha_s_estimator_param_list = (
kwargs["alpha_s_estimator_param_list"] if "alpha_s_estimator_param_list" in kwargs else None
)
self.alpha_s_estimator_list = kwargs["alpha_s_estimator_list"] if "alpha_s_estimator_list" in kwargs else None
self.g_s_estimator_list = kwargs["g_s_estimator_list"] if "g_s_estimator_list" in kwargs else None
self.g_s_estimator_param_list = (
kwargs["g_s_estimator_param_list"] if "g_s_estimator_param_list" in kwargs else None
)
self.plugin_reisz = kwargs["plugin_reisz"] if "plugin_reisz" in kwargs else False
self.logger = logging.getLogger(__name__)
def infer_default_kappa_t(self, len_kappa_t=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_t."
+ " Provide a value for kappa_t"
)
t = self._data[self._treatment_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_t == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
tmodel = LogisticRegression().fit(observed_common_causes, t)
tpred = tmodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
tcap = tmodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(tcap - tpred)) / tpred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_t == "linear":
# Estimating the regression coefficient from standardized features to t
corrcoef_var_t = np.corrcoef(observed_common_causes, t, rowvar=False)[-1, :-1]
std_dev_t = np.std(t)[0]
max_coeff = max(corrcoef_var_t) * std_dev_t
min_coeff = min(corrcoef_var_t) * std_dev_t
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_treatment)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_t
self.logger.info("(Min, Max) kappa_t for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _compute_min_max_coeff(self, min_coeff, max_coeff, effect_strength_fraction):
max_coeff = effect_strength_fraction * max_coeff
min_coeff = effect_strength_fraction * min_coeff
return min_coeff, max_coeff
def infer_default_kappa_y(self, len_kappa_y=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_y."
+ " Provide a value for kappa_y"
)
y = self._data[self._outcome_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_y == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
ymodel = LogisticRegression().fit(observed_common_causes, y)
ypred = ymodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
ycap = ymodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(ycap - ypred)) / ypred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_y == "linear":
corrcoef_var_y = np.corrcoef(observed_common_causes, y, rowvar=False)[-1, :-1]
std_dev_y = np.std(y)[0]
max_coeff = max(corrcoef_var_y) * std_dev_y
min_coeff = min(corrcoef_var_y) * std_dev_y
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_outcome)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_y
self.logger.info("(Min, Max) kappa_y for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def refute_estimate(self, show_progress_bar=False):
"""
        This function attempts to add an unobserved common cause to the outcome and the treatment. At present, the behavior is implemented for one-dimensional continuous
        and binary variables. The function accepts either single-valued inputs or a range of inputs; it then inspects the data type of the input and decides on the course of
        action accordingly.
:return: CausalRefuter: An object that contains the estimated effect and a new effect and the name of the refutation used.
"""
if self.simulation_method == "linear-partial-R2":
if not (isinstance(self._estimate.estimator, LinearRegressionEstimator)):
raise NotImplementedError(
"Currently only LinearRegressionEstimator is supported for Sensitivity Analysis"
)
if len(self._estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
if self.frac_strength_outcome == 1:
self.frac_strength_outcome = self.frac_strength_treatment
analyzer = LinearSensitivityAnalyzer(
estimator=self._estimate.estimator,
data=self._data,
treatment_name=self._treatment_name,
percent_change_estimate=self.percent_change_estimate,
significance_level=self.significance_level,
benchmark_common_causes=self.benchmark_common_causes,
null_hypothesis_effect=self.null_hypothesis_effect,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
common_causes_order=self._estimate.estimator._observed_common_causes.columns,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.simulation_method == "non-parametric-partial-R2":
# If the estimator used is LinearDML, partially linear sensitivity analysis will be automatically chosen
if isinstance(self._estimate.estimator, dowhy.causal_estimators.econml.Econml):
if self._estimate.estimator._econml_methodname == "econml.dml.LinearDML":
analyzer = PartialLinearSensitivityAnalyzer(
estimator=self._estimate._estimator_object,
observed_common_causes=self._estimate.estimator._observed_common_causes,
treatment=self._estimate.estimator._treatment,
outcome=self._estimate.estimator._outcome,
alpha_s_estimator_param_list=self.alpha_s_estimator_param_list,
g_s_estimator_list=self.g_s_estimator_list,
g_s_estimator_param_list=self.g_s_estimator_param_list,
effect_strength_treatment=self.kappa_t,
effect_strength_outcome=self.kappa_y,
benchmark_common_causes=self.benchmark_common_causes,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
analyzer = NonParametricSensitivityAnalyzer(
estimator=self._estimate.estimator,
observed_common_causes=self._estimate.estimator._observed_common_causes,
treatment=self._estimate.estimator._treatment,
outcome=self._estimate.estimator._outcome,
alpha_s_estimator_list=self.alpha_s_estimator_list,
alpha_s_estimator_param_list=self.alpha_s_estimator_param_list,
g_s_estimator_list=self.g_s_estimator_list,
g_s_estimator_param_list=self.g_s_estimator_param_list,
effect_strength_treatment=self.kappa_t,
effect_strength_outcome=self.kappa_y,
benchmark_common_causes=self.benchmark_common_causes,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
theta_s=self._estimate.value,
plugin_reisz=self.plugin_reisz,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.simulation_method == "e-value":
if not isinstance(self._estimate.estimator, RegressionEstimator):
raise NotImplementedError(
"E-Value sensitivity analysis is currently only implemented RegressionEstimator."
)
if len(self._estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
analyzer = EValueSensitivityAnalyzer(
estimate=self._estimate,
estimand=self._target_estimand,
data=self._data,
treatment_name=self._treatment_name[0],
outcome_name=self._outcome_name[0],
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.kappa_t is None:
self.kappa_t = self.infer_default_kappa_t()
if self.kappa_y is None:
self.kappa_y = self.infer_default_kappa_y()
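        # Three cases follow: both kappas scalar -> a single refutation; both arrays -> a 2D grid
        # of re-estimates plotted as a contour/colormesh; exactly one array -> a 1D sweep plotted
        # as a line.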
if not isinstance(self.kappa_t, (list, np.ndarray)) and not isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with single value inputs
new_data = copy.deepcopy(self._data)
new_data = self.include_confounders_effect(new_data, self.kappa_t, self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(new_data, self._target_estimand, self._estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
refute.new_effect_array = np.array(new_effect.value)
refute.new_effect = new_effect.value
refute.add_refuter(self)
return refute
else: # Deal with multiple value inputs
if isinstance(self.kappa_t, (list, np.ndarray)) and isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with range inputs
# Get a 2D matrix of values
# x,y = np.meshgrid(self.kappa_t, self.kappa_y) # x,y are both MxN
results_matrix = np.random.rand(
len(self.kappa_t), len(self.kappa_y)
) # Matrix to hold all the results of NxM
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
for j in range(len(self.kappa_y)):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y[j])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value,
new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause",
)
results_matrix[i][j] = refute.new_effect # Populate the results
refute.new_effect_array = results_matrix
refute.new_effect = (np.min(results_matrix), np.max(results_matrix))
# Store the values into the refute object
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
oe = self._estimate.value
contour_levels = [oe / 4.0, oe / 2.0, (3.0 / 4) * oe, oe]
contour_levels.extend([0, np.min(results_matrix), np.max(results_matrix)])
if self.plotmethod == "contour":
cp = plt.contourf(self.kappa_y, self.kappa_t, results_matrix, levels=sorted(contour_levels))
# Adding a label on the contour line for the original estimate
fmt = {}
trueeffect_index = np.where(cp.levels == oe)[0][0]
fmt[cp.levels[trueeffect_index]] = "Estimated Effect"
# Label every other level using strings
plt.clabel(cp, [cp.levels[trueeffect_index]], inline=True, fmt=fmt)
plt.colorbar(cp)
elif self.plotmethod == "colormesh":
cp = plt.pcolormesh(self.kappa_y, self.kappa_t, results_matrix, shading="nearest")
plt.colorbar(cp, ticks=contour_levels)
ax.yaxis.set_ticks(self.kappa_t)
ax.xaxis.set_ticks(self.kappa_y)
plt.xticks(rotation=45)
ax.set_title("Effect of Unobserved Common Cause")
ax.set_ylabel("Value of Linear Constant on Treatment")
ax.set_xlabel("Value of Linear Constant on Outcome")
plt.show()
return refute
elif isinstance(self.kappa_t, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_t))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_t, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Treatment")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
elif isinstance(self.kappa_y, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_y))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_y)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t, self.kappa_y[i])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_y, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Outcome")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
def include_confounders_effect(self, new_data, kappa_t, kappa_y):
"""
This function deals with the change in the value of the data due to the effect of the unobserved confounder.
In the case of a binary flip, we flip only if the random number is greater than the threshold set.
In the case of a linear effect, we use the variable as the linear regression constant.
:param new_data: pandas.DataFrame: The data to be changed due to the effects of the unobserved confounder.
:param kappa_t: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:param kappa_y: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:return: pandas.DataFrame: The DataFrame that includes the effects of the unobserved confounder.
"""
num_rows = self._data.shape[0]
stdnorm = scipy.stats.norm()
w_random = stdnorm.rvs(num_rows)
if self.effect_on_t == "binary_flip":
alpha = 2 * kappa_t - 1 if kappa_t >= 0.5 else 1 - 2 * kappa_t
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_t >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._treatment_name] = (
1 - new_data.loc[rel_interval <= w_random, self._treatment_name]
)
for tname in self._treatment_name:
if pd.api.types.is_bool_dtype(self._data[tname]):
new_data = new_data.astype({tname: "bool"}, copy=False)
elif self.effect_on_t == "linear":
confounder_t_effect = kappa_t * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._treatment_name] = new_data[self._treatment_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_t_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
if self.effect_on_y == "binary_flip":
alpha = 2 * kappa_y - 1 if kappa_y >= 0.5 else 1 - 2 * kappa_y
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_y >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._outcome_name] = (
1 - new_data.loc[rel_interval <= w_random, self._outcome_name]
)
for yname in self._outcome_name:
if pd.api.types.is_bool_dtype(self._data[yname]):
new_data = new_data.astype({yname: "bool"}, copy=False)
elif self.effect_on_y == "linear":
confounder_y_effect = (-1) * kappa_y * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._outcome_name] = new_data[self._outcome_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_y_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
return new_data
def include_simulated_confounder(self, convergence_threshold=0.1, c_star_max=1000):
"""
This function simulates an unobserved confounder based on the data using the following steps:
1. It calculates the "residuals" from the treatment and outcome model
i.) The outcome model has outcome as the dependent variable and all the observed variables including treatment as independent variables
ii.) The treatment model has treatment as the dependent variable and all the observed variables as independent variables.
2. U is an intermediate random variable drawn from the normal distribution with the weighted average of residuals as mean and a unit variance
U ~ N(c1*d_y + c2*d_t, 1)
where
*d_y and d_t are residuals from the treatment and outcome model
*c1 and c2 are coefficients to the residuals
3. The final U, which is the simulated unobserved confounder is obtained by debiasing the intermediate variable U by residualising it with X
Choosing the coefficients c1 and c2:
The coefficients are chosen based on these basic assumptions:
1. There is a hyperbolic relationship satisfying c1*c2 = c_star
2. c_star is chosen from a range of possible values based on the correlation of the obtained simulated variable with outcome and treatment.
3. The product of correlations with treatment and outcome should be at a minimum distance to the maximum correlations with treatment and outcome in any of the observed confounders
4. The ratio of the weights should be such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
:param c_star_max: The maximum possible value for the hyperbolic curve on which the coefficients to the residuals lie. It defaults to 1000 in the code if not specified by the user.
:type int
:param convergence_threshold: The threshold to check the plateauing of the correlation while selecting a c_star. It defaults to 0.1 in the code if not specified by the user
:type float
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
# Obtaining the list of observed variables
required_variables = True
observed_variables = self.choose_variables(required_variables)
observed_variables_with_treatment_and_outcome = observed_variables + self._treatment_name + self._outcome_name
# Taking a subset of the dataframe that has only observed variables
self._data = self._data[observed_variables_with_treatment_and_outcome]
# Residuals from the outcome model obtained by fitting a linear model
y = self._data[self._outcome_name[0]]
observed_variables_with_treatment = observed_variables + self._treatment_name
X = self._data[observed_variables_with_treatment]
model = sm.OLS(y, X.astype("float"))
results = model.fit()
residuals_y = y - results.fittedvalues
d_y = list(pd.Series(residuals_y))
# Residuals from the treatment model obtained by fitting a linear model
t = self._data[self._treatment_name[0]].astype("int64")
X = self._data[observed_variables]
model = sm.OLS(t, X)
results = model.fit()
residuals_t = t - results.fittedvalues
d_t = list(pd.Series(residuals_t))
# Initialising product_cor_metric_observed with a really low value as finding maximum
product_cor_metric_observed = -10000000000
for i in observed_variables:
current_obs_confounder = self._data[i]
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_obs_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_obs_confounder.corr(treatment_values)
product_cor_metric_current = correlation_y * correlation_t
if product_cor_metric_current >= product_cor_metric_observed:
product_cor_metric_observed = product_cor_metric_current
correlation_t_observed = correlation_t
correlation_y_observed = correlation_y
        # The user has an option to give the effect_strength_on_y and effect_strength_on_t, which can then be used instead of the maximum correlation with treatment and outcome in the observed variables, as they specify the desired effect.
if self.kappa_t is not None:
correlation_t_observed = self.kappa_t
if self.kappa_y is not None:
correlation_y_observed = self.kappa_y
# Choosing a c_star based on the data.
        # The correlations stop increasing after a certain value of c_star, i.e., they plateau, and we choose c_star to be the value at which they plateau.
correlation_y_list = []
correlation_t_list = []
product_cor_metric_simulated_list = []
x_list = []
step = int(c_star_max / 10)
for i in range(0, int(c_star_max), step):
c1 = math.sqrt(i)
c2 = c1
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
correlation_y_list.append(correlation_y)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
correlation_t_list.append(correlation_t)
product_cor_metric_simulated = correlation_y * correlation_t
product_cor_metric_simulated_list.append(product_cor_metric_simulated)
x_list.append(i)
index = 1
while index < len(correlation_y_list):
if (correlation_y_list[index] - correlation_y_list[index - 1]) <= convergence_threshold:
c_star = x_list[index]
break
index = index + 1
        # Once c_star is chosen, c1 and c2 are selected on the hyperbola c1*c2 = c_star by going over various combinations of c1 and c2 values and choosing the combination
        # that maintains the minimum distance between the product of correlations of the simulated variable and the product of maximum correlations of one of the observed variables,
        # and that additionally keeps the ratio of the weights within some confidence interval of the ratio of the maximum possible observed coefficients
# c1_final and c2_final are initialised to the values on the hyperbolic curve such that c1_final = c2_final and c1_final*c2_final = c_star
c1_final = math.sqrt(c_star)
c2_final = math.sqrt(c_star)
# initialising min_distance_between_product_cor_metrics to be a value greater than 1
min_distance_between_product_cor_metrics = 1.5
i = 0.05
threshold = c_star / 0.05
while i <= threshold:
c2 = i
c1 = c_star / c2
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
product_cor_metric_simulated = correlation_y * correlation_t
if min_distance_between_product_cor_metrics >= abs(
product_cor_metric_simulated - product_cor_metric_observed
):
min_distance_between_product_cor_metrics = abs(
product_cor_metric_simulated - product_cor_metric_observed
)
additional_condition = correlation_y_observed / correlation_t_observed
if ((c1 / c2) <= (additional_condition + 0.3 * additional_condition)) and (
(c1 / c2) >= (additional_condition - 0.3 * additional_condition)
): # choose minimum positive value
c1_final = c1
c2_final = c2
i = i * 1.5
"""#closed form solution
print("c_star_max before closed form", c_star_max)
if max_correlation_with_t == -1000:
c2 = 0
c1 = c_star_max
else:
additional_condition = abs(max_correlation_with_y/max_correlation_with_t)
print("additional_condition", additional_condition)
c2 = math.sqrt(c_star_max/additional_condition)
c1 = c_star_max/c2"""
final_U = self.generate_confounder_from_residuals(c1_final, c2_final, d_y, d_t, X)
return final_U
def generate_confounder_from_residuals(self, c1, c2, d_y, d_t, X):
"""
This function takes the residuals from the treatment and outcome model and their coefficients and simulates the intermediate random variable U by taking
the row wise normal distribution corresponding to each residual value and then debiasing the intermediate variable to get the final variable.
:param c1: coefficient to the residual from the outcome model
:type float
:param c2: coefficient to the residual from the treatment model
:type float
:param d_y: residuals from the outcome model
:type list
:param d_t: residuals from the treatment model
:type list
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
U = []
for j in range(len(d_t)):
simulated_variable_mean = c1 * d_y[j] + c2 * d_t[j]
simulated_variable_stddev = 1
U.append(np.random.normal(simulated_variable_mean, simulated_variable_stddev, 1))
U = np.array(U)
model = sm.OLS(U, X)
results = model.fit()
U = U.reshape(
-1,
)
final_U = U - results.fittedvalues.values
        # Return the debiased confounder (U residualised on X), as described in the docstring
        final_U = pd.Series(final_U)
return final_U
| import copy
import logging
import math
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import scipy.stats
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from tqdm.auto import tqdm
import dowhy.causal_estimators.econml
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_estimators.regression_estimator import RegressionEstimator
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.causal_refuter import CausalRefutation, CausalRefuter, choose_variables
from dowhy.causal_refuters.evalue_sensitivity_analyzer import EValueSensitivityAnalyzer
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer
from dowhy.causal_refuters.non_parametric_sensitivity_analyzer import NonParametricSensitivityAnalyzer
from dowhy.causal_refuters.partial_linear_sensitivity_analyzer import PartialLinearSensitivityAnalyzer
logger = logging.getLogger(__name__)
class AddUnobservedCommonCause(CausalRefuter):
"""Add an unobserved confounder for refutation.
    AddUnobservedCommonCause class supports four methods:
        1) Simulation of an unobserved confounder
        2) Linear partial R2: Sensitivity Analysis for linear models.
        3) Non-Parametric partial R2 based: Sensitivity Analysis for non-parametric models.
        4) E-value: Computes the E-value, the minimum strength of association an unobserved confounder would need with both treatment and outcome to explain away the estimate.
Supports additional parameters that can be specified in the refute_estimate() method.
"""
def __init__(self, *args, **kwargs):
"""
Initialize the parameters required for the refuter.
        For direct-simulation, if effect_strength_on_treatment or effect_strength_on_outcome is not
given, it is calculated automatically as a range between the
minimum and maximum effect strength of observed confounders on treatment
and outcome respectively.
:param simulation_method: The method to use for simulating effect of unobserved confounder. Possible values are ["direct-simulation", "linear-partial-R2", "non-parametric-partial-R2", "e-value"].
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
        :param effect_strength_on_treatment: float, numpy.ndarray: [Used when simulation_method="direct-simulation"] Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When confounders_effect_on_treatment is binary_flip, it is the probability with which the unobserved confounder can invert the value of the treatment.
:param effect_strength_on_outcome: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
        :param partial_r2_confounder_treatment: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the Riesz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of the Riesz representer, alpha^2, based on observed confounders and that based on all confounders.
:param partial_r2_confounder_outcome: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
        :param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
            If percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome needed to reduce the estimate by 100%, i.e., bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = False). (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
        :param significance_level: significance level for statistical inference (default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param num_splits: number of splits for cross validation. (default = 5). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_data : shuffle data or not before splitting into folds (default = False). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_random_seed: seed for randomly shuffling data. (relevant only for non-parametric-partial-R2 simulation method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
"""
super().__init__(*args, **kwargs)
self.simulation_method = kwargs["simulation_method"] if "simulation_method" in kwargs else "direct-simulation"
self.effect_on_t = (
kwargs["confounders_effect_on_treatment"] if "confounders_effect_on_treatment" in kwargs else "binary_flip"
)
self.effect_on_y = (
kwargs["confounders_effect_on_outcome"] if "confounders_effect_on_outcome" in kwargs else "linear"
)
if self.simulation_method == "direct-simulation":
self.kappa_t = kwargs["effect_strength_on_treatment"] if "effect_strength_on_treatment" in kwargs else None
self.kappa_y = kwargs["effect_strength_on_outcome"] if "effect_strength_on_outcome" in kwargs else None
elif self.simulation_method in ["linear-partial-R2", "non-parametric-partial-R2"]:
self.kappa_t = (
kwargs["partial_r2_confounder_treatment"] if "partial_r2_confounder_treatment" in kwargs else None
)
self.kappa_y = (
kwargs["partial_r2_confounder_outcome"] if "partial_r2_confounder_outcome" in kwargs else None
)
elif self.simulation_method == "e-value":
pass
else:
raise ValueError(
"simulation method is not supported. Try direct-simulation, linear-partial-R2, non-parametric-partial-R2, or e-value"
)
self.frac_strength_treatment = (
kwargs["effect_fraction_on_treatment"] if "effect_fraction_on_treatment" in kwargs else 1
)
self.frac_strength_outcome = (
kwargs["effect_fraction_on_outcome"] if "effect_fraction_on_outcome" in kwargs else 1
)
self.plotmethod = kwargs["plotmethod"] if "plotmethod" in kwargs else "colormesh"
self.percent_change_estimate = kwargs["percent_change_estimate"] if "percent_change_estimate" in kwargs else 1.0
self.significance_level = kwargs["significance_level"] if "significance_level" in kwargs else 0.05
self.confounder_increases_estimate = (
kwargs["confounder_increases_estimate"] if "confounder_increases_estimate" in kwargs else False
)
self.benchmark_common_causes = (
kwargs["benchmark_common_causes"] if "benchmark_common_causes" in kwargs else None
)
self.null_hypothesis_effect = kwargs["null_hypothesis_effect"] if "null_hypothesis_effect" in kwargs else 0
self.plot_estimate = kwargs["plot_estimate"] if "plot_estimate" in kwargs else True
self.num_splits = kwargs["num_splits"] if "num_splits" in kwargs else 5
self.shuffle_data = kwargs["shuffle_data"] if "shuffle_data" in kwargs else False
self.shuffle_random_seed = kwargs["shuffle_random_seed"] if "shuffle_random_seed" in kwargs else None
self.alpha_s_estimator_param_list = (
kwargs["alpha_s_estimator_param_list"] if "alpha_s_estimator_param_list" in kwargs else None
)
self.alpha_s_estimator_list = kwargs["alpha_s_estimator_list"] if "alpha_s_estimator_list" in kwargs else None
self.g_s_estimator_list = kwargs["g_s_estimator_list"] if "g_s_estimator_list" in kwargs else None
self.g_s_estimator_param_list = (
kwargs["g_s_estimator_param_list"] if "g_s_estimator_param_list" in kwargs else None
)
self.plugin_reisz = kwargs["plugin_reisz"] if "plugin_reisz" in kwargs else False
self.logger = logging.getLogger(__name__)
def refute_estimate(self, show_progress_bar=False):
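        """Run the sensitivity analysis selected via simulation_method.

        Delegates to the corresponding module-level sensitivity function; for direct
        simulation, the refuter attaches itself to the returned refutation object.
        """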
if self.simulation_method == "linear-partial-R2":
return sensitivity_linear_partial_r2(
self._data,
self._estimate,
self._treatment_name,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.percent_change_estimate,
self.benchmark_common_causes,
self.significance_level,
self.null_hypothesis_effect,
self.plot_estimate,
)
elif self.simulation_method == "non-parametric-partial-R2":
return sensitivity_non_parametric_partial_r2(
self._estimate,
self.kappa_t,
self.kappa_y,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.benchmark_common_causes,
self.plot_estimate,
self.alpha_s_estimator_list,
self.alpha_s_estimator_param_list,
self.g_s_estimator_list,
self.g_s_estimator_param_list,
self.plugin_reisz,
)
elif self.simulation_method == "e-value":
return sensitivity_e_value(
self._data,
self._target_estimand,
self._estimate,
self._treatment_name,
self._outcome_name,
self.plot_estimate,
)
elif self.simulation_method == "direct-simulation":
refute = sensitivity_simulation(
self._data,
self._target_estimand,
self._estimate,
self._treatment_name,
self._outcome_name,
self.kappa_t,
self.kappa_y,
self.effect_on_t,
self.effect_on_y,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.plotmethod,
show_progress_bar,
)
refute.add_refuter(self)
return refute
def _infer_default_kappa_t(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
treatment_name: List[str],
effect_on_t: str,
frac_strength_treatment: float,
len_kappa_t: int = 10,
):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_t."
+ " Provide a value for kappa_t"
)
t = data[treatment_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if effect_on_t == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
tmodel = LogisticRegression().fit(observed_common_causes, t)
tpred = tmodel.predict(observed_common_causes).astype(int)
flips = []
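        # Zero out one standardized confounder at a time; the fraction of predictions that
        # change is used as that confounder's effect strength on the treatment.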
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
tcap = tmodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(tcap - tpred)) / tpred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif effect_on_t == "linear":
# Estimating the regression coefficient from standardized features to t
corrcoef_var_t = np.corrcoef(observed_common_causes, t, rowvar=False)[-1, :-1]
std_dev_t = np.std(t)[0]
max_coeff = max(corrcoef_var_t) * std_dev_t
min_coeff = min(corrcoef_var_t) * std_dev_t
else:
raise NotImplementedError("'" + effect_on_t + "' method not supported for confounders' effect on treatment")
min_coeff, max_coeff = _compute_min_max_coeff(min_coeff, max_coeff, frac_strength_treatment)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_t
logger.info("(Min, Max) kappa_t for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _compute_min_max_coeff(min_coeff: float, max_coeff: float, effect_strength_fraction: np.ndarray):
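    """Scale the inferred coefficient range by the user-specified effect-strength fraction."""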
max_coeff = effect_strength_fraction * max_coeff
min_coeff = effect_strength_fraction * min_coeff
return min_coeff, max_coeff
def _infer_default_kappa_y(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
outcome_name: List[str],
effect_on_y: str,
frac_strength_outcome: float,
len_kappa_y: int = 10,
):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_y."
+ " Provide a value for kappa_y"
)
y = data[outcome_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if effect_on_y == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
ymodel = LogisticRegression().fit(observed_common_causes, y)
ypred = ymodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
ycap = ymodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(ycap - ypred)) / ypred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif effect_on_y == "linear":
corrcoef_var_y = np.corrcoef(observed_common_causes, y, rowvar=False)[-1, :-1]
std_dev_y = np.std(y)[0]
max_coeff = max(corrcoef_var_y) * std_dev_y
min_coeff = min(corrcoef_var_y) * std_dev_y
else:
raise NotImplementedError("'" + effect_on_y + "' method not supported for confounders' effect on outcome")
min_coeff, max_coeff = _compute_min_max_coeff(min_coeff, max_coeff, frac_strength_outcome)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_y
logger.info("(Min, Max) kappa_y for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _include_confounders_effect(
data: pd.DataFrame,
new_data: pd.DataFrame,
effect_on_t: str,
treatment_name: str,
kappa_t: float,
effect_on_y: str,
outcome_name: str,
kappa_y: float,
):
"""
This function deals with the change in the value of the data due to the effect of the unobserved confounder.
In the case of a binary flip, we flip only if the random number is greater than the threshold set.
In the case of a linear effect, we use the variable as the linear regression constant.
:param new_data: pandas.DataFrame: The data to be changed due to the effects of the unobserved confounder.
:param kappa_t: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:param kappa_y: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:return: pandas.DataFrame: The DataFrame that includes the effects of the unobserved confounder.
"""
num_rows = data.shape[0]
stdnorm = scipy.stats.norm()
w_random = stdnorm.rvs(num_rows)
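    # A single standard-normal draw per row acts as the simulated confounder; using the same
    # draw to perturb both treatment and outcome is what induces the confounding correlation.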
if effect_on_t == "binary_flip":
alpha = 2 * kappa_t - 1 if kappa_t >= 0.5 else 1 - 2 * kappa_t
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_t >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, treatment_name] = (
1 - new_data.loc[rel_interval <= w_random, treatment_name]
)
for tname in treatment_name:
if pd.api.types.is_bool_dtype(data[tname]):
new_data = new_data.astype({tname: "bool"}, copy=False)
elif effect_on_t == "linear":
confounder_t_effect = kappa_t * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[treatment_name] = new_data[treatment_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_t_effect
)
else:
raise NotImplementedError("'" + effect_on_t + "' method not supported for confounders' effect on treatment")
if effect_on_y == "binary_flip":
alpha = 2 * kappa_y - 1 if kappa_y >= 0.5 else 1 - 2 * kappa_y
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_y >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, outcome_name] = 1 - new_data.loc[rel_interval <= w_random, outcome_name]
for yname in outcome_name:
if pd.api.types.is_bool_dtype(data[yname]):
new_data = new_data.astype({yname: "bool"}, copy=False)
elif effect_on_y == "linear":
confounder_y_effect = (-1) * kappa_y * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[outcome_name] = new_data[outcome_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_y_effect
)
else:
raise NotImplementedError("'" + effect_on_y + "' method not supported for confounders' effect on outcome")
return new_data
def include_simulated_confounder(
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
kappa_t: float,
kappa_y: float,
convergence_threshold: float = 0.1,
c_star_max: int = 1000,
):
"""
This function simulates an unobserved confounder based on the data using the following steps:
1. It calculates the "residuals" from the treatment and outcome model
i.) The outcome model has outcome as the dependent variable and all the observed variables including treatment as independent variables
ii.) The treatment model has treatment as the dependent variable and all the observed variables as independent variables.
2. U is an intermediate random variable drawn from the normal distribution with the weighted average of residuals as mean and a unit variance
U ~ N(c1*d_y + c2*d_t, 1)
where
*d_y and d_t are residuals from the treatment and outcome model
*c1 and c2 are coefficients to the residuals
3. The final U, which is the simulated unobserved confounder is obtained by debiasing the intermediate variable U by residualising it with X
Choosing the coefficients c1 and c2:
The coefficients are chosen based on these basic assumptions:
1. There is a hyperbolic relationship satisfying c1*c2 = c_star
2. c_star is chosen from a range of possible values based on the correlation of the obtained simulated variable with outcome and treatment.
3. The product of correlations with treatment and outcome should be at a minimum distance to the maximum correlations with treatment and outcome in any of the observed confounders
4. The ratio of the weights should be such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
:param c_star_max: The maximum possible value for the hyperbolic curve on which the coefficients to the residuals lie. It defaults to 1000 in the code if not specified by the user.
:type int
:param convergence_threshold: The threshold to check the plateauing of the correlation while selecting a c_star. It defaults to 0.1 in the code if not specified by the user
:type float
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
# Obtaining the list of observed variables
required_variables = True
observed_variables = choose_variables(required_variables)
observed_variables_with_treatment_and_outcome = observed_variables + treatment_name + outcome_name
# Taking a subset of the dataframe that has only observed variables
data = data[observed_variables_with_treatment_and_outcome]
# Residuals from the outcome model obtained by fitting a linear model
y = data[outcome_name[0]]
observed_variables_with_treatment = observed_variables + treatment_name
X = data[observed_variables_with_treatment]
model = sm.OLS(y, X.astype("float"))
results = model.fit()
residuals_y = y - results.fittedvalues
d_y = list(pd.Series(residuals_y))
# Residuals from the treatment model obtained by fitting a linear model
t = data[treatment_name[0]].astype("int64")
X = data[observed_variables]
model = sm.OLS(t, X)
results = model.fit()
residuals_t = t - results.fittedvalues
d_t = list(pd.Series(residuals_t))
# Initialising product_cor_metric_observed with a really low value as finding maximum
product_cor_metric_observed = -10000000000
for i in observed_variables:
current_obs_confounder = data[i]
outcome_values = data[outcome_name[0]]
correlation_y = current_obs_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_obs_confounder.corr(treatment_values)
product_cor_metric_current = correlation_y * correlation_t
if product_cor_metric_current >= product_cor_metric_observed:
product_cor_metric_observed = product_cor_metric_current
correlation_t_observed = correlation_t
correlation_y_observed = correlation_y
    # The user has an option to give the effect_strength_on_y and effect_strength_on_t, which can then be used instead of the maximum correlation with treatment and outcome in the observed variables, as they specify the desired effect.
if kappa_t is not None:
correlation_t_observed = kappa_t
if kappa_y is not None:
correlation_y_observed = kappa_y
# Choosing a c_star based on the data.
    # The correlations stop increasing after a certain value of c_star, i.e., they plateau, and we choose c_star to be the value at which they plateau.
correlation_y_list = []
correlation_t_list = []
product_cor_metric_simulated_list = []
x_list = []
step = int(c_star_max / 10)
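    # Sweep candidate c_star values along the curve c1 == c2 == sqrt(c_star) and record how
    # strongly the resulting simulated confounder correlates with treatment and outcome.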
for i in range(0, int(c_star_max), step):
c1 = math.sqrt(i)
c2 = c1
final_U = _generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = data[outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
correlation_y_list.append(correlation_y)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
correlation_t_list.append(correlation_t)
product_cor_metric_simulated = correlation_y * correlation_t
product_cor_metric_simulated_list.append(product_cor_metric_simulated)
x_list.append(i)
index = 1
while index < len(correlation_y_list):
if (correlation_y_list[index] - correlation_y_list[index - 1]) <= convergence_threshold:
c_star = x_list[index]
break
index = index + 1
    # Once c_star is chosen, c1 and c2 are selected on the hyperbola c1*c2 = c_star by going over various combinations of c1 and c2 values and choosing the combination
    # that maintains the minimum distance between the product of correlations of the simulated variable and the product of maximum correlations of one of the observed variables,
    # and that additionally keeps the ratio of the weights within some confidence interval of the ratio of the maximum possible observed coefficients
# c1_final and c2_final are initialised to the values on the hyperbolic curve such that c1_final = c2_final and c1_final*c2_final = c_star
c1_final = math.sqrt(c_star)
c2_final = math.sqrt(c_star)
# initialising min_distance_between_product_cor_metrics to be a value greater than 1
min_distance_between_product_cor_metrics = 1.5
i = 0.05
threshold = c_star / 0.05
while i <= threshold:
c2 = i
c1 = c_star / c2
final_U = _generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = data[outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
product_cor_metric_simulated = correlation_y * correlation_t
if min_distance_between_product_cor_metrics >= abs(product_cor_metric_simulated - product_cor_metric_observed):
min_distance_between_product_cor_metrics = abs(product_cor_metric_simulated - product_cor_metric_observed)
additional_condition = correlation_y_observed / correlation_t_observed
if ((c1 / c2) <= (additional_condition + 0.3 * additional_condition)) and (
(c1 / c2) >= (additional_condition - 0.3 * additional_condition)
): # choose minimum positive value
c1_final = c1
c2_final = c2
i = i * 1.5
"""#closed form solution
print("c_star_max before closed form", c_star_max)
if max_correlation_with_t == -1000:
c2 = 0
c1 = c_star_max
else:
additional_condition = abs(max_correlation_with_y/max_correlation_with_t)
print("additional_condition", additional_condition)
c2 = math.sqrt(c_star_max/additional_condition)
c1 = c_star_max/c2"""
final_U = _generate_confounder_from_residuals(c1_final, c2_final, d_y, d_t, X)
return final_U
def _generate_confounder_from_residuals(c1, c2, d_y, d_t, X):
"""
    This function takes the residuals from the treatment and outcome models together with their coefficients, simulates the intermediate random variable U by drawing,
    for each row, from a normal distribution centred at the weighted residuals, and then debiases the intermediate variable to obtain the final confounder.
:param c1: coefficient to the residual from the outcome model
:type float
:param c2: coefficient to the residual from the treatment model
:type float
:param d_y: residuals from the outcome model
:type list
:param d_t: residuals from the treatment model
:type list
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
U = []
for j in range(len(d_t)):
simulated_variable_mean = c1 * d_y[j] + c2 * d_t[j]
simulated_variable_stddev = 1
U.append(np.random.normal(simulated_variable_mean, simulated_variable_stddev, 1))
U = np.array(U)
model = sm.OLS(U, X)
results = model.fit()
U = U.reshape(
-1,
)
    # Debias U by removing the part explained by the observed variables
    final_U = U - results.fittedvalues.values
    final_U = pd.Series(final_U)
return final_U
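# Editor's sketch (hypothetical helper, not part of the library API): a compact,
# self-contained illustration of the two steps implemented above -- draw
# U_j ~ N(c1*d_y_j + c2*d_t_j, 1) around the weighted residuals, then residualise
# U on the observed variables X to debias it. Variable names and values are
# arbitrary and chosen only for illustration.
def _example_confounder_from_residuals_sketch():
    rng = np.random.default_rng(0)
    n = 200
    X_obs = pd.DataFrame({"W0": rng.normal(size=n), "W1": rng.normal(size=n)})
    d_y = rng.normal(size=n)  # stand-in for outcome-model residuals
    d_t = rng.normal(size=n)  # stand-in for treatment-model residuals
    c1, c2 = 2.0, 0.5
    # Step 1: intermediate confounder centred at the weighted residuals.
    U = rng.normal(loc=c1 * d_y + c2 * d_t, scale=1.0)
    # Step 2: debias U by removing the part explained by the observed variables.
    fitted = sm.OLS(U, X_obs).fit().fittedvalues
    return pd.Series(U - fitted.values)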
def sensitivity_linear_partial_r2(
data: pd.DataFrame,
estimate: CausalEstimate,
treatment_name: str,
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
percent_change_estimate: float = 1.0,
benchmark_common_causes: Optional[List[str]] = None,
significance_level: Optional[float] = None,
null_hypothesis_effect: Optional[float] = None,
plot_estimate: bool = True,
) -> LinearSensitivityAnalyzer:
"""Add an unobserved confounder for refutation using Linear partial R2 methond (Sensitivity Analysis for linear models).
:param data: pd.DataFrame: Data to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param treatment_name: str: Name of the treatment
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
    :param percent_change_estimate: The fraction of the treatment estimate whose reduction could alter the results (default = 1).
        If percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome needed to reduce the estimate by 100%, i.e. bring it down to 0.
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param significance_level: confidence interval for statistical inference(default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
"""
if not (isinstance(estimate.estimator, LinearRegressionEstimator)):
raise NotImplementedError("Currently only LinearRegressionEstimator is supported for Sensitivity Analysis")
if len(estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
if frac_strength_outcome == 1:
frac_strength_outcome = frac_strength_treatment
analyzer = LinearSensitivityAnalyzer(
estimator=estimate.estimator,
data=data,
treatment_name=treatment_name,
percent_change_estimate=percent_change_estimate,
significance_level=significance_level,
benchmark_common_causes=benchmark_common_causes,
null_hypothesis_effect=null_hypothesis_effect,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
common_causes_order=estimate.estimator._observed_common_causes.columns,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
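# Editor's sketch (illustrative only): a minimal end-to-end call of
# sensitivity_linear_partial_r2 on a synthetic dataset. The dataset helper,
# the benchmark variable name "W0" and the parameter values below are
# assumptions made for this example, not requirements of the function.
def _example_sensitivity_linear_partial_r2():
    import dowhy.datasets
    from dowhy import CausalModel
    sim = dowhy.datasets.linear_dataset(
        beta=10, num_common_causes=3, num_samples=500, treatment_is_binary=True
    )
    model = CausalModel(
        data=sim["df"],
        treatment=sim["treatment_name"],
        outcome=sim["outcome_name"],
        graph=sim["gml_graph"],
    )
    estimand = model.identify_effect(proceed_when_unidentifiable=True)
    # A linear regression estimate is required by this sensitivity method.
    estimate = model.estimate_effect(estimand, method_name="backdoor.linear_regression")
    return sensitivity_linear_partial_r2(
        data=sim["df"],
        estimate=estimate,
        treatment_name=sim["treatment_name"],
        benchmark_common_causes=["W0"],
        significance_level=0.05,
        plot_estimate=False,
    )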
def sensitivity_non_parametric_partial_r2(
estimate: CausalEstimate,
kappa_t: Optional[Union[float, np.ndarray]] = None,
kappa_y: Optional[Union[float, np.ndarray]] = None,
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
benchmark_common_causes: Optional[List[str]] = None,
plot_estimate: bool = True,
alpha_s_estimator_list: Optional[List] = None,
alpha_s_estimator_param_list: Optional[List[Dict]] = None,
g_s_estimator_list: Optional[List] = None,
g_s_estimator_param_list: Optional[List[Dict]] = None,
plugin_reisz: bool = False,
) -> Union[PartialLinearSensitivityAnalyzer, NonParametricSensitivityAnalyzer]:
"""Add an unobserved confounder for refutation using Non-parametric partial R2 methond (Sensitivity Analysis for non-parametric models).
:param estimate: CausalEstimate: Estimate to run the refutation
:param kappa_t: float, numpy.ndarray: Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param kappa_y: float, numpy.ndarray: Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param alpha_s_estimator_list: list of estimator objects for estimating alpha_s. These objects should have fit() and predict() methods (relevant only for non-parametric-partial-R2 method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
    :param plugin_reisz: bool: Whether to use the plugin estimator or the nonparametric estimator for the reisz representer function (alpha_s).
"""
# If the estimator used is LinearDML, partially linear sensitivity analysis will be automatically chosen
if isinstance(estimate.estimator, dowhy.causal_estimators.econml.Econml):
if estimate.estimator._econml_methodname == "econml.dml.LinearDML":
analyzer = PartialLinearSensitivityAnalyzer(
estimator=estimate._estimator_object,
observed_common_causes=estimate.estimator._observed_common_causes,
treatment=estimate.estimator._treatment,
outcome=estimate.estimator._outcome,
alpha_s_estimator_param_list=alpha_s_estimator_param_list,
g_s_estimator_list=g_s_estimator_list,
g_s_estimator_param_list=g_s_estimator_param_list,
effect_strength_treatment=kappa_t,
effect_strength_outcome=kappa_y,
benchmark_common_causes=benchmark_common_causes,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
analyzer = NonParametricSensitivityAnalyzer(
estimator=estimate.estimator,
observed_common_causes=estimate.estimator._observed_common_causes,
treatment=estimate.estimator._treatment,
outcome=estimate.estimator._outcome,
alpha_s_estimator_list=alpha_s_estimator_list,
alpha_s_estimator_param_list=alpha_s_estimator_param_list,
g_s_estimator_list=g_s_estimator_list,
g_s_estimator_param_list=g_s_estimator_param_list,
effect_strength_treatment=kappa_t,
effect_strength_outcome=kappa_y,
benchmark_common_causes=benchmark_common_causes,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
theta_s=estimate.value,
plugin_reisz=plugin_reisz,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
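# Editor's sketch (illustrative only): typical arguments for the non-parametric
# partial-R2 analysis. It assumes `estimate` was obtained through
# CausalModel.estimate_effect() with an EconML DML estimator (for example
# "backdoor.econml.dml.KernelDML"); the benchmark variable name, effect
# strengths and estimator choices below are arbitrary example values.
def _example_sensitivity_non_parametric_partial_r2(estimate):
    from sklearn.ensemble import GradientBoostingRegressor
    return sensitivity_non_parametric_partial_r2(
        estimate,
        kappa_t=0.01,
        kappa_y=0.01,
        benchmark_common_causes=["W0"],
        g_s_estimator_list=[GradientBoostingRegressor(n_estimators=50, max_depth=2)],
        g_s_estimator_param_list=[{"n_estimators": [50], "max_depth": [2]}],
        plot_estimate=False,
    )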
def sensitivity_e_value(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_name: List[str],
outcome_name: List[str],
plot_estimate: bool = True,
) -> EValueSensitivityAnalyzer:
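    """Add an unobserved confounder for refutation using the E-value method (sensitivity analysis for regression-based estimates).
    :param data: pd.DataFrame: Data to run the refutation
    :param target_estimand: IdentifiedEstimand: Identified estimand to run the refutation
    :param estimate: CausalEstimate: Estimate to run the refutation
    :param treatment_name: Name of the treatment
    :param outcome_name: Name of the outcome
    :param plot_estimate: Generate a plot for the estimate while performing the sensitivity analysis. (default = True)
    """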
if not isinstance(estimate.estimator, RegressionEstimator):
        raise NotImplementedError("E-Value sensitivity analysis is currently only implemented for RegressionEstimator.")
if len(estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
analyzer = EValueSensitivityAnalyzer(
estimate=estimate,
estimand=target_estimand,
data=data,
treatment_name=treatment_name[0],
outcome_name=outcome_name[0],
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
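# Editor's sketch (illustrative only): the E-value analysis reuses the same kind
# of regression-based estimate as the linear partial-R2 sketch above; "v0" and
# "y" are the default treatment/outcome names produced by
# dowhy.datasets.linear_dataset and are assumptions of this example.
def _example_sensitivity_e_value(data, target_estimand, estimate):
    return sensitivity_e_value(
        data=data,
        target_estimand=target_estimand,
        estimate=estimate,
        treatment_name=["v0"],
        outcome_name=["y"],
        plot_estimate=False,
    )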
def sensitivity_simulation(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_name: str,
outcome_name: str,
kappa_t: Optional[Union[float, np.ndarray]] = None,
kappa_y: Optional[Union[float, np.ndarray]] = None,
confounders_effect_on_treatment: str = "binary_flip",
confounders_effect_on_outcome: str = "linear",
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
plotmethod: Optional[str] = None,
show_progress_bar=False,
**_,
) -> CausalRefutation:
"""
    This function attempts to add an unobserved common cause to both the outcome and the treatment. At present, the behaviour is implemented for one-dimensional continuous
    and binary variables. The function accepts either single-valued inputs or a range of inputs, inspects the data type of the input, and then decides on the course of
    action.
:param data: pd.DataFrame: Data to run the refutation
:param target_estimand: IdentifiedEstimand: Identified estimand to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param treatment_name: str: Name of the treatment
:param outcome_name: str: Name of the outcome
:param kappa_t: float, numpy.ndarray: Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param kappa_y: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
    :param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
    :return: CausalRefutation: An object that contains the estimated effect, the new effect, and the name of the refutation used.
"""
if kappa_t is None:
kappa_t = _infer_default_kappa_t(
data, target_estimand, treatment_name, confounders_effect_on_treatment, frac_strength_treatment
)
if kappa_y is None:
kappa_y = _infer_default_kappa_y(
data, target_estimand, outcome_name, confounders_effect_on_outcome, frac_strength_outcome
)
if not isinstance(kappa_t, (list, np.ndarray)) and not isinstance(
kappa_y, (list, np.ndarray)
): # Deal with single value inputs
new_data = copy.deepcopy(data)
new_data = _include_confounders_effect(
data,
new_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t,
confounders_effect_on_outcome,
outcome_name,
kappa_y,
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
refute.new_effect_array = np.array(new_effect.value)
refute.new_effect = new_effect.value
return refute
else: # Deal with multiple value inputs
if isinstance(kappa_t, (list, np.ndarray)) and isinstance(
kappa_y, (list, np.ndarray)
): # Deal with range inputs
# Get a 2D matrix of values
# x,y = np.meshgrid(self.kappa_t, self.kappa_y) # x,y are both MxN
results_matrix = np.random.rand(len(kappa_t), len(kappa_y)) # Matrix to hold all the results of NxM
orig_data = copy.deepcopy(data)
for i in tqdm(
range(len(kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
for j in range(len(kappa_y)):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t[i],
confounders_effect_on_outcome,
outcome_name,
kappa_y[j],
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value,
new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause",
)
results_matrix[i][j] = refute.new_effect # Populate the results
refute.new_effect_array = results_matrix
refute.new_effect = (np.min(results_matrix), np.max(results_matrix))
# Store the values into the refute object
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
oe = estimate.value
contour_levels = [oe / 4.0, oe / 2.0, (3.0 / 4) * oe, oe]
contour_levels.extend([0, np.min(results_matrix), np.max(results_matrix)])
if plotmethod == "contour":
cp = plt.contourf(kappa_y, kappa_t, results_matrix, levels=sorted(contour_levels))
# Adding a label on the contour line for the original estimate
fmt = {}
trueeffect_index = np.where(cp.levels == oe)[0][0]
fmt[cp.levels[trueeffect_index]] = "Estimated Effect"
# Label every other level using strings
plt.clabel(cp, [cp.levels[trueeffect_index]], inline=True, fmt=fmt)
plt.colorbar(cp)
elif plotmethod == "colormesh":
cp = plt.pcolormesh(kappa_y, kappa_t, results_matrix, shading="nearest")
plt.colorbar(cp, ticks=contour_levels)
ax.yaxis.set_ticks(kappa_t)
ax.xaxis.set_ticks(kappa_y)
plt.xticks(rotation=45)
ax.set_title("Effect of Unobserved Common Cause")
ax.set_ylabel("Value of Linear Constant on Treatment")
ax.set_xlabel("Value of Linear Constant on Outcome")
plt.show()
return refute
elif isinstance(kappa_t, (list, np.ndarray)):
outcomes = np.random.rand(len(kappa_t))
orig_data = copy.deepcopy(data)
for i in tqdm(
range(0, len(kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t[i],
confounders_effect_on_outcome,
outcome_name,
kappa_y,
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(kappa_t, outcomes)
plt.axhline(estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Treatment")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
elif isinstance(kappa_y, (list, np.ndarray)):
outcomes = np.random.rand(len(kappa_y))
orig_data = copy.deepcopy(data)
for i in tqdm(
range(0, len(kappa_y)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t,
confounders_effect_on_outcome,
outcome_name,
kappa_y[i],
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(kappa_y, outcomes)
plt.axhline(estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Outcome")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
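# Editor's sketch (illustrative only): direct simulation of a single unobserved
# confounder that can flip roughly 5% of treatment values and shifts the outcome
# linearly. The inputs are assumed to come from a CausalModel pipeline as in the
# earlier sketches, with the default "v0"/"y" variable names; passing arrays for
# kappa_t/kappa_y instead produces the range-based refutation and the plots above.
def _example_sensitivity_simulation(data, target_estimand, estimate):
    return sensitivity_simulation(
        data=data,
        target_estimand=target_estimand,
        estimate=estimate,
        treatment_name="v0",
        outcome_name="y",
        kappa_t=0.05,
        kappa_y=0.05,
        confounders_effect_on_treatment="binary_flip",
        confounders_effect_on_outcome="linear",
    )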
| andresmor-ms | 133e7b9a4ed32aae8ab5f39a01eb02b3a4d1c0ba | e1652ec3c6606b1bb2dfe91ef830e4b4b566712d | I'm adding the `_` everywhere I see a non-public function/method, also I'm adding the public facing functions into the `__init__.py` file of the module. I guess that's the pythonic way, I don't think there's a way to prevent the user from importing/using those functions. | andresmor-ms | 310 |
py-why/dowhy | 672 | Functional api/refute estimate | * Refactor refuters into functions
* Rename functional_api notebook for clarity
* Add return types to identify_estimate
* Update `__init__.py` for imports
* Add joblib for bootstrap refuter
* Create `refute_estimate` function
* Add types for refuter parameters & return types | null | 2022-10-04 16:18:49+00:00 | 2022-10-07 04:30:22+00:00 | dowhy/causal_refuters/add_unobserved_common_cause.py | import copy
import logging
import math
import numpy as np
import pandas as pd
import scipy.stats
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from tqdm.auto import tqdm
import dowhy.causal_estimators.econml
from dowhy.causal_estimator import CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_estimators.regression_estimator import RegressionEstimator
from dowhy.causal_refuter import CausalRefutation, CausalRefuter
from dowhy.causal_refuters.evalue_sensitivity_analyzer import EValueSensitivityAnalyzer
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer
from dowhy.causal_refuters.non_parametric_sensitivity_analyzer import NonParametricSensitivityAnalyzer
from dowhy.causal_refuters.partial_linear_sensitivity_analyzer import PartialLinearSensitivityAnalyzer
class AddUnobservedCommonCause(CausalRefuter):
"""Add an unobserved confounder for refutation.
AddUnobservedCommonCause class supports three methods:
1) Simulation of an unobserved confounder
2) Linear partial R2 : Sensitivity Analysis for linear models.
    3) Non-Parametric partial R2 based : Sensitivity Analysis for non-parametric models.
Supports additional parameters that can be specified in the refute_estimate() method.
"""
def __init__(self, *args, **kwargs):
"""
Initialize the parameters required for the refuter.
For direct_simulation, if effect_strength_on_treatment or effect_strength_on_outcome is not
given, it is calculated automatically as a range between the
minimum and maximum effect strength of observed confounders on treatment
and outcome respectively.
:param simulation_method: The method to use for simulating effect of unobserved confounder. Possible values are ["direct-simulation", "linear-partial-R2", "non-parametric-partial-R2", "e-value"].
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param effect_strength_on_treatment: float, numpy.ndarray: [Used when simulation_method="direct-simulation"] Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param effect_strength_on_outcome: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param partial_r2_confounder_treatment: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param partial_r2_confounder_outcome: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
        :param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
        :param percent_change_estimate: The fraction of the treatment estimate whose reduction could alter the results (default = 1).
            If percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome needed to reduce the estimate by 100%, i.e. bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = False). (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param significance_level: confidence interval for statistical inference(default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param num_splits: number of splits for cross validation. (default = 5). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_data : shuffle data or not before splitting into folds (default = False). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_random_seed: seed for randomly shuffling data. (relevant only for non-parametric-partial-R2 simulation method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
"""
super().__init__(*args, **kwargs)
self.simulation_method = kwargs["simulation_method"] if "simulation_method" in kwargs else "direct-simulation"
self.effect_on_t = (
kwargs["confounders_effect_on_treatment"] if "confounders_effect_on_treatment" in kwargs else "binary_flip"
)
self.effect_on_y = (
kwargs["confounders_effect_on_outcome"] if "confounders_effect_on_outcome" in kwargs else "linear"
)
if self.simulation_method == "direct-simulation":
self.kappa_t = kwargs["effect_strength_on_treatment"] if "effect_strength_on_treatment" in kwargs else None
self.kappa_y = kwargs["effect_strength_on_outcome"] if "effect_strength_on_outcome" in kwargs else None
elif self.simulation_method in ["linear-partial-R2", "non-parametric-partial-R2"]:
self.kappa_t = (
kwargs["partial_r2_confounder_treatment"] if "partial_r2_confounder_treatment" in kwargs else None
)
self.kappa_y = (
kwargs["partial_r2_confounder_outcome"] if "partial_r2_confounder_outcome" in kwargs else None
)
elif self.simulation_method == "e-value":
pass
else:
raise ValueError(
"simulation method is not supported. Try direct-simulation, linear-partial-R2, non-parametric-partial-R2, or e-value"
)
self.frac_strength_treatment = (
kwargs["effect_fraction_on_treatment"] if "effect_fraction_on_treatment" in kwargs else 1
)
self.frac_strength_outcome = (
kwargs["effect_fraction_on_outcome"] if "effect_fraction_on_outcome" in kwargs else 1
)
self.plotmethod = kwargs["plotmethod"] if "plotmethod" in kwargs else "colormesh"
self.percent_change_estimate = kwargs["percent_change_estimate"] if "percent_change_estimate" in kwargs else 1.0
self.significance_level = kwargs["significance_level"] if "significance_level" in kwargs else 0.05
self.confounder_increases_estimate = (
kwargs["confounder_increases_estimate"] if "confounder_increases_estimate" in kwargs else False
)
self.benchmark_common_causes = (
kwargs["benchmark_common_causes"] if "benchmark_common_causes" in kwargs else None
)
self.null_hypothesis_effect = kwargs["null_hypothesis_effect"] if "null_hypothesis_effect" in kwargs else 0
self.plot_estimate = kwargs["plot_estimate"] if "plot_estimate" in kwargs else True
self.num_splits = kwargs["num_splits"] if "num_splits" in kwargs else 5
self.shuffle_data = kwargs["shuffle_data"] if "shuffle_data" in kwargs else False
self.shuffle_random_seed = kwargs["shuffle_random_seed"] if "shuffle_random_seed" in kwargs else None
self.alpha_s_estimator_param_list = (
kwargs["alpha_s_estimator_param_list"] if "alpha_s_estimator_param_list" in kwargs else None
)
self.alpha_s_estimator_list = kwargs["alpha_s_estimator_list"] if "alpha_s_estimator_list" in kwargs else None
self.g_s_estimator_list = kwargs["g_s_estimator_list"] if "g_s_estimator_list" in kwargs else None
self.g_s_estimator_param_list = (
kwargs["g_s_estimator_param_list"] if "g_s_estimator_param_list" in kwargs else None
)
self.plugin_reisz = kwargs["plugin_reisz"] if "plugin_reisz" in kwargs else False
self.logger = logging.getLogger(__name__)
def infer_default_kappa_t(self, len_kappa_t=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_t."
+ " Provide a value for kappa_t"
)
t = self._data[self._treatment_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_t == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
tmodel = LogisticRegression().fit(observed_common_causes, t)
tpred = tmodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
tcap = tmodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(tcap - tpred)) / tpred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_t == "linear":
# Estimating the regression coefficient from standardized features to t
corrcoef_var_t = np.corrcoef(observed_common_causes, t, rowvar=False)[-1, :-1]
std_dev_t = np.std(t)[0]
max_coeff = max(corrcoef_var_t) * std_dev_t
min_coeff = min(corrcoef_var_t) * std_dev_t
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_treatment)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_t
self.logger.info("(Min, Max) kappa_t for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
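    # Editor's note (illustrative worked example): for the "linear" branch above, the default
    # range of confounder-on-treatment strengths spans the observed extremes. For instance, if
    # the standardized common causes have correlations (0.1, 0.4) with t and std(t) = 0.5, the
    # candidate kappa_t values run from 0.1 * 0.5 = 0.05 up to 0.4 * 0.5 = 0.2, scaled by the
    # effect_fraction_on_treatment setting and discretized into len_kappa_t steps.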
def _compute_min_max_coeff(self, min_coeff, max_coeff, effect_strength_fraction):
max_coeff = effect_strength_fraction * max_coeff
min_coeff = effect_strength_fraction * min_coeff
return min_coeff, max_coeff
def infer_default_kappa_y(self, len_kappa_y=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_y."
+ " Provide a value for kappa_y"
)
y = self._data[self._outcome_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_y == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
ymodel = LogisticRegression().fit(observed_common_causes, y)
ypred = ymodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
ycap = ymodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(ycap - ypred)) / ypred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_y == "linear":
corrcoef_var_y = np.corrcoef(observed_common_causes, y, rowvar=False)[-1, :-1]
std_dev_y = np.std(y)[0]
max_coeff = max(corrcoef_var_y) * std_dev_y
min_coeff = min(corrcoef_var_y) * std_dev_y
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_outcome)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_y
self.logger.info("(Min, Max) kappa_y for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def refute_estimate(self, show_progress_bar=False):
"""
        This function attempts to add an unobserved common cause to both the outcome and the treatment. At present, the behaviour is implemented for one-dimensional continuous
        and binary variables. The function accepts either single-valued inputs or a range of inputs, inspects the data type of the input, and then decides on the course of
        action.
        :return: CausalRefutation: An object that contains the estimated effect and the new effect, along with the name of the refutation used (a sensitivity analyzer object is returned instead for the partial-R2 and e-value methods).
"""
if self.simulation_method == "linear-partial-R2":
if not (isinstance(self._estimate.estimator, LinearRegressionEstimator)):
raise NotImplementedError(
"Currently only LinearRegressionEstimator is supported for Sensitivity Analysis"
)
if len(self._estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
if self.frac_strength_outcome == 1:
self.frac_strength_outcome = self.frac_strength_treatment
analyzer = LinearSensitivityAnalyzer(
estimator=self._estimate.estimator,
data=self._data,
treatment_name=self._treatment_name,
percent_change_estimate=self.percent_change_estimate,
significance_level=self.significance_level,
benchmark_common_causes=self.benchmark_common_causes,
null_hypothesis_effect=self.null_hypothesis_effect,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
common_causes_order=self._estimate.estimator._observed_common_causes.columns,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.simulation_method == "non-parametric-partial-R2":
# If the estimator used is LinearDML, partially linear sensitivity analysis will be automatically chosen
if isinstance(self._estimate.estimator, dowhy.causal_estimators.econml.Econml):
if self._estimate.estimator._econml_methodname == "econml.dml.LinearDML":
analyzer = PartialLinearSensitivityAnalyzer(
estimator=self._estimate._estimator_object,
observed_common_causes=self._estimate.estimator._observed_common_causes,
treatment=self._estimate.estimator._treatment,
outcome=self._estimate.estimator._outcome,
alpha_s_estimator_param_list=self.alpha_s_estimator_param_list,
g_s_estimator_list=self.g_s_estimator_list,
g_s_estimator_param_list=self.g_s_estimator_param_list,
effect_strength_treatment=self.kappa_t,
effect_strength_outcome=self.kappa_y,
benchmark_common_causes=self.benchmark_common_causes,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
analyzer = NonParametricSensitivityAnalyzer(
estimator=self._estimate.estimator,
observed_common_causes=self._estimate.estimator._observed_common_causes,
treatment=self._estimate.estimator._treatment,
outcome=self._estimate.estimator._outcome,
alpha_s_estimator_list=self.alpha_s_estimator_list,
alpha_s_estimator_param_list=self.alpha_s_estimator_param_list,
g_s_estimator_list=self.g_s_estimator_list,
g_s_estimator_param_list=self.g_s_estimator_param_list,
effect_strength_treatment=self.kappa_t,
effect_strength_outcome=self.kappa_y,
benchmark_common_causes=self.benchmark_common_causes,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
theta_s=self._estimate.value,
plugin_reisz=self.plugin_reisz,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.simulation_method == "e-value":
if not isinstance(self._estimate.estimator, RegressionEstimator):
raise NotImplementedError(
"E-Value sensitivity analysis is currently only implemented RegressionEstimator."
)
if len(self._estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
analyzer = EValueSensitivityAnalyzer(
estimate=self._estimate,
estimand=self._target_estimand,
data=self._data,
treatment_name=self._treatment_name[0],
outcome_name=self._outcome_name[0],
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.kappa_t is None:
self.kappa_t = self.infer_default_kappa_t()
if self.kappa_y is None:
self.kappa_y = self.infer_default_kappa_y()
if not isinstance(self.kappa_t, (list, np.ndarray)) and not isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with single value inputs
new_data = copy.deepcopy(self._data)
new_data = self.include_confounders_effect(new_data, self.kappa_t, self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(new_data, self._target_estimand, self._estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
refute.new_effect_array = np.array(new_effect.value)
refute.new_effect = new_effect.value
refute.add_refuter(self)
return refute
else: # Deal with multiple value inputs
if isinstance(self.kappa_t, (list, np.ndarray)) and isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with range inputs
# Get a 2D matrix of values
# x,y = np.meshgrid(self.kappa_t, self.kappa_y) # x,y are both MxN
results_matrix = np.random.rand(
len(self.kappa_t), len(self.kappa_y)
) # Matrix to hold all the results of NxM
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
for j in range(len(self.kappa_y)):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y[j])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value,
new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause",
)
results_matrix[i][j] = refute.new_effect # Populate the results
refute.new_effect_array = results_matrix
refute.new_effect = (np.min(results_matrix), np.max(results_matrix))
# Store the values into the refute object
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
oe = self._estimate.value
contour_levels = [oe / 4.0, oe / 2.0, (3.0 / 4) * oe, oe]
contour_levels.extend([0, np.min(results_matrix), np.max(results_matrix)])
if self.plotmethod == "contour":
cp = plt.contourf(self.kappa_y, self.kappa_t, results_matrix, levels=sorted(contour_levels))
# Adding a label on the contour line for the original estimate
fmt = {}
trueeffect_index = np.where(cp.levels == oe)[0][0]
fmt[cp.levels[trueeffect_index]] = "Estimated Effect"
# Label every other level using strings
plt.clabel(cp, [cp.levels[trueeffect_index]], inline=True, fmt=fmt)
plt.colorbar(cp)
elif self.plotmethod == "colormesh":
cp = plt.pcolormesh(self.kappa_y, self.kappa_t, results_matrix, shading="nearest")
plt.colorbar(cp, ticks=contour_levels)
ax.yaxis.set_ticks(self.kappa_t)
ax.xaxis.set_ticks(self.kappa_y)
plt.xticks(rotation=45)
ax.set_title("Effect of Unobserved Common Cause")
ax.set_ylabel("Value of Linear Constant on Treatment")
ax.set_xlabel("Value of Linear Constant on Outcome")
plt.show()
return refute
elif isinstance(self.kappa_t, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_t))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_t, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Treatment")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
elif isinstance(self.kappa_y, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_y))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_y)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t, self.kappa_y[i])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_y, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Outcome")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
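    # Editor's sketch (illustrative, hypothetical values): this refuter is normally invoked
    # through CausalModel rather than constructed directly, e.g.
    #
    #     refute = model.refute_estimate(
    #         identified_estimand,
    #         estimate,
    #         method_name="add_unobserved_common_cause",
    #         confounders_effect_on_treatment="binary_flip",
    #         confounders_effect_on_outcome="linear",
    #         effect_strength_on_treatment=0.01,
    #         effect_strength_on_outcome=0.02,
    #     )
    #
    # Omitting the effect strengths makes refute_estimate() sweep the ranges inferred by
    # infer_default_kappa_t/infer_default_kappa_y above and plot the resulting effects.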
def include_confounders_effect(self, new_data, kappa_t, kappa_y):
"""
This function deals with the change in the value of the data due to the effect of the unobserved confounder.
In the case of a binary flip, we flip only if the random number is greater than the threshold set.
In the case of a linear effect, we use the variable as the linear regression constant.
:param new_data: pandas.DataFrame: The data to be changed due to the effects of the unobserved confounder.
:param kappa_t: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:param kappa_y: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:return: pandas.DataFrame: The DataFrame that includes the effects of the unobserved confounder.
"""
num_rows = self._data.shape[0]
stdnorm = scipy.stats.norm()
w_random = stdnorm.rvs(num_rows)
if self.effect_on_t == "binary_flip":
alpha = 2 * kappa_t - 1 if kappa_t >= 0.5 else 1 - 2 * kappa_t
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_t >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._treatment_name] = (
1 - new_data.loc[rel_interval <= w_random, self._treatment_name]
)
for tname in self._treatment_name:
if pd.api.types.is_bool_dtype(self._data[tname]):
new_data = new_data.astype({tname: "bool"}, copy=False)
elif self.effect_on_t == "linear":
confounder_t_effect = kappa_t * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._treatment_name] = new_data[self._treatment_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_t_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
if self.effect_on_y == "binary_flip":
alpha = 2 * kappa_y - 1 if kappa_y >= 0.5 else 1 - 2 * kappa_y
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_y >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._outcome_name] = (
1 - new_data.loc[rel_interval <= w_random, self._outcome_name]
)
for yname in self._outcome_name:
if pd.api.types.is_bool_dtype(self._data[yname]):
new_data = new_data.astype({yname: "bool"}, copy=False)
elif self.effect_on_y == "linear":
confounder_y_effect = (-1) * kappa_y * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._outcome_name] = new_data[self._outcome_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_y_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
return new_data
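    # Editor's note (worked example): for the "binary_flip" branch above with kappa_t = 0.2,
    # alpha = 1 - 2 * 0.2 = 0.6, so stdnorm.interval(0.6) is roughly (-0.84, 0.84) and the
    # treatment is flipped on rows where w_random >= 0.84, i.e. on roughly 20% of the rows.
    # In general the flip probability equals kappa_t (and analogously kappa_y for the outcome).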
def include_simulated_confounder(self, convergence_threshold=0.1, c_star_max=1000):
"""
This function simulates an unobserved confounder based on the data using the following steps:
1. It calculates the "residuals" from the treatment and outcome model
i.) The outcome model has outcome as the dependent variable and all the observed variables including treatment as independent variables
ii.) The treatment model has treatment as the dependent variable and all the observed variables as independent variables.
2. U is an intermediate random variable drawn from the normal distribution with the weighted average of residuals as mean and a unit variance
U ~ N(c1*d_y + c2*d_t, 1)
where
*d_y and d_t are residuals from the treatment and outcome model
*c1 and c2 are coefficients to the residuals
3. The final U, which is the simulated unobserved confounder is obtained by debiasing the intermediate variable U by residualising it with X
Choosing the coefficients c1 and c2:
The coefficients are chosen based on these basic assumptions:
1. There is a hyperbolic relationship satisfying c1*c2 = c_star
2. c_star is chosen from a range of possible values based on the correlation of the obtained simulated variable with outcome and treatment.
3. The product of correlations with treatment and outcome should be at a minimum distance to the maximum correlations with treatment and outcome in any of the observed confounders
4. The ratio of the weights should be such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
:param c_star_max: The maximum possible value for the hyperbolic curve on which the coefficients to the residuals lie. It defaults to 1000 in the code if not specified by the user.
:type int
:param convergence_threshold: The threshold to check the plateauing of the correlation while selecting a c_star. It defaults to 0.1 in the code if not specified by the user
:type float
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
# Obtaining the list of observed variables
required_variables = True
observed_variables = self.choose_variables(required_variables)
observed_variables_with_treatment_and_outcome = observed_variables + self._treatment_name + self._outcome_name
# Taking a subset of the dataframe that has only observed variables
self._data = self._data[observed_variables_with_treatment_and_outcome]
# Residuals from the outcome model obtained by fitting a linear model
y = self._data[self._outcome_name[0]]
observed_variables_with_treatment = observed_variables + self._treatment_name
X = self._data[observed_variables_with_treatment]
model = sm.OLS(y, X.astype("float"))
results = model.fit()
residuals_y = y - results.fittedvalues
d_y = list(pd.Series(residuals_y))
# Residuals from the treatment model obtained by fitting a linear model
t = self._data[self._treatment_name[0]].astype("int64")
X = self._data[observed_variables]
model = sm.OLS(t, X)
results = model.fit()
residuals_t = t - results.fittedvalues
d_t = list(pd.Series(residuals_t))
        # Initialise product_cor_metric_observed with a very low value, since we are searching for the maximum
product_cor_metric_observed = -10000000000
for i in observed_variables:
current_obs_confounder = self._data[i]
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_obs_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_obs_confounder.corr(treatment_values)
product_cor_metric_current = correlation_y * correlation_t
if product_cor_metric_current >= product_cor_metric_observed:
product_cor_metric_observed = product_cor_metric_current
correlation_t_observed = correlation_t
correlation_y_observed = correlation_y
        # The user can optionally supply effect_strength_on_y and effect_strength_on_t, which are then used in place of the maximum correlations with treatment and outcome among the observed variables, since they specify the desired effect strength.
if self.kappa_t is not None:
correlation_t_observed = self.kappa_t
if self.kappa_y is not None:
correlation_y_observed = self.kappa_y
# Choosing a c_star based on the data.
        # Beyond a certain value of c_star the correlations stop increasing, i.e. they plateau, and we choose c_star as the value at which the plateau starts.
correlation_y_list = []
correlation_t_list = []
product_cor_metric_simulated_list = []
x_list = []
step = int(c_star_max / 10)
for i in range(0, int(c_star_max), step):
c1 = math.sqrt(i)
c2 = c1
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
correlation_y_list.append(correlation_y)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
correlation_t_list.append(correlation_t)
product_cor_metric_simulated = correlation_y * correlation_t
product_cor_metric_simulated_list.append(product_cor_metric_simulated)
x_list.append(i)
index = 1
while index < len(correlation_y_list):
if (correlation_y_list[index] - correlation_y_list[index - 1]) <= convergence_threshold:
c_star = x_list[index]
break
index = index + 1
        # Once c_star is chosen, c1 and c2 are picked on the hyperbola c1*c2 = c_star by scanning combinations of c1 and c2 and choosing the pair
        # that minimises the distance between the product of correlations of the simulated variable and the product of the maximum correlations of the observed variables,
        # while additionally checking that the ratio c1/c2 stays within 30% of the ratio of the maximum possible observed coefficients.
# c1_final and c2_final are initialised to the values on the hyperbolic curve such that c1_final = c2_final and c1_final*c2_final = c_star
c1_final = math.sqrt(c_star)
c2_final = math.sqrt(c_star)
# initialising min_distance_between_product_cor_metrics to be a value greater than 1
min_distance_between_product_cor_metrics = 1.5
i = 0.05
threshold = c_star / 0.05
while i <= threshold:
c2 = i
c1 = c_star / c2
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
product_cor_metric_simulated = correlation_y * correlation_t
if min_distance_between_product_cor_metrics >= abs(
product_cor_metric_simulated - product_cor_metric_observed
):
min_distance_between_product_cor_metrics = abs(
product_cor_metric_simulated - product_cor_metric_observed
)
additional_condition = correlation_y_observed / correlation_t_observed
if ((c1 / c2) <= (additional_condition + 0.3 * additional_condition)) and (
(c1 / c2) >= (additional_condition - 0.3 * additional_condition)
): # choose minimum positive value
c1_final = c1
c2_final = c2
i = i * 1.5
"""#closed form solution
print("c_star_max before closed form", c_star_max)
if max_correlation_with_t == -1000:
c2 = 0
c1 = c_star_max
else:
additional_condition = abs(max_correlation_with_y/max_correlation_with_t)
print("additional_condition", additional_condition)
c2 = math.sqrt(c_star_max/additional_condition)
c1 = c_star_max/c2"""
final_U = self.generate_confounder_from_residuals(c1_final, c2_final, d_y, d_t, X)
return final_U
def generate_confounder_from_residuals(self, c1, c2, d_y, d_t, X):
"""
        This function takes the residuals from the treatment and outcome models together with their coefficients, simulates the intermediate random variable U by drawing,
        for each row, from a normal distribution centred at the weighted residuals, and then debiases the intermediate variable to obtain the final confounder.
:param c1: coefficient to the residual from the outcome model
:type float
:param c2: coefficient to the residual from the treatment model
:type float
:param d_y: residuals from the outcome model
:type list
:param d_t: residuals from the treatment model
:type list
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
U = []
for j in range(len(d_t)):
simulated_variable_mean = c1 * d_y[j] + c2 * d_t[j]
simulated_variable_stddev = 1
U.append(np.random.normal(simulated_variable_mean, simulated_variable_stddev, 1))
U = np.array(U)
model = sm.OLS(U, X)
results = model.fit()
U = U.reshape(
-1,
)
final_U = U - results.fittedvalues.values
        final_U = pd.Series(final_U)
return final_U
| import copy
import logging
import math
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import scipy.stats
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from tqdm.auto import tqdm
import dowhy.causal_estimators.econml
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_estimators.regression_estimator import RegressionEstimator
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.causal_refuter import CausalRefutation, CausalRefuter, choose_variables
from dowhy.causal_refuters.evalue_sensitivity_analyzer import EValueSensitivityAnalyzer
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer
from dowhy.causal_refuters.non_parametric_sensitivity_analyzer import NonParametricSensitivityAnalyzer
from dowhy.causal_refuters.partial_linear_sensitivity_analyzer import PartialLinearSensitivityAnalyzer
logger = logging.getLogger(__name__)
class AddUnobservedCommonCause(CausalRefuter):
"""Add an unobserved confounder for refutation.
AddUnobservedCommonCause class supports three methods:
1) Simulation of an unobserved confounder
2) Linear partial R2 : Sensitivity Analysis for linear models.
    3) Non-Parametric partial R2 based : Sensitivity Analysis for non-parametric models.
Supports additional parameters that can be specified in the refute_estimate() method.
"""
def __init__(self, *args, **kwargs):
"""
Initialize the parameters required for the refuter.
For direct_simulation, if effect_strength_on_treatment or effect_strength_on_outcome is not
given, it is calculated automatically as a range between the
minimum and maximum effect strength of observed confounders on treatment
and outcome respectively.
:param simulation_method: The method to use for simulating effect of unobserved confounder. Possible values are ["direct-simulation", "linear-partial-R2", "non-parametric-partial-R2", "e-value"].
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param effect_strength_on_treatment: float, numpy.ndarray: [Used when simulation_method="direct-simulation"] Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param effect_strength_on_outcome: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param partial_r2_confounder_treatment: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param partial_r2_confounder_outcome: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
        :param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome in order to reduce the estimate by 100% i.e bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = False). (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param significance_level: confidence interval for statistical inference(default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param num_splits: number of splits for cross validation. (default = 5). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_data : shuffle data or not before splitting into folds (default = False). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_random_seed: seed for randomly shuffling data. (relevant only for non-parametric-partial-R2 simulation method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
"""
super().__init__(*args, **kwargs)
self.simulation_method = kwargs["simulation_method"] if "simulation_method" in kwargs else "direct-simulation"
self.effect_on_t = (
kwargs["confounders_effect_on_treatment"] if "confounders_effect_on_treatment" in kwargs else "binary_flip"
)
self.effect_on_y = (
kwargs["confounders_effect_on_outcome"] if "confounders_effect_on_outcome" in kwargs else "linear"
)
if self.simulation_method == "direct-simulation":
self.kappa_t = kwargs["effect_strength_on_treatment"] if "effect_strength_on_treatment" in kwargs else None
self.kappa_y = kwargs["effect_strength_on_outcome"] if "effect_strength_on_outcome" in kwargs else None
elif self.simulation_method in ["linear-partial-R2", "non-parametric-partial-R2"]:
self.kappa_t = (
kwargs["partial_r2_confounder_treatment"] if "partial_r2_confounder_treatment" in kwargs else None
)
self.kappa_y = (
kwargs["partial_r2_confounder_outcome"] if "partial_r2_confounder_outcome" in kwargs else None
)
elif self.simulation_method == "e-value":
pass
else:
raise ValueError(
"simulation method is not supported. Try direct-simulation, linear-partial-R2, non-parametric-partial-R2, or e-value"
)
self.frac_strength_treatment = (
kwargs["effect_fraction_on_treatment"] if "effect_fraction_on_treatment" in kwargs else 1
)
self.frac_strength_outcome = (
kwargs["effect_fraction_on_outcome"] if "effect_fraction_on_outcome" in kwargs else 1
)
self.plotmethod = kwargs["plotmethod"] if "plotmethod" in kwargs else "colormesh"
self.percent_change_estimate = kwargs["percent_change_estimate"] if "percent_change_estimate" in kwargs else 1.0
self.significance_level = kwargs["significance_level"] if "significance_level" in kwargs else 0.05
self.confounder_increases_estimate = (
kwargs["confounder_increases_estimate"] if "confounder_increases_estimate" in kwargs else False
)
self.benchmark_common_causes = (
kwargs["benchmark_common_causes"] if "benchmark_common_causes" in kwargs else None
)
self.null_hypothesis_effect = kwargs["null_hypothesis_effect"] if "null_hypothesis_effect" in kwargs else 0
self.plot_estimate = kwargs["plot_estimate"] if "plot_estimate" in kwargs else True
self.num_splits = kwargs["num_splits"] if "num_splits" in kwargs else 5
self.shuffle_data = kwargs["shuffle_data"] if "shuffle_data" in kwargs else False
self.shuffle_random_seed = kwargs["shuffle_random_seed"] if "shuffle_random_seed" in kwargs else None
self.alpha_s_estimator_param_list = (
kwargs["alpha_s_estimator_param_list"] if "alpha_s_estimator_param_list" in kwargs else None
)
self.alpha_s_estimator_list = kwargs["alpha_s_estimator_list"] if "alpha_s_estimator_list" in kwargs else None
self.g_s_estimator_list = kwargs["g_s_estimator_list"] if "g_s_estimator_list" in kwargs else None
self.g_s_estimator_param_list = (
kwargs["g_s_estimator_param_list"] if "g_s_estimator_param_list" in kwargs else None
)
self.plugin_reisz = kwargs["plugin_reisz"] if "plugin_reisz" in kwargs else False
self.logger = logging.getLogger(__name__)
def refute_estimate(self, show_progress_bar=False):
if self.simulation_method == "linear-partial-R2":
return sensitivity_linear_partial_r2(
self._data,
self._estimate,
self._treatment_name,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.percent_change_estimate,
self.benchmark_common_causes,
self.significance_level,
self.null_hypothesis_effect,
self.plot_estimate,
)
elif self.simulation_method == "non-parametric-partial-R2":
return sensitivity_non_parametric_partial_r2(
self._estimate,
self.kappa_t,
self.kappa_y,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.benchmark_common_causes,
self.plot_estimate,
self.alpha_s_estimator_list,
self.alpha_s_estimator_param_list,
self.g_s_estimator_list,
self.g_s_estimator_param_list,
self.plugin_reisz,
)
elif self.simulation_method == "e-value":
return sensitivity_e_value(
self._data,
self._target_estimand,
self._estimate,
self._treatment_name,
self._outcome_name,
self.plot_estimate,
)
elif self.simulation_method == "direct-simulation":
refute = sensitivity_simulation(
self._data,
self._target_estimand,
self._estimate,
self._treatment_name,
self._outcome_name,
self.kappa_t,
self.kappa_y,
self.effect_on_t,
self.effect_on_y,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.plotmethod,
show_progress_bar,
)
refute.add_refuter(self)
return refute
def _infer_default_kappa_t(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
treatment_name: List[str],
effect_on_t: str,
frac_strength_treatment: float,
len_kappa_t: int = 10,
):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_t."
+ " Provide a value for kappa_t"
)
t = data[treatment_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if effect_on_t == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
tmodel = LogisticRegression().fit(observed_common_causes, t)
tpred = tmodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
tcap = tmodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(tcap - tpred)) / tpred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif effect_on_t == "linear":
# Estimating the regression coefficient from standardized features to t
corrcoef_var_t = np.corrcoef(observed_common_causes, t, rowvar=False)[-1, :-1]
std_dev_t = np.std(t)[0]
max_coeff = max(corrcoef_var_t) * std_dev_t
min_coeff = min(corrcoef_var_t) * std_dev_t
else:
raise NotImplementedError("'" + effect_on_t + "' method not supported for confounders' effect on treatment")
min_coeff, max_coeff = _compute_min_max_coeff(min_coeff, max_coeff, frac_strength_treatment)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_t
logger.info("(Min, Max) kappa_t for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _compute_min_max_coeff(min_coeff: float, max_coeff: float, effect_strength_fraction: np.ndarray):
max_coeff = effect_strength_fraction * max_coeff
min_coeff = effect_strength_fraction * min_coeff
return min_coeff, max_coeff
def _infer_default_kappa_y(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
outcome_name: List[str],
effect_on_y: str,
frac_strength_outcome: float,
len_kappa_y: int = 10,
):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_y."
+ " Provide a value for kappa_y"
)
y = data[outcome_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if effect_on_y == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
ymodel = LogisticRegression().fit(observed_common_causes, y)
ypred = ymodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
ycap = ymodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(ycap - ypred)) / ypred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif effect_on_y == "linear":
corrcoef_var_y = np.corrcoef(observed_common_causes, y, rowvar=False)[-1, :-1]
std_dev_y = np.std(y)[0]
max_coeff = max(corrcoef_var_y) * std_dev_y
min_coeff = min(corrcoef_var_y) * std_dev_y
else:
raise NotImplementedError("'" + effect_on_y + "' method not supported for confounders' effect on outcome")
min_coeff, max_coeff = _compute_min_max_coeff(min_coeff, max_coeff, frac_strength_outcome)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_y
logger.info("(Min, Max) kappa_y for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _include_confounders_effect(
data: pd.DataFrame,
new_data: pd.DataFrame,
effect_on_t: str,
treatment_name: str,
kappa_t: float,
effect_on_y: str,
outcome_name: str,
kappa_y: float,
):
"""
This function deals with the change in the value of the data due to the effect of the unobserved confounder.
In the case of a binary flip, we flip only if the random number is greater than the threshold set.
In the case of a linear effect, we use the variable as the linear regression constant.
:param new_data: pandas.DataFrame: The data to be changed due to the effects of the unobserved confounder.
:param kappa_t: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:param kappa_y: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:return: pandas.DataFrame: The DataFrame that includes the effects of the unobserved confounder.
"""
num_rows = data.shape[0]
stdnorm = scipy.stats.norm()
w_random = stdnorm.rvs(num_rows)
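    # Note: for the "binary_flip" branches below, kappa is interpreted as a flip probability.
    # The threshold applied to the standard-normal draws w_random is chosen so that
    # P(w_random >= threshold) = kappa, i.e. each row is flipped with probability kappa.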
if effect_on_t == "binary_flip":
alpha = 2 * kappa_t - 1 if kappa_t >= 0.5 else 1 - 2 * kappa_t
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_t >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, treatment_name] = (
1 - new_data.loc[rel_interval <= w_random, treatment_name]
)
for tname in treatment_name:
if pd.api.types.is_bool_dtype(data[tname]):
new_data = new_data.astype({tname: "bool"}, copy=False)
elif effect_on_t == "linear":
confounder_t_effect = kappa_t * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[treatment_name] = new_data[treatment_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_t_effect
)
else:
raise NotImplementedError("'" + effect_on_t + "' method not supported for confounders' effect on treatment")
if effect_on_y == "binary_flip":
alpha = 2 * kappa_y - 1 if kappa_y >= 0.5 else 1 - 2 * kappa_y
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_y >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, outcome_name] = 1 - new_data.loc[rel_interval <= w_random, outcome_name]
for yname in outcome_name:
if pd.api.types.is_bool_dtype(data[yname]):
new_data = new_data.astype({yname: "bool"}, copy=False)
elif effect_on_y == "linear":
confounder_y_effect = (-1) * kappa_y * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[outcome_name] = new_data[outcome_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_y_effect
)
else:
raise NotImplementedError("'" + effect_on_y + "' method not supported for confounders' effect on outcome")
return new_data
def include_simulated_confounder(
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
kappa_t: float,
kappa_y: float,
convergence_threshold: float = 0.1,
c_star_max: int = 1000,
):
"""
This function simulates an unobserved confounder based on the data using the following steps:
1. It calculates the "residuals" from the treatment and outcome model
i.) The outcome model has outcome as the dependent variable and all the observed variables including treatment as independent variables
ii.) The treatment model has treatment as the dependent variable and all the observed variables as independent variables.
2. U is an intermediate random variable drawn from the normal distribution with the weighted average of residuals as mean and a unit variance
U ~ N(c1*d_y + c2*d_t, 1)
where
*d_y and d_t are residuals from the treatment and outcome model
*c1 and c2 are coefficients to the residuals
3. The final U, which is the simulated unobserved confounder is obtained by debiasing the intermediate variable U by residualising it with X
Choosing the coefficients c1 and c2:
The coefficients are chosen based on these basic assumptions:
1. There is a hyperbolic relationship satisfying c1*c2 = c_star
2. c_star is chosen from a range of possible values based on the correlation of the obtained simulated variable with outcome and treatment.
3. The product of correlations with treatment and outcome should be at a minimum distance to the maximum correlations with treatment and outcome in any of the observed confounders
4. The ratio of the weights should be such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
:param c_star_max: The maximum possible value for the hyperbolic curve on which the coefficients to the residuals lie. It defaults to 1000 in the code if not specified by the user.
:type int
:param convergence_threshold: The threshold to check the plateauing of the correlation while selecting a c_star. It defaults to 0.1 in the code if not specified by the user
:type float
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
# Obtaining the list of observed variables
required_variables = True
observed_variables = choose_variables(required_variables)
observed_variables_with_treatment_and_outcome = observed_variables + treatment_name + outcome_name
# Taking a subset of the dataframe that has only observed variables
data = data[observed_variables_with_treatment_and_outcome]
# Residuals from the outcome model obtained by fitting a linear model
y = data[outcome_name[0]]
observed_variables_with_treatment = observed_variables + treatment_name
X = data[observed_variables_with_treatment]
model = sm.OLS(y, X.astype("float"))
results = model.fit()
residuals_y = y - results.fittedvalues
d_y = list(pd.Series(residuals_y))
# Residuals from the treatment model obtained by fitting a linear model
t = data[treatment_name[0]].astype("int64")
X = data[observed_variables]
model = sm.OLS(t, X)
results = model.fit()
residuals_t = t - results.fittedvalues
d_t = list(pd.Series(residuals_t))
    # Initialising product_cor_metric_observed with a very low value, since we are searching for the maximum
product_cor_metric_observed = -10000000000
for i in observed_variables:
current_obs_confounder = data[i]
outcome_values = data[outcome_name[0]]
correlation_y = current_obs_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_obs_confounder.corr(treatment_values)
product_cor_metric_current = correlation_y * correlation_t
if product_cor_metric_current >= product_cor_metric_observed:
product_cor_metric_observed = product_cor_metric_current
correlation_t_observed = correlation_t
correlation_y_observed = correlation_y
    # The user has an option to give the effect_strength_on_y and effect_strength_on_t, which can then be used instead of the maximum correlation with treatment and outcome in the observed variables, as it specifies the desired effect.
if kappa_t is not None:
correlation_t_observed = kappa_t
if kappa_y is not None:
correlation_y_observed = kappa_y
# Choosing a c_star based on the data.
    # The correlations stop increasing after a certain value of c_star, that is, the curve plateaus, and we choose c_star to be the value at which it plateaus.
correlation_y_list = []
correlation_t_list = []
product_cor_metric_simulated_list = []
x_list = []
step = int(c_star_max / 10)
for i in range(0, int(c_star_max), step):
c1 = math.sqrt(i)
c2 = c1
final_U = _generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = data[outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
correlation_y_list.append(correlation_y)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
correlation_t_list.append(correlation_t)
product_cor_metric_simulated = correlation_y * correlation_t
product_cor_metric_simulated_list.append(product_cor_metric_simulated)
x_list.append(i)
index = 1
while index < len(correlation_y_list):
if (correlation_y_list[index] - correlation_y_list[index - 1]) <= convergence_threshold:
c_star = x_list[index]
break
index = index + 1
    # Choosing c1 and c2 based on the hyperbolic relationship once c_star is chosen, by going over various combinations of c1 and c2 values and choosing the combination
    # that maintains the minimum distance between the product of correlations of the simulated variable and the product of maximum correlations of one of the observed variables,
    # and additionally checking that the ratio of the weights maintains the ratio of the maximum possible observed coefficients within some confidence interval
# c1_final and c2_final are initialised to the values on the hyperbolic curve such that c1_final = c2_final and c1_final*c2_final = c_star
c1_final = math.sqrt(c_star)
c2_final = math.sqrt(c_star)
# initialising min_distance_between_product_cor_metrics to be a value greater than 1
min_distance_between_product_cor_metrics = 1.5
i = 0.05
threshold = c_star / 0.05
while i <= threshold:
c2 = i
c1 = c_star / c2
final_U = _generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = data[outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
product_cor_metric_simulated = correlation_y * correlation_t
if min_distance_between_product_cor_metrics >= abs(product_cor_metric_simulated - product_cor_metric_observed):
min_distance_between_product_cor_metrics = abs(product_cor_metric_simulated - product_cor_metric_observed)
additional_condition = correlation_y_observed / correlation_t_observed
if ((c1 / c2) <= (additional_condition + 0.3 * additional_condition)) and (
(c1 / c2) >= (additional_condition - 0.3 * additional_condition)
): # choose minimum positive value
c1_final = c1
c2_final = c2
i = i * 1.5
"""#closed form solution
print("c_star_max before closed form", c_star_max)
if max_correlation_with_t == -1000:
c2 = 0
c1 = c_star_max
else:
additional_condition = abs(max_correlation_with_y/max_correlation_with_t)
print("additional_condition", additional_condition)
c2 = math.sqrt(c_star_max/additional_condition)
c1 = c_star_max/c2"""
final_U = _generate_confounder_from_residuals(c1_final, c2_final, d_y, d_t, X)
return final_U
def _generate_confounder_from_residuals(c1, c2, d_y, d_t, X):
"""
    This function takes the residuals from the treatment and outcome models and their coefficients and simulates the intermediate random variable U by drawing,
    for each row, a normal random value whose mean is the weighted sum of the residuals, and then debiases this intermediate variable to get the final variable.
:param c1: coefficient to the residual from the outcome model
:type float
:param c2: coefficient to the residual from the treatment model
:type float
:param d_y: residuals from the outcome model
:type list
:param d_t: residuals from the treatment model
:type list
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
U = []
for j in range(len(d_t)):
simulated_variable_mean = c1 * d_y[j] + c2 * d_t[j]
simulated_variable_stddev = 1
U.append(np.random.normal(simulated_variable_mean, simulated_variable_stddev, 1))
U = np.array(U)
model = sm.OLS(U, X)
results = model.fit()
U = U.reshape(
-1,
)
final_U = U - results.fittedvalues.values
    final_U = pd.Series(final_U)
return final_U
def sensitivity_linear_partial_r2(
data: pd.DataFrame,
estimate: CausalEstimate,
treatment_name: str,
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
percent_change_estimate: float = 1.0,
benchmark_common_causes: Optional[List[str]] = None,
significance_level: Optional[float] = None,
null_hypothesis_effect: Optional[float] = None,
plot_estimate: bool = True,
) -> LinearSensitivityAnalyzer:
"""Add an unobserved confounder for refutation using Linear partial R2 methond (Sensitivity Analysis for linear models).
:param data: pd.DataFrame: Data to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param treatment_name: str: Name of the treatment
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome in order to reduce the estimate by 100% i.e bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param significance_level: confidence interval for statistical inference(default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
"""
if not (isinstance(estimate.estimator, LinearRegressionEstimator)):
raise NotImplementedError("Currently only LinearRegressionEstimator is supported for Sensitivity Analysis")
if len(estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
if frac_strength_outcome == 1:
frac_strength_outcome = frac_strength_treatment
analyzer = LinearSensitivityAnalyzer(
estimator=estimate.estimator,
data=data,
treatment_name=treatment_name,
percent_change_estimate=percent_change_estimate,
significance_level=significance_level,
benchmark_common_causes=benchmark_common_causes,
null_hypothesis_effect=null_hypothesis_effect,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
common_causes_order=estimate.estimator._observed_common_causes.columns,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
def sensitivity_non_parametric_partial_r2(
estimate: CausalEstimate,
kappa_t: Optional[Union[float, np.ndarray]] = None,
kappa_y: Optional[Union[float, np.ndarray]] = None,
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
benchmark_common_causes: Optional[List[str]] = None,
plot_estimate: bool = True,
alpha_s_estimator_list: Optional[List] = None,
alpha_s_estimator_param_list: Optional[List[Dict]] = None,
g_s_estimator_list: Optional[List] = None,
g_s_estimator_param_list: Optional[List[Dict]] = None,
plugin_reisz: bool = False,
) -> Union[PartialLinearSensitivityAnalyzer, NonParametricSensitivityAnalyzer]:
"""Add an unobserved confounder for refutation using Non-parametric partial R2 methond (Sensitivity Analysis for non-parametric models).
:param estimate: CausalEstimate: Estimate to run the refutation
:param kappa_t: float, numpy.ndarray: Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param kappa_y: float, numpy.ndarray: Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param alpha_s_estimator_list: list of estimator objects for estimating alpha_s. These objects should have fit() and predict() methods (relevant only for non-parametric-partial-R2 method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
    :param plugin_reisz: bool: Flag on whether to use the plugin estimator or the nonparametric estimator for the reisz representer function (alpha_s).
"""
# If the estimator used is LinearDML, partially linear sensitivity analysis will be automatically chosen
if isinstance(estimate.estimator, dowhy.causal_estimators.econml.Econml):
if estimate.estimator._econml_methodname == "econml.dml.LinearDML":
analyzer = PartialLinearSensitivityAnalyzer(
estimator=estimate._estimator_object,
observed_common_causes=estimate.estimator._observed_common_causes,
treatment=estimate.estimator._treatment,
outcome=estimate.estimator._outcome,
alpha_s_estimator_param_list=alpha_s_estimator_param_list,
g_s_estimator_list=g_s_estimator_list,
g_s_estimator_param_list=g_s_estimator_param_list,
effect_strength_treatment=kappa_t,
effect_strength_outcome=kappa_y,
benchmark_common_causes=benchmark_common_causes,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
analyzer = NonParametricSensitivityAnalyzer(
estimator=estimate.estimator,
observed_common_causes=estimate.estimator._observed_common_causes,
treatment=estimate.estimator._treatment,
outcome=estimate.estimator._outcome,
alpha_s_estimator_list=alpha_s_estimator_list,
alpha_s_estimator_param_list=alpha_s_estimator_param_list,
g_s_estimator_list=g_s_estimator_list,
g_s_estimator_param_list=g_s_estimator_param_list,
effect_strength_treatment=kappa_t,
effect_strength_outcome=kappa_y,
benchmark_common_causes=benchmark_common_causes,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
theta_s=estimate.value,
plugin_reisz=plugin_reisz,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
def sensitivity_e_value(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_name: List[str],
outcome_name: List[str],
plot_estimate: bool = True,
) -> EValueSensitivityAnalyzer:
if not isinstance(estimate.estimator, RegressionEstimator):
raise NotImplementedError("E-Value sensitivity analysis is currently only implemented RegressionEstimator.")
if len(estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
analyzer = EValueSensitivityAnalyzer(
estimate=estimate,
estimand=target_estimand,
data=data,
treatment_name=treatment_name[0],
outcome_name=outcome_name[0],
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
def sensitivity_simulation(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_name: str,
outcome_name: str,
kappa_t: Optional[Union[float, np.ndarray]] = None,
kappa_y: Optional[Union[float, np.ndarray]] = None,
confounders_effect_on_treatment: str = "binary_flip",
confounders_effect_on_outcome: str = "linear",
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
plotmethod: Optional[str] = None,
show_progress_bar=False,
**_,
) -> CausalRefutation:
"""
    This function attempts to add an unobserved common cause to the outcome and the treatment. At present, we have implemented the behavior for one-dimensional
    continuous and binary variables. This function can take either single-valued inputs or a range of inputs; it then looks at the data type of the input
    and decides on the course of action.
:param data: pd.DataFrame: Data to run the refutation
:param target_estimand: IdentifiedEstimand: Identified estimand to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param treatment_name: str: Name of the treatment
:param outcome_name: str: Name of the outcome
:param kappa_t: float, numpy.ndarray: Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param kappa_y: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
    :param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:return: CausalRefuter: An object that contains the estimated effect and a new effect and the name of the refutation used.
"""
if kappa_t is None:
kappa_t = _infer_default_kappa_t(
data, target_estimand, treatment_name, confounders_effect_on_treatment, frac_strength_treatment
)
if kappa_y is None:
kappa_y = _infer_default_kappa_y(
data, target_estimand, outcome_name, confounders_effect_on_outcome, frac_strength_outcome
)
if not isinstance(kappa_t, (list, np.ndarray)) and not isinstance(
kappa_y, (list, np.ndarray)
): # Deal with single value inputs
new_data = copy.deepcopy(data)
new_data = _include_confounders_effect(
data,
new_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t,
confounders_effect_on_outcome,
outcome_name,
kappa_y,
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
refute.new_effect_array = np.array(new_effect.value)
refute.new_effect = new_effect.value
return refute
else: # Deal with multiple value inputs
if isinstance(kappa_t, (list, np.ndarray)) and isinstance(
kappa_y, (list, np.ndarray)
): # Deal with range inputs
# Get a 2D matrix of values
# x,y = np.meshgrid(self.kappa_t, self.kappa_y) # x,y are both MxN
results_matrix = np.random.rand(len(kappa_t), len(kappa_y)) # Matrix to hold all the results of NxM
orig_data = copy.deepcopy(data)
for i in tqdm(
range(len(kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
for j in range(len(kappa_y)):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t[i],
confounders_effect_on_outcome,
outcome_name,
kappa_y[j],
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value,
new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause",
)
results_matrix[i][j] = refute.new_effect # Populate the results
refute.new_effect_array = results_matrix
refute.new_effect = (np.min(results_matrix), np.max(results_matrix))
# Store the values into the refute object
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
oe = estimate.value
contour_levels = [oe / 4.0, oe / 2.0, (3.0 / 4) * oe, oe]
contour_levels.extend([0, np.min(results_matrix), np.max(results_matrix)])
if plotmethod == "contour":
cp = plt.contourf(kappa_y, kappa_t, results_matrix, levels=sorted(contour_levels))
# Adding a label on the contour line for the original estimate
fmt = {}
trueeffect_index = np.where(cp.levels == oe)[0][0]
fmt[cp.levels[trueeffect_index]] = "Estimated Effect"
# Label every other level using strings
plt.clabel(cp, [cp.levels[trueeffect_index]], inline=True, fmt=fmt)
plt.colorbar(cp)
elif plotmethod == "colormesh":
cp = plt.pcolormesh(kappa_y, kappa_t, results_matrix, shading="nearest")
plt.colorbar(cp, ticks=contour_levels)
ax.yaxis.set_ticks(kappa_t)
ax.xaxis.set_ticks(kappa_y)
plt.xticks(rotation=45)
ax.set_title("Effect of Unobserved Common Cause")
ax.set_ylabel("Value of Linear Constant on Treatment")
ax.set_xlabel("Value of Linear Constant on Outcome")
plt.show()
return refute
elif isinstance(kappa_t, (list, np.ndarray)):
outcomes = np.random.rand(len(kappa_t))
orig_data = copy.deepcopy(data)
for i in tqdm(
range(0, len(kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t[i],
confounders_effect_on_outcome,
outcome_name,
kappa_y,
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(kappa_t, outcomes)
plt.axhline(estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Treatment")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
elif isinstance(kappa_y, (list, np.ndarray)):
outcomes = np.random.rand(len(kappa_y))
orig_data = copy.deepcopy(data)
for i in tqdm(
range(0, len(kappa_y)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t,
confounders_effect_on_outcome,
outcome_name,
kappa_y[i],
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(kappa_y, outcomes)
plt.axhline(estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Outcome")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
| andresmor-ms | 133e7b9a4ed32aae8ab5f39a01eb02b3a4d1c0ba | e1652ec3c6606b1bb2dfe91ef830e4b4b566712d | I believe there is not a specific way of doing this, however the main `refute_estimate(data, target_estimand, estimate, ...)` partially serves this purpose, in the future if any developer wants the refuter to be callable through this function then the signature must comply with the type: `Callable[..., Union[CausalRefutation, List[CausalRefutation]]]`, which at least hints the developer to return a CausalRefutation or list of CausalRefutation as result. Or we can create as you say a protocol like this:
```python
class Refuter(Protocol):
    def __call__(self, data: pd.DataFrame, target_estimand: IdentifiedEstimand, estimate: CausalEstimate, *args, **kwargs) -> Union[CausalRefutation, List[CausalRefutation]]: ...
```
and then use it as the type hint in `refute_estimate(..., refuters: List[Refuter])`.
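For illustration, a function-style refuter matching that protocol could then be passed straight to `refute_estimate` (a minimal sketch; `my_refuter` and the `refuters` argument are hypothetical here):
```python
def my_refuter(data, target_estimand, estimate, *args, **kwargs) -> CausalRefutation:
    # compute and return a refutation result for the given estimate
    ...

refute_estimate(data, target_estimand, estimate, refuters=[my_refuter])
```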
https://mypy.readthedocs.io/en/latest/protocols.html#callback-protocols
I don't really think this is super useful as we also have the PR mechanism that we could use to enforce compatibility, but if you think we should add it like this, then let me know and I'll include it in this PR. | andresmor-ms | 311 |
py-why/dowhy | 672 | Functional api/refute estimate | * Refactor refuters into functions
* Rename functional_api notebook for clarity
* Add return types to identify_estimate
* Update `__init__.py` for imports
* Add joblib for bootstrap refuter
* Create `refute_estimate` function
* Add types for refuter parameters & return types | null | 2022-10-04 16:18:49+00:00 | 2022-10-07 04:30:22+00:00 | dowhy/causal_refuters/add_unobserved_common_cause.py | import copy
import logging
import math
import numpy as np
import pandas as pd
import scipy.stats
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from tqdm.auto import tqdm
import dowhy.causal_estimators.econml
from dowhy.causal_estimator import CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_estimators.regression_estimator import RegressionEstimator
from dowhy.causal_refuter import CausalRefutation, CausalRefuter
from dowhy.causal_refuters.evalue_sensitivity_analyzer import EValueSensitivityAnalyzer
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer
from dowhy.causal_refuters.non_parametric_sensitivity_analyzer import NonParametricSensitivityAnalyzer
from dowhy.causal_refuters.partial_linear_sensitivity_analyzer import PartialLinearSensitivityAnalyzer
class AddUnobservedCommonCause(CausalRefuter):
"""Add an unobserved confounder for refutation.
AddUnobservedCommonCause class supports three methods:
1) Simulation of an unobserved confounder
2) Linear partial R2 : Sensitivity Analysis for linear models.
    3) Non-Parametric partial R2 based : Sensitivity Analysis for non-parametric models.
Supports additional parameters that can be specified in the refute_estimate() method.
"""
def __init__(self, *args, **kwargs):
"""
Initialize the parameters required for the refuter.
For direct_simulation, if effect_strength_on_treatment or effect_strength_on_outcome is not
given, it is calculated automatically as a range between the
minimum and maximum effect strength of observed confounders on treatment
and outcome respectively.
:param simulation_method: The method to use for simulating effect of unobserved confounder. Possible values are ["direct-simulation", "linear-partial-R2", "non-parametric-partial-R2", "e-value"].
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param effect_strength_on_treatment: float, numpy.ndarray: [Used when simulation_method="direct-simulation"] Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param effect_strength_on_outcome: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param partial_r2_confounder_treatment: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param partial_r2_confounder_outcome: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
        :param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome in order to reduce the estimate by 100% i.e bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = False). (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param significance_level: confidence interval for statistical inference(default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param num_splits: number of splits for cross validation. (default = 5). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_data : shuffle data or not before splitting into folds (default = False). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_random_seed: seed for randomly shuffling data. (relevant only for non-parametric-partial-R2 simulation method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
"""
super().__init__(*args, **kwargs)
self.simulation_method = kwargs["simulation_method"] if "simulation_method" in kwargs else "direct-simulation"
self.effect_on_t = (
kwargs["confounders_effect_on_treatment"] if "confounders_effect_on_treatment" in kwargs else "binary_flip"
)
self.effect_on_y = (
kwargs["confounders_effect_on_outcome"] if "confounders_effect_on_outcome" in kwargs else "linear"
)
if self.simulation_method == "direct-simulation":
self.kappa_t = kwargs["effect_strength_on_treatment"] if "effect_strength_on_treatment" in kwargs else None
self.kappa_y = kwargs["effect_strength_on_outcome"] if "effect_strength_on_outcome" in kwargs else None
elif self.simulation_method in ["linear-partial-R2", "non-parametric-partial-R2"]:
self.kappa_t = (
kwargs["partial_r2_confounder_treatment"] if "partial_r2_confounder_treatment" in kwargs else None
)
self.kappa_y = (
kwargs["partial_r2_confounder_outcome"] if "partial_r2_confounder_outcome" in kwargs else None
)
elif self.simulation_method == "e-value":
pass
else:
raise ValueError(
"simulation method is not supported. Try direct-simulation, linear-partial-R2, non-parametric-partial-R2, or e-value"
)
self.frac_strength_treatment = (
kwargs["effect_fraction_on_treatment"] if "effect_fraction_on_treatment" in kwargs else 1
)
self.frac_strength_outcome = (
kwargs["effect_fraction_on_outcome"] if "effect_fraction_on_outcome" in kwargs else 1
)
self.plotmethod = kwargs["plotmethod"] if "plotmethod" in kwargs else "colormesh"
self.percent_change_estimate = kwargs["percent_change_estimate"] if "percent_change_estimate" in kwargs else 1.0
self.significance_level = kwargs["significance_level"] if "significance_level" in kwargs else 0.05
self.confounder_increases_estimate = (
kwargs["confounder_increases_estimate"] if "confounder_increases_estimate" in kwargs else False
)
self.benchmark_common_causes = (
kwargs["benchmark_common_causes"] if "benchmark_common_causes" in kwargs else None
)
self.null_hypothesis_effect = kwargs["null_hypothesis_effect"] if "null_hypothesis_effect" in kwargs else 0
self.plot_estimate = kwargs["plot_estimate"] if "plot_estimate" in kwargs else True
self.num_splits = kwargs["num_splits"] if "num_splits" in kwargs else 5
self.shuffle_data = kwargs["shuffle_data"] if "shuffle_data" in kwargs else False
self.shuffle_random_seed = kwargs["shuffle_random_seed"] if "shuffle_random_seed" in kwargs else None
self.alpha_s_estimator_param_list = (
kwargs["alpha_s_estimator_param_list"] if "alpha_s_estimator_param_list" in kwargs else None
)
self.alpha_s_estimator_list = kwargs["alpha_s_estimator_list"] if "alpha_s_estimator_list" in kwargs else None
self.g_s_estimator_list = kwargs["g_s_estimator_list"] if "g_s_estimator_list" in kwargs else None
self.g_s_estimator_param_list = (
kwargs["g_s_estimator_param_list"] if "g_s_estimator_param_list" in kwargs else None
)
self.plugin_reisz = kwargs["plugin_reisz"] if "plugin_reisz" in kwargs else False
self.logger = logging.getLogger(__name__)
def infer_default_kappa_t(self, len_kappa_t=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_t."
+ " Provide a value for kappa_t"
)
t = self._data[self._treatment_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_t == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
tmodel = LogisticRegression().fit(observed_common_causes, t)
tpred = tmodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
tcap = tmodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(tcap - tpred)) / tpred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_t == "linear":
# Estimating the regression coefficient from standardized features to t
corrcoef_var_t = np.corrcoef(observed_common_causes, t, rowvar=False)[-1, :-1]
std_dev_t = np.std(t)[0]
max_coeff = max(corrcoef_var_t) * std_dev_t
min_coeff = min(corrcoef_var_t) * std_dev_t
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_treatment)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_t
self.logger.info("(Min, Max) kappa_t for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _compute_min_max_coeff(self, min_coeff, max_coeff, effect_strength_fraction):
max_coeff = effect_strength_fraction * max_coeff
min_coeff = effect_strength_fraction * min_coeff
return min_coeff, max_coeff
def infer_default_kappa_y(self, len_kappa_y=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_y."
+ " Provide a value for kappa_y"
)
y = self._data[self._outcome_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_y == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
ymodel = LogisticRegression().fit(observed_common_causes, y)
ypred = ymodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
ycap = ymodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(ycap - ypred)) / ypred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_y == "linear":
corrcoef_var_y = np.corrcoef(observed_common_causes, y, rowvar=False)[-1, :-1]
std_dev_y = np.std(y)[0]
max_coeff = max(corrcoef_var_y) * std_dev_y
min_coeff = min(corrcoef_var_y) * std_dev_y
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_outcome)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_y
self.logger.info("(Min, Max) kappa_y for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def refute_estimate(self, show_progress_bar=False):
"""
        This function attempts to add an unobserved common cause to the outcome and the treatment. At present, the behavior is implemented for one-dimensional continuous
        and binary variables. This function can either take single-valued inputs or a range of inputs. The function then looks at the data type of the input and decides on the course of
        action.
        :return: CausalRefuter: An object that contains the estimated effect, the new effect, and the name of the refutation method used.
"""
if self.simulation_method == "linear-partial-R2":
if not (isinstance(self._estimate.estimator, LinearRegressionEstimator)):
raise NotImplementedError(
"Currently only LinearRegressionEstimator is supported for Sensitivity Analysis"
)
if len(self._estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
if self.frac_strength_outcome == 1:
self.frac_strength_outcome = self.frac_strength_treatment
analyzer = LinearSensitivityAnalyzer(
estimator=self._estimate.estimator,
data=self._data,
treatment_name=self._treatment_name,
percent_change_estimate=self.percent_change_estimate,
significance_level=self.significance_level,
benchmark_common_causes=self.benchmark_common_causes,
null_hypothesis_effect=self.null_hypothesis_effect,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
common_causes_order=self._estimate.estimator._observed_common_causes.columns,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.simulation_method == "non-parametric-partial-R2":
# If the estimator used is LinearDML, partially linear sensitivity analysis will be automatically chosen
if isinstance(self._estimate.estimator, dowhy.causal_estimators.econml.Econml):
if self._estimate.estimator._econml_methodname == "econml.dml.LinearDML":
analyzer = PartialLinearSensitivityAnalyzer(
estimator=self._estimate._estimator_object,
observed_common_causes=self._estimate.estimator._observed_common_causes,
treatment=self._estimate.estimator._treatment,
outcome=self._estimate.estimator._outcome,
alpha_s_estimator_param_list=self.alpha_s_estimator_param_list,
g_s_estimator_list=self.g_s_estimator_list,
g_s_estimator_param_list=self.g_s_estimator_param_list,
effect_strength_treatment=self.kappa_t,
effect_strength_outcome=self.kappa_y,
benchmark_common_causes=self.benchmark_common_causes,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
analyzer = NonParametricSensitivityAnalyzer(
estimator=self._estimate.estimator,
observed_common_causes=self._estimate.estimator._observed_common_causes,
treatment=self._estimate.estimator._treatment,
outcome=self._estimate.estimator._outcome,
alpha_s_estimator_list=self.alpha_s_estimator_list,
alpha_s_estimator_param_list=self.alpha_s_estimator_param_list,
g_s_estimator_list=self.g_s_estimator_list,
g_s_estimator_param_list=self.g_s_estimator_param_list,
effect_strength_treatment=self.kappa_t,
effect_strength_outcome=self.kappa_y,
benchmark_common_causes=self.benchmark_common_causes,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
theta_s=self._estimate.value,
plugin_reisz=self.plugin_reisz,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.simulation_method == "e-value":
if not isinstance(self._estimate.estimator, RegressionEstimator):
raise NotImplementedError(
"E-Value sensitivity analysis is currently only implemented RegressionEstimator."
)
if len(self._estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
analyzer = EValueSensitivityAnalyzer(
estimate=self._estimate,
estimand=self._target_estimand,
data=self._data,
treatment_name=self._treatment_name[0],
outcome_name=self._outcome_name[0],
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.kappa_t is None:
self.kappa_t = self.infer_default_kappa_t()
if self.kappa_y is None:
self.kappa_y = self.infer_default_kappa_y()
if not isinstance(self.kappa_t, (list, np.ndarray)) and not isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with single value inputs
new_data = copy.deepcopy(self._data)
new_data = self.include_confounders_effect(new_data, self.kappa_t, self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(new_data, self._target_estimand, self._estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
refute.new_effect_array = np.array(new_effect.value)
refute.new_effect = new_effect.value
refute.add_refuter(self)
return refute
else: # Deal with multiple value inputs
if isinstance(self.kappa_t, (list, np.ndarray)) and isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with range inputs
# Get a 2D matrix of values
# x,y = np.meshgrid(self.kappa_t, self.kappa_y) # x,y are both MxN
results_matrix = np.random.rand(
len(self.kappa_t), len(self.kappa_y)
) # Matrix to hold all the results of NxM
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
for j in range(len(self.kappa_y)):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y[j])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value,
new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause",
)
results_matrix[i][j] = refute.new_effect # Populate the results
refute.new_effect_array = results_matrix
refute.new_effect = (np.min(results_matrix), np.max(results_matrix))
# Store the values into the refute object
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
oe = self._estimate.value
contour_levels = [oe / 4.0, oe / 2.0, (3.0 / 4) * oe, oe]
contour_levels.extend([0, np.min(results_matrix), np.max(results_matrix)])
if self.plotmethod == "contour":
cp = plt.contourf(self.kappa_y, self.kappa_t, results_matrix, levels=sorted(contour_levels))
# Adding a label on the contour line for the original estimate
fmt = {}
trueeffect_index = np.where(cp.levels == oe)[0][0]
fmt[cp.levels[trueeffect_index]] = "Estimated Effect"
# Label every other level using strings
plt.clabel(cp, [cp.levels[trueeffect_index]], inline=True, fmt=fmt)
plt.colorbar(cp)
elif self.plotmethod == "colormesh":
cp = plt.pcolormesh(self.kappa_y, self.kappa_t, results_matrix, shading="nearest")
plt.colorbar(cp, ticks=contour_levels)
ax.yaxis.set_ticks(self.kappa_t)
ax.xaxis.set_ticks(self.kappa_y)
plt.xticks(rotation=45)
ax.set_title("Effect of Unobserved Common Cause")
ax.set_ylabel("Value of Linear Constant on Treatment")
ax.set_xlabel("Value of Linear Constant on Outcome")
plt.show()
return refute
elif isinstance(self.kappa_t, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_t))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_t, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Treatment")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
elif isinstance(self.kappa_y, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_y))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_y)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t, self.kappa_y[i])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_y, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Outcome")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
def include_confounders_effect(self, new_data, kappa_t, kappa_y):
"""
This function deals with the change in the value of the data due to the effect of the unobserved confounder.
In the case of a binary flip, we flip only if the random number is greater than the threshold set.
In the case of a linear effect, we use the variable as the linear regression constant.
:param new_data: pandas.DataFrame: The data to be changed due to the effects of the unobserved confounder.
:param kappa_t: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:param kappa_y: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:return: pandas.DataFrame: The DataFrame that includes the effects of the unobserved confounder.
"""
num_rows = self._data.shape[0]
stdnorm = scipy.stats.norm()
w_random = stdnorm.rvs(num_rows)
if self.effect_on_t == "binary_flip":
alpha = 2 * kappa_t - 1 if kappa_t >= 0.5 else 1 - 2 * kappa_t
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_t >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._treatment_name] = (
1 - new_data.loc[rel_interval <= w_random, self._treatment_name]
)
for tname in self._treatment_name:
if pd.api.types.is_bool_dtype(self._data[tname]):
new_data = new_data.astype({tname: "bool"}, copy=False)
elif self.effect_on_t == "linear":
confounder_t_effect = kappa_t * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._treatment_name] = new_data[self._treatment_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_t_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
if self.effect_on_y == "binary_flip":
alpha = 2 * kappa_y - 1 if kappa_y >= 0.5 else 1 - 2 * kappa_y
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_y >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._outcome_name] = (
1 - new_data.loc[rel_interval <= w_random, self._outcome_name]
)
for yname in self._outcome_name:
if pd.api.types.is_bool_dtype(self._data[yname]):
new_data = new_data.astype({yname: "bool"}, copy=False)
elif self.effect_on_y == "linear":
confounder_y_effect = (-1) * kappa_y * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._outcome_name] = new_data[self._outcome_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_y_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
return new_data
def include_simulated_confounder(self, convergence_threshold=0.1, c_star_max=1000):
"""
This function simulates an unobserved confounder based on the data using the following steps:
1. It calculates the "residuals" from the treatment and outcome model
i.) The outcome model has outcome as the dependent variable and all the observed variables including treatment as independent variables
ii.) The treatment model has treatment as the dependent variable and all the observed variables as independent variables.
2. U is an intermediate random variable drawn from the normal distribution with the weighted average of residuals as mean and a unit variance
U ~ N(c1*d_y + c2*d_t, 1)
where
*d_y and d_t are residuals from the treatment and outcome model
*c1 and c2 are coefficients to the residuals
3. The final U, which is the simulated unobserved confounder is obtained by debiasing the intermediate variable U by residualising it with X
Choosing the coefficients c1 and c2:
The coefficients are chosen based on these basic assumptions:
1. There is a hyperbolic relationship satisfying c1*c2 = c_star
2. c_star is chosen from a range of possible values based on the correlation of the obtained simulated variable with outcome and treatment.
3. The product of correlations with treatment and outcome should be at a minimum distance to the maximum correlations with treatment and outcome in any of the observed confounders
4. The ratio of the weights should be such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
:param c_star_max: The maximum possible value for the hyperbolic curve on which the coefficients to the residuals lie. It defaults to 1000 in the code if not specified by the user.
:type int
:param convergence_threshold: The threshold to check the plateauing of the correlation while selecting a c_star. It defaults to 0.1 in the code if not specified by the user
:type float
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
# Obtaining the list of observed variables
required_variables = True
observed_variables = self.choose_variables(required_variables)
observed_variables_with_treatment_and_outcome = observed_variables + self._treatment_name + self._outcome_name
# Taking a subset of the dataframe that has only observed variables
self._data = self._data[observed_variables_with_treatment_and_outcome]
# Residuals from the outcome model obtained by fitting a linear model
y = self._data[self._outcome_name[0]]
observed_variables_with_treatment = observed_variables + self._treatment_name
X = self._data[observed_variables_with_treatment]
model = sm.OLS(y, X.astype("float"))
results = model.fit()
residuals_y = y - results.fittedvalues
d_y = list(pd.Series(residuals_y))
# Residuals from the treatment model obtained by fitting a linear model
t = self._data[self._treatment_name[0]].astype("int64")
X = self._data[observed_variables]
model = sm.OLS(t, X)
results = model.fit()
residuals_t = t - results.fittedvalues
d_t = list(pd.Series(residuals_t))
# Initialising product_cor_metric_observed with a really low value as finding maximum
product_cor_metric_observed = -10000000000
for i in observed_variables:
current_obs_confounder = self._data[i]
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_obs_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_obs_confounder.corr(treatment_values)
product_cor_metric_current = correlation_y * correlation_t
if product_cor_metric_current >= product_cor_metric_observed:
product_cor_metric_observed = product_cor_metric_current
correlation_t_observed = correlation_t
correlation_y_observed = correlation_y
        # The user has an option to give the effect_strength_on_y and effect_strength_on_t, which are then used instead of the maximum correlation with treatment and outcome in the observed variables, as they specify the desired effect.
if self.kappa_t is not None:
correlation_t_observed = self.kappa_t
if self.kappa_y is not None:
correlation_y_observed = self.kappa_y
# Choosing a c_star based on the data.
        # The correlations stop increasing after a certain value of c_star, i.e., they plateau, and we choose c_star to be the value at which the plateau begins.
correlation_y_list = []
correlation_t_list = []
product_cor_metric_simulated_list = []
x_list = []
step = int(c_star_max / 10)
for i in range(0, int(c_star_max), step):
c1 = math.sqrt(i)
c2 = c1
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
correlation_y_list.append(correlation_y)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
correlation_t_list.append(correlation_t)
product_cor_metric_simulated = correlation_y * correlation_t
product_cor_metric_simulated_list.append(product_cor_metric_simulated)
x_list.append(i)
index = 1
while index < len(correlation_y_list):
if (correlation_y_list[index] - correlation_y_list[index - 1]) <= convergence_threshold:
c_star = x_list[index]
break
index = index + 1
        # Choosing c1 and c2 based on the hyperbolic relationship once c_star is chosen by going over various combinations of c1 and c2 values and choosing the combination
        # which maintains the minimum distance between the product of correlations of the simulated variable and the product of maximum correlations of one of the observed variables
# and additionally checks if the ratio of the weights are such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
# c1_final and c2_final are initialised to the values on the hyperbolic curve such that c1_final = c2_final and c1_final*c2_final = c_star
c1_final = math.sqrt(c_star)
c2_final = math.sqrt(c_star)
# initialising min_distance_between_product_cor_metrics to be a value greater than 1
min_distance_between_product_cor_metrics = 1.5
i = 0.05
threshold = c_star / 0.05
while i <= threshold:
c2 = i
c1 = c_star / c2
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
product_cor_metric_simulated = correlation_y * correlation_t
if min_distance_between_product_cor_metrics >= abs(
product_cor_metric_simulated - product_cor_metric_observed
):
min_distance_between_product_cor_metrics = abs(
product_cor_metric_simulated - product_cor_metric_observed
)
additional_condition = correlation_y_observed / correlation_t_observed
if ((c1 / c2) <= (additional_condition + 0.3 * additional_condition)) and (
(c1 / c2) >= (additional_condition - 0.3 * additional_condition)
): # choose minimum positive value
c1_final = c1
c2_final = c2
i = i * 1.5
"""#closed form solution
print("c_star_max before closed form", c_star_max)
if max_correlation_with_t == -1000:
c2 = 0
c1 = c_star_max
else:
additional_condition = abs(max_correlation_with_y/max_correlation_with_t)
print("additional_condition", additional_condition)
c2 = math.sqrt(c_star_max/additional_condition)
c1 = c_star_max/c2"""
final_U = self.generate_confounder_from_residuals(c1_final, c2_final, d_y, d_t, X)
return final_U
def generate_confounder_from_residuals(self, c1, c2, d_y, d_t, X):
"""
This function takes the residuals from the treatment and outcome model and their coefficients and simulates the intermediate random variable U by taking
the row wise normal distribution corresponding to each residual value and then debiasing the intermediate variable to get the final variable.
:param c1: coefficient to the residual from the outcome model
:type float
:param c2: coefficient to the residual from the treatment model
:type float
:param d_y: residuals from the outcome model
:type list
:param d_t: residuals from the treatment model
:type list
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
U = []
for j in range(len(d_t)):
simulated_variable_mean = c1 * d_y[j] + c2 * d_t[j]
simulated_variable_stddev = 1
U.append(np.random.normal(simulated_variable_mean, simulated_variable_stddev, 1))
U = np.array(U)
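        # Debias the intermediate variable: regress U on the observed covariates X and
        # subtract the fitted values so the final confounder is residualized with respect to X.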
model = sm.OLS(U, X)
results = model.fit()
U = U.reshape(
-1,
)
final_U = U - results.fittedvalues.values
        final_U = pd.Series(final_U)
return final_U
| import copy
import logging
import math
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import scipy.stats
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from tqdm.auto import tqdm
import dowhy.causal_estimators.econml
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_estimators.regression_estimator import RegressionEstimator
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.causal_refuter import CausalRefutation, CausalRefuter, choose_variables
from dowhy.causal_refuters.evalue_sensitivity_analyzer import EValueSensitivityAnalyzer
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer
from dowhy.causal_refuters.non_parametric_sensitivity_analyzer import NonParametricSensitivityAnalyzer
from dowhy.causal_refuters.partial_linear_sensitivity_analyzer import PartialLinearSensitivityAnalyzer
logger = logging.getLogger(__name__)
class AddUnobservedCommonCause(CausalRefuter):
"""Add an unobserved confounder for refutation.
    AddUnobservedCommonCause class supports four methods:
    1) Simulation of an unobserved confounder
    2) Linear partial R2 : Sensitivity Analysis for linear models.
    3) Non-Parametric partial R2 based : Sensitivity Analysis for non-parametric models.
    4) E-value : Sensitivity Analysis based on the E-value of the estimate.
Supports additional parameters that can be specified in the refute_estimate() method.
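    Example (a minimal sketch; assumes a fitted CausalModel ``model``, an
    ``identified_estimand`` and an ``estimate`` obtained from model.estimate_effect;
    the parameter values below are illustrative)::

        refutation = model.refute_estimate(
            identified_estimand,
            estimate,
            method_name="add_unobserved_common_cause",
            simulation_method="direct-simulation",
            confounders_effect_on_treatment="binary_flip",
            confounders_effect_on_outcome="linear",
            effect_strength_on_treatment=0.05,
            effect_strength_on_outcome=0.05,
        )
        print(refutation)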
"""
def __init__(self, *args, **kwargs):
"""
Initialize the parameters required for the refuter.
For direct_simulation, if effect_strength_on_treatment or effect_strength_on_outcome is not
given, it is calculated automatically as a range between the
minimum and maximum effect strength of observed confounders on treatment
and outcome respectively.
:param simulation_method: The method to use for simulating effect of unobserved confounder. Possible values are ["direct-simulation", "linear-partial-R2", "non-parametric-partial-R2", "e-value"].
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param effect_strength_on_treatment: float, numpy.ndarray: [Used when simulation_method="direct-simulation"] Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param effect_strength_on_outcome: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param partial_r2_confounder_treatment: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param partial_r2_confounder_outcome: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
        :param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome in order to reduce the estimate by 100% i.e bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = False). (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
        :param significance_level: significance level for statistical inference (default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param num_splits: number of splits for cross validation. (default = 5). (relevant only for non-parametric-partial-R2 simulation method)
        :param shuffle_data: shuffle data or not before splitting into folds (default = False). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_random_seed: seed for randomly shuffling data. (relevant only for non-parametric-partial-R2 simulation method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
"""
super().__init__(*args, **kwargs)
self.simulation_method = kwargs["simulation_method"] if "simulation_method" in kwargs else "direct-simulation"
self.effect_on_t = (
kwargs["confounders_effect_on_treatment"] if "confounders_effect_on_treatment" in kwargs else "binary_flip"
)
self.effect_on_y = (
kwargs["confounders_effect_on_outcome"] if "confounders_effect_on_outcome" in kwargs else "linear"
)
if self.simulation_method == "direct-simulation":
self.kappa_t = kwargs["effect_strength_on_treatment"] if "effect_strength_on_treatment" in kwargs else None
self.kappa_y = kwargs["effect_strength_on_outcome"] if "effect_strength_on_outcome" in kwargs else None
elif self.simulation_method in ["linear-partial-R2", "non-parametric-partial-R2"]:
self.kappa_t = (
kwargs["partial_r2_confounder_treatment"] if "partial_r2_confounder_treatment" in kwargs else None
)
self.kappa_y = (
kwargs["partial_r2_confounder_outcome"] if "partial_r2_confounder_outcome" in kwargs else None
)
elif self.simulation_method == "e-value":
pass
else:
raise ValueError(
"simulation method is not supported. Try direct-simulation, linear-partial-R2, non-parametric-partial-R2, or e-value"
)
self.frac_strength_treatment = (
kwargs["effect_fraction_on_treatment"] if "effect_fraction_on_treatment" in kwargs else 1
)
self.frac_strength_outcome = (
kwargs["effect_fraction_on_outcome"] if "effect_fraction_on_outcome" in kwargs else 1
)
self.plotmethod = kwargs["plotmethod"] if "plotmethod" in kwargs else "colormesh"
self.percent_change_estimate = kwargs["percent_change_estimate"] if "percent_change_estimate" in kwargs else 1.0
self.significance_level = kwargs["significance_level"] if "significance_level" in kwargs else 0.05
self.confounder_increases_estimate = (
kwargs["confounder_increases_estimate"] if "confounder_increases_estimate" in kwargs else False
)
self.benchmark_common_causes = (
kwargs["benchmark_common_causes"] if "benchmark_common_causes" in kwargs else None
)
self.null_hypothesis_effect = kwargs["null_hypothesis_effect"] if "null_hypothesis_effect" in kwargs else 0
self.plot_estimate = kwargs["plot_estimate"] if "plot_estimate" in kwargs else True
self.num_splits = kwargs["num_splits"] if "num_splits" in kwargs else 5
self.shuffle_data = kwargs["shuffle_data"] if "shuffle_data" in kwargs else False
self.shuffle_random_seed = kwargs["shuffle_random_seed"] if "shuffle_random_seed" in kwargs else None
self.alpha_s_estimator_param_list = (
kwargs["alpha_s_estimator_param_list"] if "alpha_s_estimator_param_list" in kwargs else None
)
self.alpha_s_estimator_list = kwargs["alpha_s_estimator_list"] if "alpha_s_estimator_list" in kwargs else None
self.g_s_estimator_list = kwargs["g_s_estimator_list"] if "g_s_estimator_list" in kwargs else None
self.g_s_estimator_param_list = (
kwargs["g_s_estimator_param_list"] if "g_s_estimator_param_list" in kwargs else None
)
self.plugin_reisz = kwargs["plugin_reisz"] if "plugin_reisz" in kwargs else False
self.logger = logging.getLogger(__name__)
def refute_estimate(self, show_progress_bar=False):
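        """Refute the estimate by simulating or bounding an unobserved common cause.

        Dispatches to the analysis selected by ``simulation_method``:
        "linear-partial-R2", "non-parametric-partial-R2", "e-value", or
        "direct-simulation" (the default).

        :param show_progress_bar: Whether to show a progress bar (used only by direct simulation).
        :return: a sensitivity analyzer object for the partial-R2 and e-value methods,
            or a CausalRefutation object for direct simulation.
        """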
if self.simulation_method == "linear-partial-R2":
return sensitivity_linear_partial_r2(
self._data,
self._estimate,
self._treatment_name,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.percent_change_estimate,
self.benchmark_common_causes,
self.significance_level,
self.null_hypothesis_effect,
self.plot_estimate,
)
elif self.simulation_method == "non-parametric-partial-R2":
return sensitivity_non_parametric_partial_r2(
self._estimate,
self.kappa_t,
self.kappa_y,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.benchmark_common_causes,
self.plot_estimate,
self.alpha_s_estimator_list,
self.alpha_s_estimator_param_list,
self.g_s_estimator_list,
self.g_s_estimator_param_list,
self.plugin_reisz,
)
elif self.simulation_method == "e-value":
return sensitivity_e_value(
self._data,
self._target_estimand,
self._estimate,
self._treatment_name,
self._outcome_name,
self.plot_estimate,
)
elif self.simulation_method == "direct-simulation":
refute = sensitivity_simulation(
self._data,
self._target_estimand,
self._estimate,
self._treatment_name,
self._outcome_name,
self.kappa_t,
self.kappa_y,
self.effect_on_t,
self.effect_on_y,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.plotmethod,
show_progress_bar,
)
refute.add_refuter(self)
return refute
def _infer_default_kappa_t(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
treatment_name: List[str],
effect_on_t: str,
frac_strength_treatment: float,
len_kappa_t: int = 10,
):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_t."
+ " Provide a value for kappa_t"
)
t = data[treatment_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if effect_on_t == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
tmodel = LogisticRegression().fit(observed_common_causes, t)
tpred = tmodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
tcap = tmodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(tcap - tpred)) / tpred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif effect_on_t == "linear":
# Estimating the regression coefficient from standardized features to t
corrcoef_var_t = np.corrcoef(observed_common_causes, t, rowvar=False)[-1, :-1]
std_dev_t = np.std(t)[0]
max_coeff = max(corrcoef_var_t) * std_dev_t
min_coeff = min(corrcoef_var_t) * std_dev_t
else:
raise NotImplementedError("'" + effect_on_t + "' method not supported for confounders' effect on treatment")
min_coeff, max_coeff = _compute_min_max_coeff(min_coeff, max_coeff, frac_strength_treatment)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_t
logger.info("(Min, Max) kappa_t for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _compute_min_max_coeff(min_coeff: float, max_coeff: float, effect_strength_fraction: np.ndarray):
max_coeff = effect_strength_fraction * max_coeff
min_coeff = effect_strength_fraction * min_coeff
return min_coeff, max_coeff
def _infer_default_kappa_y(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
outcome_name: List[str],
effect_on_y: str,
frac_strength_outcome: float,
len_kappa_y: int = 10,
):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_y."
+ " Provide a value for kappa_y"
)
y = data[outcome_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if effect_on_y == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
ymodel = LogisticRegression().fit(observed_common_causes, y)
ypred = ymodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
ycap = ymodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(ycap - ypred)) / ypred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif effect_on_y == "linear":
corrcoef_var_y = np.corrcoef(observed_common_causes, y, rowvar=False)[-1, :-1]
std_dev_y = np.std(y)[0]
max_coeff = max(corrcoef_var_y) * std_dev_y
min_coeff = min(corrcoef_var_y) * std_dev_y
else:
raise NotImplementedError("'" + effect_on_y + "' method not supported for confounders' effect on outcome")
min_coeff, max_coeff = _compute_min_max_coeff(min_coeff, max_coeff, frac_strength_outcome)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_y
logger.info("(Min, Max) kappa_y for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _include_confounders_effect(
data: pd.DataFrame,
new_data: pd.DataFrame,
effect_on_t: str,
treatment_name: str,
kappa_t: float,
effect_on_y: str,
outcome_name: str,
kappa_y: float,
):
"""
This function deals with the change in the value of the data due to the effect of the unobserved confounder.
In the case of a binary flip, we flip only if the random number is greater than the threshold set.
In the case of a linear effect, we use the variable as the linear regression constant.
:param new_data: pandas.DataFrame: The data to be changed due to the effects of the unobserved confounder.
:param kappa_t: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:param kappa_y: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:return: pandas.DataFrame: The DataFrame that includes the effects of the unobserved confounder.
"""
num_rows = data.shape[0]
stdnorm = scipy.stats.norm()
w_random = stdnorm.rvs(num_rows)
if effect_on_t == "binary_flip":
alpha = 2 * kappa_t - 1 if kappa_t >= 0.5 else 1 - 2 * kappa_t
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_t >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, treatment_name] = (
1 - new_data.loc[rel_interval <= w_random, treatment_name]
)
for tname in treatment_name:
if pd.api.types.is_bool_dtype(data[tname]):
new_data = new_data.astype({tname: "bool"}, copy=False)
elif effect_on_t == "linear":
confounder_t_effect = kappa_t * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[treatment_name] = new_data[treatment_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_t_effect
)
else:
raise NotImplementedError("'" + effect_on_t + "' method not supported for confounders' effect on treatment")
if effect_on_y == "binary_flip":
alpha = 2 * kappa_y - 1 if kappa_y >= 0.5 else 1 - 2 * kappa_y
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_y >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, outcome_name] = 1 - new_data.loc[rel_interval <= w_random, outcome_name]
for yname in outcome_name:
if pd.api.types.is_bool_dtype(data[yname]):
new_data = new_data.astype({yname: "bool"}, copy=False)
elif effect_on_y == "linear":
confounder_y_effect = (-1) * kappa_y * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[outcome_name] = new_data[outcome_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_y_effect
)
else:
raise NotImplementedError("'" + effect_on_y + "' method not supported for confounders' effect on outcome")
return new_data
def include_simulated_confounder(
data: pd.DataFrame,
    treatment_name: List[str],
    outcome_name: List[str],
kappa_t: float,
kappa_y: float,
convergence_threshold: float = 0.1,
c_star_max: int = 1000,
):
"""
This function simulates an unobserved confounder based on the data using the following steps:
1. It calculates the "residuals" from the treatment and outcome model
i.) The outcome model has outcome as the dependent variable and all the observed variables including treatment as independent variables
ii.) The treatment model has treatment as the dependent variable and all the observed variables as independent variables.
2. U is an intermediate random variable drawn from the normal distribution with the weighted average of residuals as mean and a unit variance
U ~ N(c1*d_y + c2*d_t, 1)
where
*d_y and d_t are residuals from the treatment and outcome model
*c1 and c2 are coefficients to the residuals
3. The final U, which is the simulated unobserved confounder is obtained by debiasing the intermediate variable U by residualising it with X
Choosing the coefficients c1 and c2:
The coefficients are chosen based on these basic assumptions:
1. There is a hyperbolic relationship satisfying c1*c2 = c_star
2. c_star is chosen from a range of possible values based on the correlation of the obtained simulated variable with outcome and treatment.
3. The product of correlations with treatment and outcome should be at a minimum distance to the maximum correlations with treatment and outcome in any of the observed confounders
4. The ratio of the weights should be such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
:param c_star_max: The maximum possible value for the hyperbolic curve on which the coefficients to the residuals lie. It defaults to 1000 in the code if not specified by the user.
:type int
:param convergence_threshold: The threshold to check the plateauing of the correlation while selecting a c_star. It defaults to 0.1 in the code if not specified by the user
:type float
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
# Obtaining the list of observed variables
required_variables = True
observed_variables = choose_variables(required_variables)
observed_variables_with_treatment_and_outcome = observed_variables + treatment_name + outcome_name
# Taking a subset of the dataframe that has only observed variables
data = data[observed_variables_with_treatment_and_outcome]
# Residuals from the outcome model obtained by fitting a linear model
y = data[outcome_name[0]]
observed_variables_with_treatment = observed_variables + treatment_name
X = data[observed_variables_with_treatment]
model = sm.OLS(y, X.astype("float"))
results = model.fit()
residuals_y = y - results.fittedvalues
d_y = list(pd.Series(residuals_y))
# Residuals from the treatment model obtained by fitting a linear model
t = data[treatment_name[0]].astype("int64")
X = data[observed_variables]
model = sm.OLS(t, X)
results = model.fit()
residuals_t = t - results.fittedvalues
d_t = list(pd.Series(residuals_t))
# Initialising product_cor_metric_observed with a really low value as finding maximum
product_cor_metric_observed = -10000000000
for i in observed_variables:
current_obs_confounder = data[i]
outcome_values = data[outcome_name[0]]
correlation_y = current_obs_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_obs_confounder.corr(treatment_values)
product_cor_metric_current = correlation_y * correlation_t
if product_cor_metric_current >= product_cor_metric_observed:
product_cor_metric_observed = product_cor_metric_current
correlation_t_observed = correlation_t
correlation_y_observed = correlation_y
    # The user has an option to give the effect_strength_on_y and effect_strength_on_t, which are then used instead of the maximum correlation with treatment and outcome in the observed variables, as they specify the desired effect.
if kappa_t is not None:
correlation_t_observed = kappa_t
if kappa_y is not None:
correlation_y_observed = kappa_y
# Choosing a c_star based on the data.
    # The correlations stop increasing after a certain value of c_star, i.e., they plateau, and we choose c_star to be the value at which the plateau begins.
correlation_y_list = []
correlation_t_list = []
product_cor_metric_simulated_list = []
x_list = []
step = int(c_star_max / 10)
for i in range(0, int(c_star_max), step):
c1 = math.sqrt(i)
c2 = c1
final_U = _generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = data[outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
correlation_y_list.append(correlation_y)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
correlation_t_list.append(correlation_t)
product_cor_metric_simulated = correlation_y * correlation_t
product_cor_metric_simulated_list.append(product_cor_metric_simulated)
x_list.append(i)
index = 1
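    # Pick c_star as the first value at which the gain in correlation with the outcome
    # drops below convergence_threshold, i.e., where the correlation curve starts to plateau.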
while index < len(correlation_y_list):
if (correlation_y_list[index] - correlation_y_list[index - 1]) <= convergence_threshold:
c_star = x_list[index]
break
index = index + 1
    # Choosing c1 and c2 based on the hyperbolic relationship once c_star is chosen by going over various combinations of c1 and c2 values and choosing the combination
    # which maintains the minimum distance between the product of correlations of the simulated variable and the product of maximum correlations of one of the observed variables
# and additionally checks if the ratio of the weights are such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
# c1_final and c2_final are initialised to the values on the hyperbolic curve such that c1_final = c2_final and c1_final*c2_final = c_star
c1_final = math.sqrt(c_star)
c2_final = math.sqrt(c_star)
# initialising min_distance_between_product_cor_metrics to be a value greater than 1
min_distance_between_product_cor_metrics = 1.5
i = 0.05
threshold = c_star / 0.05
while i <= threshold:
c2 = i
c1 = c_star / c2
final_U = _generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = data[outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
product_cor_metric_simulated = correlation_y * correlation_t
if min_distance_between_product_cor_metrics >= abs(product_cor_metric_simulated - product_cor_metric_observed):
min_distance_between_product_cor_metrics = abs(product_cor_metric_simulated - product_cor_metric_observed)
additional_condition = correlation_y_observed / correlation_t_observed
if ((c1 / c2) <= (additional_condition + 0.3 * additional_condition)) and (
(c1 / c2) >= (additional_condition - 0.3 * additional_condition)
): # choose minimum positive value
c1_final = c1
c2_final = c2
i = i * 1.5
"""#closed form solution
print("c_star_max before closed form", c_star_max)
if max_correlation_with_t == -1000:
c2 = 0
c1 = c_star_max
else:
additional_condition = abs(max_correlation_with_y/max_correlation_with_t)
print("additional_condition", additional_condition)
c2 = math.sqrt(c_star_max/additional_condition)
c1 = c_star_max/c2"""
final_U = _generate_confounder_from_residuals(c1_final, c2_final, d_y, d_t, X)
return final_U
def _generate_confounder_from_residuals(c1, c2, d_y, d_t, X):
"""
This function takes the residuals from the treatment and outcome model and their coefficients and simulates the intermediate random variable U by taking
the row wise normal distribution corresponding to each residual value and then debiasing the intermediate variable to get the final variable.
:param c1: coefficient to the residual from the outcome model
:type float
:param c2: coefficient to the residual from the treatment model
:type float
:param d_y: residuals from the outcome model
:type list
:param d_t: residuals from the treatment model
:type list
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
U = []
for j in range(len(d_t)):
simulated_variable_mean = c1 * d_y[j] + c2 * d_t[j]
simulated_variable_stddev = 1
U.append(np.random.normal(simulated_variable_mean, simulated_variable_stddev, 1))
U = np.array(U)
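    # Debias the intermediate variable: regress U on the observed covariates X and
    # subtract the fitted values so the final confounder is residualized with respect to X.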
model = sm.OLS(U, X)
results = model.fit()
U = U.reshape(
-1,
)
final_U = U - results.fittedvalues.values
    final_U = pd.Series(final_U)
return final_U
def sensitivity_linear_partial_r2(
data: pd.DataFrame,
estimate: CausalEstimate,
treatment_name: str,
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
percent_change_estimate: float = 1.0,
benchmark_common_causes: Optional[List[str]] = None,
significance_level: Optional[float] = None,
null_hypothesis_effect: Optional[float] = None,
plot_estimate: bool = True,
) -> LinearSensitivityAnalyzer:
"""Add an unobserved confounder for refutation using Linear partial R2 methond (Sensitivity Analysis for linear models).
:param data: pd.DataFrame: Data to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param treatment_name: str: Name of the treatment
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome in order to reduce the estimate by 100% i.e bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
    :param significance_level: significance level for statistical inference (default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
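
    Example (a minimal sketch mirroring how AddUnobservedCommonCause.refute_estimate
    calls this function; ``df``, ``estimate`` and the column names are illustrative)::

        analyzer = sensitivity_linear_partial_r2(
            data=df,
            estimate=estimate,
            treatment_name=["v0"],
            benchmark_common_causes=["W0"],
            significance_level=0.05,
            null_hypothesis_effect=0,
        )
        # check_sensitivity() has already been called on the returned analyzer.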
"""
if not (isinstance(estimate.estimator, LinearRegressionEstimator)):
raise NotImplementedError("Currently only LinearRegressionEstimator is supported for Sensitivity Analysis")
if len(estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
if frac_strength_outcome == 1:
frac_strength_outcome = frac_strength_treatment
analyzer = LinearSensitivityAnalyzer(
estimator=estimate.estimator,
data=data,
treatment_name=treatment_name,
percent_change_estimate=percent_change_estimate,
significance_level=significance_level,
benchmark_common_causes=benchmark_common_causes,
null_hypothesis_effect=null_hypothesis_effect,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
common_causes_order=estimate.estimator._observed_common_causes.columns,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
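# A minimal, hypothetical usage sketch (not part of the original module): `linear_estimate` is
# assumed to be a CausalEstimate produced by the "backdoor.linear_regression" estimator, and
# "v0" / "W3" are illustrative treatment and benchmark-confounder column names.
def _example_sensitivity_linear_partial_r2(data, linear_estimate):
    return sensitivity_linear_partial_r2(
        data=data,
        estimate=linear_estimate,
        treatment_name="v0",
        percent_change_estimate=1.0,
        benchmark_common_causes=["W3"],
        significance_level=0.05,
        frac_strength_treatment=1.0,
        frac_strength_outcome=1.0,
        plot_estimate=False,
    )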
def sensitivity_non_parametric_partial_r2(
estimate: CausalEstimate,
kappa_t: Optional[Union[float, np.ndarray]] = None,
kappa_y: Optional[Union[float, np.ndarray]] = None,
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
benchmark_common_causes: Optional[List[str]] = None,
plot_estimate: bool = True,
alpha_s_estimator_list: Optional[List] = None,
alpha_s_estimator_param_list: Optional[List[Dict]] = None,
g_s_estimator_list: Optional[List] = None,
g_s_estimator_param_list: Optional[List[Dict]] = None,
plugin_reisz: bool = False,
) -> Union[PartialLinearSensitivityAnalyzer, NonParametricSensitivityAnalyzer]:
"""Add an unobserved confounder for refutation using Non-parametric partial R2 methond (Sensitivity Analysis for non-parametric models).
:param estimate: CausalEstimate: Estimate to run the refutation
:param kappa_t: float, numpy.ndarray: Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param kappa_y: float, numpy.ndarray: Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param alpha_s_estimator_list: list of estimator objects for estimating alpha_s. These objects should have fit() and predict() methods (relevant only for non-parametric-partial-R2 method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
    :param plugin_reisz: bool: Flag on whether to use the plugin estimator or the nonparametric estimator for the reisz representer function (alpha_s).
"""
# If the estimator used is LinearDML, partially linear sensitivity analysis will be automatically chosen
if isinstance(estimate.estimator, dowhy.causal_estimators.econml.Econml):
if estimate.estimator._econml_methodname == "econml.dml.LinearDML":
analyzer = PartialLinearSensitivityAnalyzer(
estimator=estimate._estimator_object,
observed_common_causes=estimate.estimator._observed_common_causes,
treatment=estimate.estimator._treatment,
outcome=estimate.estimator._outcome,
alpha_s_estimator_param_list=alpha_s_estimator_param_list,
g_s_estimator_list=g_s_estimator_list,
g_s_estimator_param_list=g_s_estimator_param_list,
effect_strength_treatment=kappa_t,
effect_strength_outcome=kappa_y,
benchmark_common_causes=benchmark_common_causes,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
analyzer = NonParametricSensitivityAnalyzer(
estimator=estimate.estimator,
observed_common_causes=estimate.estimator._observed_common_causes,
treatment=estimate.estimator._treatment,
outcome=estimate.estimator._outcome,
alpha_s_estimator_list=alpha_s_estimator_list,
alpha_s_estimator_param_list=alpha_s_estimator_param_list,
g_s_estimator_list=g_s_estimator_list,
g_s_estimator_param_list=g_s_estimator_param_list,
effect_strength_treatment=kappa_t,
effect_strength_outcome=kappa_y,
benchmark_common_causes=benchmark_common_causes,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
theta_s=estimate.value,
plugin_reisz=plugin_reisz,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
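# A minimal, hypothetical usage sketch (not part of the original module): `dml_estimate` is assumed
# to be a CausalEstimate obtained from a non-parametric (e.g. EconML DML) estimator; the partial-R2
# values and the benchmark column name "W1" are illustrative assumptions.
def _example_sensitivity_non_parametric_partial_r2(dml_estimate):
    return sensitivity_non_parametric_partial_r2(
        estimate=dml_estimate,
        kappa_t=0.05,
        kappa_y=0.05,
        benchmark_common_causes=["W1"],
        plot_estimate=False,
    )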
def sensitivity_e_value(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_name: List[str],
outcome_name: List[str],
plot_estimate: bool = True,
) -> EValueSensitivityAnalyzer:
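    """Compute the E-value for a regression-based estimate: the minimum strength of association that an
    unobserved confounder would need to have with both the treatment and the outcome to explain away the estimate.
    :param data: pd.DataFrame: Data to run the refutation
    :param target_estimand: IdentifiedEstimand: Identified estimand to run the refutation
    :param estimate: CausalEstimate: Estimate to run the refutation
    :param treatment_name: list containing the name of the treatment
    :param outcome_name: list containing the name of the outcome
    :param plot_estimate: Generate a plot of the E-value while performing sensitivity analysis. (default = True).
    """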
if not isinstance(estimate.estimator, RegressionEstimator):
raise NotImplementedError("E-Value sensitivity analysis is currently only implemented RegressionEstimator.")
if len(estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
analyzer = EValueSensitivityAnalyzer(
estimate=estimate,
estimand=target_estimand,
data=data,
treatment_name=treatment_name[0],
outcome_name=outcome_name[0],
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
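# A minimal, hypothetical usage sketch (not part of the original module): the estimand and estimate
# are assumed to come from the usual identify/estimate steps with a regression estimator, and
# "v0" / "y" are illustrative treatment and outcome column names.
def _example_sensitivity_e_value(data, identified_estimand, regression_estimate):
    return sensitivity_e_value(
        data=data,
        target_estimand=identified_estimand,
        estimate=regression_estimate,
        treatment_name=["v0"],
        outcome_name=["y"],
        plot_estimate=False,
    )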
def sensitivity_simulation(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_name: str,
outcome_name: str,
kappa_t: Optional[Union[float, np.ndarray]] = None,
kappa_y: Optional[Union[float, np.ndarray]] = None,
confounders_effect_on_treatment: str = "binary_flip",
confounders_effect_on_outcome: str = "linear",
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
plotmethod: Optional[str] = None,
show_progress_bar=False,
**_,
) -> CausalRefutation:
"""
    This function attempts to add an unobserved common cause to the outcome and the treatment. At present, this is implemented only for one-dimensional continuous
    and binary variables. The function can take either single-valued inputs or a range of inputs; it then looks at the data type of the input and decides on the course of
    action.
:param data: pd.DataFrame: Data to run the refutation
:param target_estimand: IdentifiedEstimand: Identified estimand to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param treatment_name: str: Name of the treatment
:param outcome_name: str: Name of the outcome
:param kappa_t: float, numpy.ndarray: Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param kappa_y: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
    :param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
    :return: CausalRefutation: An object that contains the estimated effect, the new effect, and the name of the refutation used.
"""
if kappa_t is None:
kappa_t = _infer_default_kappa_t(
data, target_estimand, treatment_name, confounders_effect_on_treatment, frac_strength_treatment
)
if kappa_y is None:
kappa_y = _infer_default_kappa_y(
data, target_estimand, outcome_name, confounders_effect_on_outcome, frac_strength_outcome
)
if not isinstance(kappa_t, (list, np.ndarray)) and not isinstance(
kappa_y, (list, np.ndarray)
): # Deal with single value inputs
new_data = copy.deepcopy(data)
new_data = _include_confounders_effect(
data,
new_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t,
confounders_effect_on_outcome,
outcome_name,
kappa_y,
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
refute.new_effect_array = np.array(new_effect.value)
refute.new_effect = new_effect.value
return refute
else: # Deal with multiple value inputs
if isinstance(kappa_t, (list, np.ndarray)) and isinstance(
kappa_y, (list, np.ndarray)
): # Deal with range inputs
# Get a 2D matrix of values
            # x, y = np.meshgrid(kappa_t, kappa_y)  # x, y are both MxN
results_matrix = np.random.rand(len(kappa_t), len(kappa_y)) # Matrix to hold all the results of NxM
orig_data = copy.deepcopy(data)
for i in tqdm(
range(len(kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
for j in range(len(kappa_y)):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t[i],
confounders_effect_on_outcome,
outcome_name,
kappa_y[j],
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value,
new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause",
)
results_matrix[i][j] = refute.new_effect # Populate the results
refute.new_effect_array = results_matrix
refute.new_effect = (np.min(results_matrix), np.max(results_matrix))
# Store the values into the refute object
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
oe = estimate.value
contour_levels = [oe / 4.0, oe / 2.0, (3.0 / 4) * oe, oe]
contour_levels.extend([0, np.min(results_matrix), np.max(results_matrix)])
if plotmethod == "contour":
cp = plt.contourf(kappa_y, kappa_t, results_matrix, levels=sorted(contour_levels))
# Adding a label on the contour line for the original estimate
fmt = {}
trueeffect_index = np.where(cp.levels == oe)[0][0]
fmt[cp.levels[trueeffect_index]] = "Estimated Effect"
# Label every other level using strings
plt.clabel(cp, [cp.levels[trueeffect_index]], inline=True, fmt=fmt)
plt.colorbar(cp)
elif plotmethod == "colormesh":
cp = plt.pcolormesh(kappa_y, kappa_t, results_matrix, shading="nearest")
plt.colorbar(cp, ticks=contour_levels)
ax.yaxis.set_ticks(kappa_t)
ax.xaxis.set_ticks(kappa_y)
plt.xticks(rotation=45)
ax.set_title("Effect of Unobserved Common Cause")
ax.set_ylabel("Value of Linear Constant on Treatment")
ax.set_xlabel("Value of Linear Constant on Outcome")
plt.show()
return refute
elif isinstance(kappa_t, (list, np.ndarray)):
outcomes = np.random.rand(len(kappa_t))
orig_data = copy.deepcopy(data)
for i in tqdm(
range(0, len(kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t[i],
confounders_effect_on_outcome,
outcome_name,
kappa_y,
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(kappa_t, outcomes)
plt.axhline(estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Treatment")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
elif isinstance(kappa_y, (list, np.ndarray)):
outcomes = np.random.rand(len(kappa_y))
orig_data = copy.deepcopy(data)
for i in tqdm(
range(0, len(kappa_y)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t,
confounders_effect_on_outcome,
outcome_name,
kappa_y[i],
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(kappa_y, outcomes)
plt.axhline(estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Outcome")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
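# A minimal, hypothetical usage sketch (not part of the original module): sweeps a small grid of
# simulated confounder effect strengths on treatment and outcome; the column names and kappa ranges
# are illustrative assumptions.
def _example_sensitivity_simulation(data, identified_estimand, estimate):
    return sensitivity_simulation(
        data=data,
        target_estimand=identified_estimand,
        estimate=estimate,
        treatment_name="v0",
        outcome_name="y",
        kappa_t=np.arange(0.0, 0.05, 0.01),
        kappa_y=np.arange(0.0, 0.05, 0.01),
        confounders_effect_on_treatment="binary_flip",
        confounders_effect_on_outcome="linear",
        plotmethod=None,
    )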
| andresmor-ms | 133e7b9a4ed32aae8ab5f39a01eb02b3a4d1c0ba | e1652ec3c6606b1bb2dfe91ef830e4b4b566712d | This sound good. We can follow this policy throughout. | amit-sharma | 312 |
py-why/dowhy | 672 | Functional api/refute estimate | * Refactor refuters into functions
* Rename functional_api notebook for clarity
* Add return types to identify_estimate
* Update `__init__.py` for imports
* Add joblib for bootstrap refuter
* Create `refute_estimate` function
* Add types for refuter parameters & return types | null | 2022-10-04 16:18:49+00:00 | 2022-10-07 04:30:22+00:00 | dowhy/causal_refuters/add_unobserved_common_cause.py | import copy
import logging
import math
import numpy as np
import pandas as pd
import scipy.stats
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from tqdm.auto import tqdm
import dowhy.causal_estimators.econml
from dowhy.causal_estimator import CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_estimators.regression_estimator import RegressionEstimator
from dowhy.causal_refuter import CausalRefutation, CausalRefuter
from dowhy.causal_refuters.evalue_sensitivity_analyzer import EValueSensitivityAnalyzer
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer
from dowhy.causal_refuters.non_parametric_sensitivity_analyzer import NonParametricSensitivityAnalyzer
from dowhy.causal_refuters.partial_linear_sensitivity_analyzer import PartialLinearSensitivityAnalyzer
class AddUnobservedCommonCause(CausalRefuter):
"""Add an unobserved confounder for refutation.
AddUnobservedCommonCause class supports three methods:
1) Simulation of an unobserved confounder
2) Linear partial R2 : Sensitivity Analysis for linear models.
    3) Non-Parametric partial R2 based : Sensitivity Analysis for non-parametric models.
Supports additional parameters that can be specified in the refute_estimate() method.
"""
def __init__(self, *args, **kwargs):
"""
Initialize the parameters required for the refuter.
For direct_simulation, if effect_strength_on_treatment or effect_strength_on_outcome is not
given, it is calculated automatically as a range between the
minimum and maximum effect strength of observed confounders on treatment
and outcome respectively.
:param simulation_method: The method to use for simulating effect of unobserved confounder. Possible values are ["direct-simulation", "linear-partial-R2", "non-parametric-partial-R2", "e-value"].
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param effect_strength_on_treatment: float, numpy.ndarray: [Used when simulation_method="direct-simulation"] Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param effect_strength_on_outcome: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param partial_r2_confounder_treatment: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param partial_r2_confounder_outcome: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
        :param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
        if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome in order to reduce the estimate by 100% i.e. bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = False). (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
        :param significance_level: confidence interval for statistical inference (default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param num_splits: number of splits for cross validation. (default = 5). (relevant only for non-parametric-partial-R2 simulation method)
        :param shuffle_data: whether or not to shuffle the data before splitting into folds (default = False). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_random_seed: seed for randomly shuffling data. (relevant only for non-parametric-partial-R2 simulation method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
"""
super().__init__(*args, **kwargs)
self.simulation_method = kwargs["simulation_method"] if "simulation_method" in kwargs else "direct-simulation"
self.effect_on_t = (
kwargs["confounders_effect_on_treatment"] if "confounders_effect_on_treatment" in kwargs else "binary_flip"
)
self.effect_on_y = (
kwargs["confounders_effect_on_outcome"] if "confounders_effect_on_outcome" in kwargs else "linear"
)
if self.simulation_method == "direct-simulation":
self.kappa_t = kwargs["effect_strength_on_treatment"] if "effect_strength_on_treatment" in kwargs else None
self.kappa_y = kwargs["effect_strength_on_outcome"] if "effect_strength_on_outcome" in kwargs else None
elif self.simulation_method in ["linear-partial-R2", "non-parametric-partial-R2"]:
self.kappa_t = (
kwargs["partial_r2_confounder_treatment"] if "partial_r2_confounder_treatment" in kwargs else None
)
self.kappa_y = (
kwargs["partial_r2_confounder_outcome"] if "partial_r2_confounder_outcome" in kwargs else None
)
elif self.simulation_method == "e-value":
pass
else:
raise ValueError(
"simulation method is not supported. Try direct-simulation, linear-partial-R2, non-parametric-partial-R2, or e-value"
)
self.frac_strength_treatment = (
kwargs["effect_fraction_on_treatment"] if "effect_fraction_on_treatment" in kwargs else 1
)
self.frac_strength_outcome = (
kwargs["effect_fraction_on_outcome"] if "effect_fraction_on_outcome" in kwargs else 1
)
self.plotmethod = kwargs["plotmethod"] if "plotmethod" in kwargs else "colormesh"
self.percent_change_estimate = kwargs["percent_change_estimate"] if "percent_change_estimate" in kwargs else 1.0
self.significance_level = kwargs["significance_level"] if "significance_level" in kwargs else 0.05
self.confounder_increases_estimate = (
kwargs["confounder_increases_estimate"] if "confounder_increases_estimate" in kwargs else False
)
self.benchmark_common_causes = (
kwargs["benchmark_common_causes"] if "benchmark_common_causes" in kwargs else None
)
self.null_hypothesis_effect = kwargs["null_hypothesis_effect"] if "null_hypothesis_effect" in kwargs else 0
self.plot_estimate = kwargs["plot_estimate"] if "plot_estimate" in kwargs else True
self.num_splits = kwargs["num_splits"] if "num_splits" in kwargs else 5
self.shuffle_data = kwargs["shuffle_data"] if "shuffle_data" in kwargs else False
self.shuffle_random_seed = kwargs["shuffle_random_seed"] if "shuffle_random_seed" in kwargs else None
self.alpha_s_estimator_param_list = (
kwargs["alpha_s_estimator_param_list"] if "alpha_s_estimator_param_list" in kwargs else None
)
self.alpha_s_estimator_list = kwargs["alpha_s_estimator_list"] if "alpha_s_estimator_list" in kwargs else None
self.g_s_estimator_list = kwargs["g_s_estimator_list"] if "g_s_estimator_list" in kwargs else None
self.g_s_estimator_param_list = (
kwargs["g_s_estimator_param_list"] if "g_s_estimator_param_list" in kwargs else None
)
self.plugin_reisz = kwargs["plugin_reisz"] if "plugin_reisz" in kwargs else False
self.logger = logging.getLogger(__name__)
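    # Hypothetical invocation sketch (not part of the original file): this refuter is normally
    # constructed through CausalModel.refute_estimate rather than instantiated directly, e.g.
    #
    #     refutation = model.refute_estimate(
    #         identified_estimand,
    #         estimate,
    #         method_name="add_unobserved_common_cause",
    #         simulation_method="direct-simulation",
    #         confounders_effect_on_treatment="binary_flip",
    #         confounders_effect_on_outcome="linear",
    #         effect_strength_on_treatment=0.05,
    #         effect_strength_on_outcome=0.05,
    #     )
    #
    # where `model`, `identified_estimand` and `estimate` come from the usual
    # CausalModel -> identify_effect -> estimate_effect workflow.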
def infer_default_kappa_t(self, len_kappa_t=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_t."
+ " Provide a value for kappa_t"
)
t = self._data[self._treatment_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_t == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
tmodel = LogisticRegression().fit(observed_common_causes, t)
tpred = tmodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
tcap = tmodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(tcap - tpred)) / tpred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_t == "linear":
# Estimating the regression coefficient from standardized features to t
corrcoef_var_t = np.corrcoef(observed_common_causes, t, rowvar=False)[-1, :-1]
std_dev_t = np.std(t)[0]
max_coeff = max(corrcoef_var_t) * std_dev_t
min_coeff = min(corrcoef_var_t) * std_dev_t
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_treatment)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_t
self.logger.info("(Min, Max) kappa_t for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _compute_min_max_coeff(self, min_coeff, max_coeff, effect_strength_fraction):
max_coeff = effect_strength_fraction * max_coeff
min_coeff = effect_strength_fraction * min_coeff
return min_coeff, max_coeff
def infer_default_kappa_y(self, len_kappa_y=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_y."
+ " Provide a value for kappa_y"
)
y = self._data[self._outcome_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_y == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
ymodel = LogisticRegression().fit(observed_common_causes, y)
ypred = ymodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
ycap = ymodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(ycap - ypred)) / ypred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_y == "linear":
corrcoef_var_y = np.corrcoef(observed_common_causes, y, rowvar=False)[-1, :-1]
std_dev_y = np.std(y)[0]
max_coeff = max(corrcoef_var_y) * std_dev_y
min_coeff = min(corrcoef_var_y) * std_dev_y
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_outcome)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_y
self.logger.info("(Min, Max) kappa_y for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def refute_estimate(self, show_progress_bar=False):
"""
        This function attempts to add an unobserved common cause to the outcome and the treatment. At present, this is implemented only for one-dimensional continuous
        and binary variables. The function can take either single-valued inputs or a range of inputs; it then looks at the data type of the input and decides on the course of
        action.
        :return: CausalRefutation: An object that contains the estimated effect, the new effect, and the name of the refutation used. For the partial-R2 and e-value simulation methods, the corresponding sensitivity analyzer object is returned instead.
"""
if self.simulation_method == "linear-partial-R2":
if not (isinstance(self._estimate.estimator, LinearRegressionEstimator)):
raise NotImplementedError(
"Currently only LinearRegressionEstimator is supported for Sensitivity Analysis"
)
if len(self._estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
if self.frac_strength_outcome == 1:
self.frac_strength_outcome = self.frac_strength_treatment
analyzer = LinearSensitivityAnalyzer(
estimator=self._estimate.estimator,
data=self._data,
treatment_name=self._treatment_name,
percent_change_estimate=self.percent_change_estimate,
significance_level=self.significance_level,
benchmark_common_causes=self.benchmark_common_causes,
null_hypothesis_effect=self.null_hypothesis_effect,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
common_causes_order=self._estimate.estimator._observed_common_causes.columns,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.simulation_method == "non-parametric-partial-R2":
# If the estimator used is LinearDML, partially linear sensitivity analysis will be automatically chosen
if isinstance(self._estimate.estimator, dowhy.causal_estimators.econml.Econml):
if self._estimate.estimator._econml_methodname == "econml.dml.LinearDML":
analyzer = PartialLinearSensitivityAnalyzer(
estimator=self._estimate._estimator_object,
observed_common_causes=self._estimate.estimator._observed_common_causes,
treatment=self._estimate.estimator._treatment,
outcome=self._estimate.estimator._outcome,
alpha_s_estimator_param_list=self.alpha_s_estimator_param_list,
g_s_estimator_list=self.g_s_estimator_list,
g_s_estimator_param_list=self.g_s_estimator_param_list,
effect_strength_treatment=self.kappa_t,
effect_strength_outcome=self.kappa_y,
benchmark_common_causes=self.benchmark_common_causes,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
analyzer = NonParametricSensitivityAnalyzer(
estimator=self._estimate.estimator,
observed_common_causes=self._estimate.estimator._observed_common_causes,
treatment=self._estimate.estimator._treatment,
outcome=self._estimate.estimator._outcome,
alpha_s_estimator_list=self.alpha_s_estimator_list,
alpha_s_estimator_param_list=self.alpha_s_estimator_param_list,
g_s_estimator_list=self.g_s_estimator_list,
g_s_estimator_param_list=self.g_s_estimator_param_list,
effect_strength_treatment=self.kappa_t,
effect_strength_outcome=self.kappa_y,
benchmark_common_causes=self.benchmark_common_causes,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
theta_s=self._estimate.value,
plugin_reisz=self.plugin_reisz,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.simulation_method == "e-value":
if not isinstance(self._estimate.estimator, RegressionEstimator):
raise NotImplementedError(
"E-Value sensitivity analysis is currently only implemented RegressionEstimator."
)
if len(self._estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
analyzer = EValueSensitivityAnalyzer(
estimate=self._estimate,
estimand=self._target_estimand,
data=self._data,
treatment_name=self._treatment_name[0],
outcome_name=self._outcome_name[0],
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.kappa_t is None:
self.kappa_t = self.infer_default_kappa_t()
if self.kappa_y is None:
self.kappa_y = self.infer_default_kappa_y()
if not isinstance(self.kappa_t, (list, np.ndarray)) and not isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with single value inputs
new_data = copy.deepcopy(self._data)
new_data = self.include_confounders_effect(new_data, self.kappa_t, self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(new_data, self._target_estimand, self._estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
refute.new_effect_array = np.array(new_effect.value)
refute.new_effect = new_effect.value
refute.add_refuter(self)
return refute
else: # Deal with multiple value inputs
if isinstance(self.kappa_t, (list, np.ndarray)) and isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with range inputs
# Get a 2D matrix of values
# x,y = np.meshgrid(self.kappa_t, self.kappa_y) # x,y are both MxN
results_matrix = np.random.rand(
len(self.kappa_t), len(self.kappa_y)
) # Matrix to hold all the results of NxM
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
for j in range(len(self.kappa_y)):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y[j])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value,
new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause",
)
results_matrix[i][j] = refute.new_effect # Populate the results
refute.new_effect_array = results_matrix
refute.new_effect = (np.min(results_matrix), np.max(results_matrix))
# Store the values into the refute object
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
oe = self._estimate.value
contour_levels = [oe / 4.0, oe / 2.0, (3.0 / 4) * oe, oe]
contour_levels.extend([0, np.min(results_matrix), np.max(results_matrix)])
if self.plotmethod == "contour":
cp = plt.contourf(self.kappa_y, self.kappa_t, results_matrix, levels=sorted(contour_levels))
# Adding a label on the contour line for the original estimate
fmt = {}
trueeffect_index = np.where(cp.levels == oe)[0][0]
fmt[cp.levels[trueeffect_index]] = "Estimated Effect"
# Label every other level using strings
plt.clabel(cp, [cp.levels[trueeffect_index]], inline=True, fmt=fmt)
plt.colorbar(cp)
elif self.plotmethod == "colormesh":
cp = plt.pcolormesh(self.kappa_y, self.kappa_t, results_matrix, shading="nearest")
plt.colorbar(cp, ticks=contour_levels)
ax.yaxis.set_ticks(self.kappa_t)
ax.xaxis.set_ticks(self.kappa_y)
plt.xticks(rotation=45)
ax.set_title("Effect of Unobserved Common Cause")
ax.set_ylabel("Value of Linear Constant on Treatment")
ax.set_xlabel("Value of Linear Constant on Outcome")
plt.show()
return refute
elif isinstance(self.kappa_t, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_t))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_t, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Treatment")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
elif isinstance(self.kappa_y, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_y))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_y)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t, self.kappa_y[i])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_y, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Outcome")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
def include_confounders_effect(self, new_data, kappa_t, kappa_y):
"""
This function deals with the change in the value of the data due to the effect of the unobserved confounder.
In the case of a binary flip, we flip only if the random number is greater than the threshold set.
In the case of a linear effect, we use the variable as the linear regression constant.
:param new_data: pandas.DataFrame: The data to be changed due to the effects of the unobserved confounder.
:param kappa_t: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:param kappa_y: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:return: pandas.DataFrame: The DataFrame that includes the effects of the unobserved confounder.
"""
num_rows = self._data.shape[0]
stdnorm = scipy.stats.norm()
w_random = stdnorm.rvs(num_rows)
if self.effect_on_t == "binary_flip":
alpha = 2 * kappa_t - 1 if kappa_t >= 0.5 else 1 - 2 * kappa_t
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_t >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._treatment_name] = (
1 - new_data.loc[rel_interval <= w_random, self._treatment_name]
)
for tname in self._treatment_name:
if pd.api.types.is_bool_dtype(self._data[tname]):
new_data = new_data.astype({tname: "bool"}, copy=False)
elif self.effect_on_t == "linear":
confounder_t_effect = kappa_t * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._treatment_name] = new_data[self._treatment_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_t_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
if self.effect_on_y == "binary_flip":
alpha = 2 * kappa_y - 1 if kappa_y >= 0.5 else 1 - 2 * kappa_y
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_y >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._outcome_name] = (
1 - new_data.loc[rel_interval <= w_random, self._outcome_name]
)
for yname in self._outcome_name:
if pd.api.types.is_bool_dtype(self._data[yname]):
new_data = new_data.astype({yname: "bool"}, copy=False)
elif self.effect_on_y == "linear":
confounder_y_effect = (-1) * kappa_y * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._outcome_name] = new_data[self._outcome_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_y_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
return new_data
def include_simulated_confounder(self, convergence_threshold=0.1, c_star_max=1000):
"""
This function simulates an unobserved confounder based on the data using the following steps:
1. It calculates the "residuals" from the treatment and outcome model
i.) The outcome model has outcome as the dependent variable and all the observed variables including treatment as independent variables
ii.) The treatment model has treatment as the dependent variable and all the observed variables as independent variables.
2. U is an intermediate random variable drawn from the normal distribution with the weighted average of residuals as mean and a unit variance
U ~ N(c1*d_y + c2*d_t, 1)
where
*d_y and d_t are residuals from the treatment and outcome model
*c1 and c2 are coefficients to the residuals
3. The final U, which is the simulated unobserved confounder is obtained by debiasing the intermediate variable U by residualising it with X
Choosing the coefficients c1 and c2:
The coefficients are chosen based on these basic assumptions:
1. There is a hyperbolic relationship satisfying c1*c2 = c_star
2. c_star is chosen from a range of possible values based on the correlation of the obtained simulated variable with outcome and treatment.
3. The product of correlations with treatment and outcome should be at a minimum distance to the maximum correlations with treatment and outcome in any of the observed confounders
4. The ratio of the weights should be such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
:param c_star_max: The maximum possible value for the hyperbolic curve on which the coefficients to the residuals lie. It defaults to 1000 in the code if not specified by the user.
:type int
:param convergence_threshold: The threshold to check the plateauing of the correlation while selecting a c_star. It defaults to 0.1 in the code if not specified by the user
:type float
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
# Obtaining the list of observed variables
required_variables = True
observed_variables = self.choose_variables(required_variables)
observed_variables_with_treatment_and_outcome = observed_variables + self._treatment_name + self._outcome_name
# Taking a subset of the dataframe that has only observed variables
self._data = self._data[observed_variables_with_treatment_and_outcome]
# Residuals from the outcome model obtained by fitting a linear model
y = self._data[self._outcome_name[0]]
observed_variables_with_treatment = observed_variables + self._treatment_name
X = self._data[observed_variables_with_treatment]
model = sm.OLS(y, X.astype("float"))
results = model.fit()
residuals_y = y - results.fittedvalues
d_y = list(pd.Series(residuals_y))
# Residuals from the treatment model obtained by fitting a linear model
t = self._data[self._treatment_name[0]].astype("int64")
X = self._data[observed_variables]
model = sm.OLS(t, X)
results = model.fit()
residuals_t = t - results.fittedvalues
d_t = list(pd.Series(residuals_t))
# Initialising product_cor_metric_observed with a really low value as finding maximum
product_cor_metric_observed = -10000000000
for i in observed_variables:
current_obs_confounder = self._data[i]
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_obs_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_obs_confounder.corr(treatment_values)
product_cor_metric_current = correlation_y * correlation_t
if product_cor_metric_current >= product_cor_metric_observed:
product_cor_metric_observed = product_cor_metric_current
correlation_t_observed = correlation_t
correlation_y_observed = correlation_y
        # The user has an option to give the effect_strength_on_y and effect_strength_on_t, which can then be used instead of the maximum correlation with treatment and outcome in the observed variables, as it specifies the desired effect.
if self.kappa_t is not None:
correlation_t_observed = self.kappa_t
if self.kappa_y is not None:
correlation_y_observed = self.kappa_y
# Choosing a c_star based on the data.
        # The correlations stop increasing upon increasing c_star after a certain value, that is, they plateau, and we choose c_star to be the value at which they plateau.
correlation_y_list = []
correlation_t_list = []
product_cor_metric_simulated_list = []
x_list = []
step = int(c_star_max / 10)
for i in range(0, int(c_star_max), step):
c1 = math.sqrt(i)
c2 = c1
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
correlation_y_list.append(correlation_y)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
correlation_t_list.append(correlation_t)
product_cor_metric_simulated = correlation_y * correlation_t
product_cor_metric_simulated_list.append(product_cor_metric_simulated)
x_list.append(i)
index = 1
while index < len(correlation_y_list):
if (correlation_y_list[index] - correlation_y_list[index - 1]) <= convergence_threshold:
c_star = x_list[index]
break
index = index + 1
# Choosing c1 and c2 based on the hyperbolic relationship once c_star is chosen by going over various combinations of c1 and c2 values and choosing the combination which
        # maintains the minimum distance between the product of correlations of the simulated variable and the product of maximum correlations of one of the observed variables
# and additionally checks if the ratio of the weights are such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
# c1_final and c2_final are initialised to the values on the hyperbolic curve such that c1_final = c2_final and c1_final*c2_final = c_star
c1_final = math.sqrt(c_star)
c2_final = math.sqrt(c_star)
# initialising min_distance_between_product_cor_metrics to be a value greater than 1
min_distance_between_product_cor_metrics = 1.5
i = 0.05
threshold = c_star / 0.05
while i <= threshold:
c2 = i
c1 = c_star / c2
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
product_cor_metric_simulated = correlation_y * correlation_t
if min_distance_between_product_cor_metrics >= abs(
product_cor_metric_simulated - product_cor_metric_observed
):
min_distance_between_product_cor_metrics = abs(
product_cor_metric_simulated - product_cor_metric_observed
)
additional_condition = correlation_y_observed / correlation_t_observed
if ((c1 / c2) <= (additional_condition + 0.3 * additional_condition)) and (
(c1 / c2) >= (additional_condition - 0.3 * additional_condition)
): # choose minimum positive value
c1_final = c1
c2_final = c2
i = i * 1.5
"""#closed form solution
print("c_star_max before closed form", c_star_max)
if max_correlation_with_t == -1000:
c2 = 0
c1 = c_star_max
else:
additional_condition = abs(max_correlation_with_y/max_correlation_with_t)
print("additional_condition", additional_condition)
c2 = math.sqrt(c_star_max/additional_condition)
c1 = c_star_max/c2"""
final_U = self.generate_confounder_from_residuals(c1_final, c2_final, d_y, d_t, X)
return final_U
def generate_confounder_from_residuals(self, c1, c2, d_y, d_t, X):
"""
        This function takes the residuals from the treatment and outcome models, together with their coefficients, and simulates the intermediate random variable U by sampling
        from a row-wise normal distribution centered on each weighted residual value; the intermediate variable is then debiased by residualizing it on X to obtain the final variable.
:param c1: coefficient to the residual from the outcome model
:type float
:param c2: coefficient to the residual from the treatment model
:type float
:param d_y: residuals from the outcome model
:type list
        :param d_t: residuals from the treatment model
        :type list
        :param X: dataframe of observed covariates used to residualize (debias) the intermediate variable
        :type pandas.core.frame.DataFrame
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
U = []
for j in range(len(d_t)):
simulated_variable_mean = c1 * d_y[j] + c2 * d_t[j]
simulated_variable_stddev = 1
U.append(np.random.normal(simulated_variable_mean, simulated_variable_stddev, 1))
U = np.array(U)
model = sm.OLS(U, X)
results = model.fit()
U = U.reshape(
-1,
)
final_U = U - results.fittedvalues.values
        final_U = pd.Series(final_U)
return final_U
| import copy
import logging
import math
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import scipy.stats
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from tqdm.auto import tqdm
import dowhy.causal_estimators.econml
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_estimators.regression_estimator import RegressionEstimator
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.causal_refuter import CausalRefutation, CausalRefuter, choose_variables
from dowhy.causal_refuters.evalue_sensitivity_analyzer import EValueSensitivityAnalyzer
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer
from dowhy.causal_refuters.non_parametric_sensitivity_analyzer import NonParametricSensitivityAnalyzer
from dowhy.causal_refuters.partial_linear_sensitivity_analyzer import PartialLinearSensitivityAnalyzer
logger = logging.getLogger(__name__)
class AddUnobservedCommonCause(CausalRefuter):
"""Add an unobserved confounder for refutation.
AddUnobservedCommonCause class supports three methods:
1) Simulation of an unobserved confounder
2) Linear partial R2 : Sensitivity Analysis for linear models.
    3) Non-Parametric partial R2 based : Sensitivity Analysis for non-parametric models.
Supports additional parameters that can be specified in the refute_estimate() method.
"""
def __init__(self, *args, **kwargs):
"""
Initialize the parameters required for the refuter.
For direct_simulation, if effect_strength_on_treatment or effect_strength_on_outcome is not
given, it is calculated automatically as a range between the
minimum and maximum effect strength of observed confounders on treatment
and outcome respectively.
:param simulation_method: The method to use for simulating effect of unobserved confounder. Possible values are ["direct-simulation", "linear-partial-R2", "non-parametric-partial-R2", "e-value"].
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param effect_strength_on_treatment: float, numpy.ndarray: [Used when simulation_method="direct-simulation"] Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param effect_strength_on_outcome: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param partial_r2_confounder_treatment: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param partial_r2_confounder_outcome: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
        :param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome in order to reduce the estimate by 100%, i.e., bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = False). (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param significance_level: significance level for statistical inference (default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param num_splits: number of splits for cross validation. (default = 5). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_data: whether to shuffle the data before splitting into folds (default = False). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_random_seed: seed for randomly shuffling data. (relevant only for non-parametric-partial-R2 simulation method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
"""
super().__init__(*args, **kwargs)
self.simulation_method = kwargs["simulation_method"] if "simulation_method" in kwargs else "direct-simulation"
self.effect_on_t = (
kwargs["confounders_effect_on_treatment"] if "confounders_effect_on_treatment" in kwargs else "binary_flip"
)
self.effect_on_y = (
kwargs["confounders_effect_on_outcome"] if "confounders_effect_on_outcome" in kwargs else "linear"
)
if self.simulation_method == "direct-simulation":
self.kappa_t = kwargs["effect_strength_on_treatment"] if "effect_strength_on_treatment" in kwargs else None
self.kappa_y = kwargs["effect_strength_on_outcome"] if "effect_strength_on_outcome" in kwargs else None
elif self.simulation_method in ["linear-partial-R2", "non-parametric-partial-R2"]:
self.kappa_t = (
kwargs["partial_r2_confounder_treatment"] if "partial_r2_confounder_treatment" in kwargs else None
)
self.kappa_y = (
kwargs["partial_r2_confounder_outcome"] if "partial_r2_confounder_outcome" in kwargs else None
)
elif self.simulation_method == "e-value":
pass
else:
raise ValueError(
"simulation method is not supported. Try direct-simulation, linear-partial-R2, non-parametric-partial-R2, or e-value"
)
self.frac_strength_treatment = (
kwargs["effect_fraction_on_treatment"] if "effect_fraction_on_treatment" in kwargs else 1
)
self.frac_strength_outcome = (
kwargs["effect_fraction_on_outcome"] if "effect_fraction_on_outcome" in kwargs else 1
)
self.plotmethod = kwargs["plotmethod"] if "plotmethod" in kwargs else "colormesh"
self.percent_change_estimate = kwargs["percent_change_estimate"] if "percent_change_estimate" in kwargs else 1.0
self.significance_level = kwargs["significance_level"] if "significance_level" in kwargs else 0.05
self.confounder_increases_estimate = (
kwargs["confounder_increases_estimate"] if "confounder_increases_estimate" in kwargs else False
)
self.benchmark_common_causes = (
kwargs["benchmark_common_causes"] if "benchmark_common_causes" in kwargs else None
)
self.null_hypothesis_effect = kwargs["null_hypothesis_effect"] if "null_hypothesis_effect" in kwargs else 0
self.plot_estimate = kwargs["plot_estimate"] if "plot_estimate" in kwargs else True
self.num_splits = kwargs["num_splits"] if "num_splits" in kwargs else 5
self.shuffle_data = kwargs["shuffle_data"] if "shuffle_data" in kwargs else False
self.shuffle_random_seed = kwargs["shuffle_random_seed"] if "shuffle_random_seed" in kwargs else None
self.alpha_s_estimator_param_list = (
kwargs["alpha_s_estimator_param_list"] if "alpha_s_estimator_param_list" in kwargs else None
)
self.alpha_s_estimator_list = kwargs["alpha_s_estimator_list"] if "alpha_s_estimator_list" in kwargs else None
self.g_s_estimator_list = kwargs["g_s_estimator_list"] if "g_s_estimator_list" in kwargs else None
self.g_s_estimator_param_list = (
kwargs["g_s_estimator_param_list"] if "g_s_estimator_param_list" in kwargs else None
)
self.plugin_reisz = kwargs["plugin_reisz"] if "plugin_reisz" in kwargs else False
self.logger = logging.getLogger(__name__)
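# Illustrative usage sketch (comment only, not executed): this refuter is normally
# reached through CausalModel.refute_estimate with
# method_name="add_unobserved_common_cause". The names `model`,
# `identified_estimand`, and `estimate` are placeholders for objects produced by an
# earlier dowhy workflow, and the keyword values are arbitrary example numbers.
#
# refutation = model.refute_estimate(
#     identified_estimand,
#     estimate,
#     method_name="add_unobserved_common_cause",
#     simulation_method="direct-simulation",
#     confounders_effect_on_treatment="binary_flip",
#     confounders_effect_on_outcome="linear",
#     effect_strength_on_treatment=0.01,
#     effect_strength_on_outcome=0.02,
# )
# print(refutation)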
def refute_estimate(self, show_progress_bar=False):
if self.simulation_method == "linear-partial-R2":
return sensitivity_linear_partial_r2(
self._data,
self._estimate,
self._treatment_name,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.percent_change_estimate,
self.benchmark_common_causes,
self.significance_level,
self.null_hypothesis_effect,
self.plot_estimate,
)
elif self.simulation_method == "non-parametric-partial-R2":
return sensitivity_non_parametric_partial_r2(
self._estimate,
self.kappa_t,
self.kappa_y,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.benchmark_common_causes,
self.plot_estimate,
self.alpha_s_estimator_list,
self.alpha_s_estimator_param_list,
self.g_s_estimator_list,
self.g_s_estimator_param_list,
self.plugin_reisz,
)
elif self.simulation_method == "e-value":
return sensitivity_e_value(
self._data,
self._target_estimand,
self._estimate,
self._treatment_name,
self._outcome_name,
self.plot_estimate,
)
elif self.simulation_method == "direct-simulation":
refute = sensitivity_simulation(
self._data,
self._target_estimand,
self._estimate,
self._treatment_name,
self._outcome_name,
self.kappa_t,
self.kappa_y,
self.effect_on_t,
self.effect_on_y,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.plotmethod,
show_progress_bar,
)
refute.add_refuter(self)
return refute
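# Note on dispatch: refute_estimate above routes each simulation_method to one of
# the module-level helpers defined below:
#   "linear-partial-R2"         -> sensitivity_linear_partial_r2
#   "non-parametric-partial-R2" -> sensitivity_non_parametric_partial_r2
#   "e-value"                   -> sensitivity_e_value
#   "direct-simulation"         -> sensitivity_simulation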
def _infer_default_kappa_t(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
treatment_name: List[str],
effect_on_t: str,
frac_strength_treatment: float,
len_kappa_t: int = 10,
):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_t."
+ " Provide a value for kappa_t"
)
t = data[treatment_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if effect_on_t == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
tmodel = LogisticRegression().fit(observed_common_causes, t)
tpred = tmodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
tcap = tmodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(tcap - tpred)) / tpred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif effect_on_t == "linear":
# Estimating the regression coefficient from standardized features to t
corrcoef_var_t = np.corrcoef(observed_common_causes, t, rowvar=False)[-1, :-1]
std_dev_t = np.std(t)[0]
max_coeff = max(corrcoef_var_t) * std_dev_t
min_coeff = min(corrcoef_var_t) * std_dev_t
else:
raise NotImplementedError("'" + effect_on_t + "' method not supported for confounders' effect on treatment")
min_coeff, max_coeff = _compute_min_max_coeff(min_coeff, max_coeff, frac_strength_treatment)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_t
logger.info("(Min, Max) kappa_t for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
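# Worked example of the grid returned by _infer_default_kappa_t (illustrative numbers):
# if the observed common causes give min_coeff=0.1 and max_coeff=0.5 after scaling by
# frac_strength_treatment, then step = (0.5 - 0.1) / 10 = 0.04 and the function returns
# np.arange(0.1, 0.5, 0.04), i.e. ten candidate confounder strengths from 0.10 to 0.46.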
def _compute_min_max_coeff(min_coeff: float, max_coeff: float, effect_strength_fraction: np.ndarray):
max_coeff = effect_strength_fraction * max_coeff
min_coeff = effect_strength_fraction * min_coeff
return min_coeff, max_coeff
def _infer_default_kappa_y(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
outcome_name: List[str],
effect_on_y: str,
frac_strength_outcome: float,
len_kappa_y: int = 10,
):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_y."
+ " Provide a value for kappa_y"
)
y = data[outcome_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if effect_on_y == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
ymodel = LogisticRegression().fit(observed_common_causes, y)
ypred = ymodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
ycap = ymodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(ycap - ypred)) / ypred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif effect_on_y == "linear":
corrcoef_var_y = np.corrcoef(observed_common_causes, y, rowvar=False)[-1, :-1]
std_dev_y = np.std(y)[0]
max_coeff = max(corrcoef_var_y) * std_dev_y
min_coeff = min(corrcoef_var_y) * std_dev_y
else:
raise NotImplementedError("'" + effect_on_y + "' method not supported for confounders' effect on outcome")
min_coeff, max_coeff = _compute_min_max_coeff(min_coeff, max_coeff, frac_strength_outcome)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_y
logger.info("(Min, Max) kappa_y for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _include_confounders_effect(
data: pd.DataFrame,
new_data: pd.DataFrame,
effect_on_t: str,
treatment_name: str,
kappa_t: float,
effect_on_y: str,
outcome_name: str,
kappa_y: float,
):
"""
This function deals with the change in the value of the data due to the effect of the unobserved confounder.
In the case of a binary flip, we flip only if the random number is greater than the threshold set.
In the case of a linear effect, we use the variable as the linear regression constant.
:param new_data: pandas.DataFrame: The data to be changed due to the effects of the unobserved confounder.
:param kappa_t: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:param kappa_y: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:return: pandas.DataFrame: The DataFrame that includes the effects of the unobserved confounder.
"""
num_rows = data.shape[0]
stdnorm = scipy.stats.norm()
w_random = stdnorm.rvs(num_rows)
if effect_on_t == "binary_flip":
alpha = 2 * kappa_t - 1 if kappa_t >= 0.5 else 1 - 2 * kappa_t
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_t >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, treatment_name] = (
1 - new_data.loc[rel_interval <= w_random, treatment_name]
)
for tname in treatment_name:
if pd.api.types.is_bool_dtype(data[tname]):
new_data = new_data.astype({tname: "bool"}, copy=False)
elif effect_on_t == "linear":
confounder_t_effect = kappa_t * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[treatment_name] = new_data[treatment_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_t_effect
)
else:
raise NotImplementedError("'" + effect_on_t + "' method not supported for confounders' effect on treatment")
if effect_on_y == "binary_flip":
alpha = 2 * kappa_y - 1 if kappa_y >= 0.5 else 1 - 2 * kappa_y
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_y >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, outcome_name] = 1 - new_data.loc[rel_interval <= w_random, outcome_name]
for yname in outcome_name:
if pd.api.types.is_bool_dtype(data[yname]):
new_data = new_data.astype({yname: "bool"}, copy=False)
elif effect_on_y == "linear":
confounder_y_effect = (-1) * kappa_y * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[outcome_name] = new_data[outcome_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_y_effect
)
else:
raise NotImplementedError("'" + effect_on_y + "' method not supported for confounders' effect on outcome")
return new_data
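# Illustration of the two effect types handled by _include_confounders_effect
# (hypothetical numbers): with effect_on_t="linear" and kappa_t=0.2, each row's
# treatment is shifted by 0.2 * w, where w is that row's standard-normal draw; with
# effect_on_y="linear" and kappa_y=0.1, the outcome is shifted by -0.1 * w, giving the
# negative correlation described in the comments above. With "binary_flip" and
# kappa_t=0.05, roughly 5% of rows have their binary treatment value inverted.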
def include_simulated_confounder(
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
kappa_t: float,
kappa_y: float,
convergence_threshold: float = 0.1,
c_star_max: int = 1000,
):
"""
This function simulates an unobserved confounder based on the data using the following steps:
1. It calculates the "residuals" from the treatment and outcome model
i.) The outcome model has outcome as the dependent variable and all the observed variables including treatment as independent variables
ii.) The treatment model has treatment as the dependent variable and all the observed variables as independent variables.
2. U is an intermediate random variable drawn from the normal distribution with the weighted average of residuals as mean and a unit variance
U ~ N(c1*d_y + c2*d_t, 1)
where
*d_y and d_t are residuals from the treatment and outcome model
*c1 and c2 are coefficients to the residuals
3. The final U, which is the simulated unobserved confounder is obtained by debiasing the intermediate variable U by residualising it with X
Choosing the coefficients c1 and c2:
The coefficients are chosen based on these basic assumptions:
1. There is a hyperbolic relationship satisfying c1*c2 = c_star
2. c_star is chosen from a range of possible values based on the correlation of the obtained simulated variable with outcome and treatment.
3. The product of correlations with treatment and outcome should be at a minimum distance to the maximum correlations with treatment and outcome in any of the observed confounders
4. The ratio of the weights should be such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
:param c_star_max: The maximum possible value for the hyperbolic curve on which the coefficients to the residuals lie. It defaults to 1000 in the code if not specified by the user.
:type int
:param convergence_threshold: The threshold to check the plateauing of the correlation while selecting a c_star. It defaults to 0.1 in the code if not specified by the user
:type float
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
# Obtaining the list of observed variables
required_variables = True
observed_variables = choose_variables(required_variables)
observed_variables_with_treatment_and_outcome = observed_variables + treatment_name + outcome_name
# Taking a subset of the dataframe that has only observed variables
data = data[observed_variables_with_treatment_and_outcome]
# Residuals from the outcome model obtained by fitting a linear model
y = data[outcome_name[0]]
observed_variables_with_treatment = observed_variables + treatment_name
X = data[observed_variables_with_treatment]
model = sm.OLS(y, X.astype("float"))
results = model.fit()
residuals_y = y - results.fittedvalues
d_y = list(pd.Series(residuals_y))
# Residuals from the treatment model obtained by fitting a linear model
t = data[treatment_name[0]].astype("int64")
X = data[observed_variables]
model = sm.OLS(t, X)
results = model.fit()
residuals_t = t - results.fittedvalues
d_t = list(pd.Series(residuals_t))
# Initialising product_cor_metric_observed with a very low value, since we are searching for the maximum
product_cor_metric_observed = -10000000000
for i in observed_variables:
current_obs_confounder = data[i]
outcome_values = data[outcome_name[0]]
correlation_y = current_obs_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_obs_confounder.corr(treatment_values)
product_cor_metric_current = correlation_y * correlation_t
if product_cor_metric_current >= product_cor_metric_observed:
product_cor_metric_observed = product_cor_metric_current
correlation_t_observed = correlation_t
correlation_y_observed = correlation_y
# The user has an option to give the effect_strength_on_y and effect_strength_on_t, which can then be used instead of the maximum correlation with treatment and outcome in the observed variables, as it specifies the desired effect.
if kappa_t is not None:
correlation_t_observed = kappa_t
if kappa_y is not None:
correlation_y_observed = kappa_y
# Choosing a c_star based on the data.
# The correlations stop increasing upon increasing c_star after a certain value, that is, the curve plateaus, and we choose c_star to be the value at which it plateaus.
correlation_y_list = []
correlation_t_list = []
product_cor_metric_simulated_list = []
x_list = []
step = int(c_star_max / 10)
for i in range(0, int(c_star_max), step):
c1 = math.sqrt(i)
c2 = c1
final_U = _generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = data[outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
correlation_y_list.append(correlation_y)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
correlation_t_list.append(correlation_t)
product_cor_metric_simulated = correlation_y * correlation_t
product_cor_metric_simulated_list.append(product_cor_metric_simulated)
x_list.append(i)
index = 1
while index < len(correlation_y_list):
if (correlation_y_list[index] - correlation_y_list[index - 1]) <= convergence_threshold:
c_star = x_list[index]
break
index = index + 1
# Choosing c1 and c2 based on the hyperbolic relationship once c_star is chosen, by going over various combinations of c1 and c2 values and choosing the combination
# that maintains the minimum distance between the product of correlations of the simulated variable and the product of maximum correlations of one of the observed variables,
# and additionally checking that the ratio of the weights maintains the ratio of the maximum possible observed coefficients within some confidence interval
# c1_final and c2_final are initialised to the values on the hyperbolic curve such that c1_final = c2_final and c1_final*c2_final = c_star
c1_final = math.sqrt(c_star)
c2_final = math.sqrt(c_star)
# initialising min_distance_between_product_cor_metrics to be a value greater than 1
min_distance_between_product_cor_metrics = 1.5
i = 0.05
threshold = c_star / 0.05
while i <= threshold:
c2 = i
c1 = c_star / c2
final_U = _generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = data[outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
product_cor_metric_simulated = correlation_y * correlation_t
if min_distance_between_product_cor_metrics >= abs(product_cor_metric_simulated - product_cor_metric_observed):
min_distance_between_product_cor_metrics = abs(product_cor_metric_simulated - product_cor_metric_observed)
additional_condition = correlation_y_observed / correlation_t_observed
if ((c1 / c2) <= (additional_condition + 0.3 * additional_condition)) and (
(c1 / c2) >= (additional_condition - 0.3 * additional_condition)
): # choose minimum positive value
c1_final = c1
c2_final = c2
i = i * 1.5
"""#closed form solution
print("c_star_max before closed form", c_star_max)
if max_correlation_with_t == -1000:
c2 = 0
c1 = c_star_max
else:
additional_condition = abs(max_correlation_with_y/max_correlation_with_t)
print("additional_condition", additional_condition)
c2 = math.sqrt(c_star_max/additional_condition)
c1 = c_star_max/c2"""
final_U = _generate_confounder_from_residuals(c1_final, c2_final, d_y, d_t, X)
return final_U
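# Illustrative sketch of simulating and attaching the confounder with
# include_simulated_confounder (comment only; the column names "v0" and "y" are
# placeholders, and treatment_name/outcome_name are passed as lists of column names,
# matching how they are indexed above):
#
# u = include_simulated_confounder(
#     data=df,
#     treatment_name=["v0"],
#     outcome_name=["y"],
#     kappa_t=None,
#     kappa_y=None,
# )
# df["simulated_confounder"] = u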
def _generate_confounder_from_residuals(c1, c2, d_y, d_t, X):
"""
This function takes the residuals from the treatment and outcome model and their coefficients and simulates the intermediate random variable U by taking
the row wise normal distribution corresponding to each residual value and then debiasing the intermediate variable to get the final variable.
:param c1: coefficient to the residual from the outcome model
:type float
:param c2: coefficient to the residual from the treatment model
:type float
:param d_y: residuals from the outcome model
:type list
:param d_t: residuals from the treatment model
:type list
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
U = []
for j in range(len(d_t)):
simulated_variable_mean = c1 * d_y[j] + c2 * d_t[j]
simulated_variable_stddev = 1
U.append(np.random.normal(simulated_variable_mean, simulated_variable_stddev, 1))
U = np.array(U)
model = sm.OLS(U, X)
results = model.fit()
U = U.reshape(
-1,
)
final_U = U - results.fittedvalues.values
final_U = pd.Series(final_U)  # keep the debiased (residualised) values, as described in the docstring
return final_U
def sensitivity_linear_partial_r2(
data: pd.DataFrame,
estimate: CausalEstimate,
treatment_name: str,
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
percent_change_estimate: float = 1.0,
benchmark_common_causes: Optional[List[str]] = None,
significance_level: Optional[float] = None,
null_hypothesis_effect: Optional[float] = None,
plot_estimate: bool = True,
) -> LinearSensitivityAnalyzer:
"""Add an unobserved confounder for refutation using Linear partial R2 methond (Sensitivity Analysis for linear models).
:param data: pd.DataFrame: Data to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param treatment_name: str: Name of the treatment
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome in order to reduce the estimate by 100%, i.e., bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param significance_level: significance level for statistical inference (default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
"""
if not (isinstance(estimate.estimator, LinearRegressionEstimator)):
raise NotImplementedError("Currently only LinearRegressionEstimator is supported for Sensitivity Analysis")
if len(estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
if frac_strength_outcome == 1:
frac_strength_outcome = frac_strength_treatment
analyzer = LinearSensitivityAnalyzer(
estimator=estimate.estimator,
data=data,
treatment_name=treatment_name,
percent_change_estimate=percent_change_estimate,
significance_level=significance_level,
benchmark_common_causes=benchmark_common_causes,
null_hypothesis_effect=null_hypothesis_effect,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
common_causes_order=estimate.estimator._observed_common_causes.columns,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
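# Illustrative call of sensitivity_linear_partial_r2 (comment only; the variable names
# are placeholders, `estimate` must come from LinearRegressionEstimator as enforced
# above, and "W0" stands for an observed common cause used to benchmark the unobserved
# confounder's strength):
#
# analyzer = sensitivity_linear_partial_r2(
#     data=df,
#     estimate=estimate,
#     treatment_name="v0",
#     benchmark_common_causes=["W0"],
#     significance_level=0.05,
# )
#
# The returned LinearSensitivityAnalyzer holds the sensitivity results computed by
# check_sensitivity, alongside the contour plot when plot_estimate is True.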
def sensitivity_non_parametric_partial_r2(
estimate: CausalEstimate,
kappa_t: Optional[Union[float, np.ndarray]] = None,
kappa_y: Optional[Union[float, np.ndarray]] = None,
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
benchmark_common_causes: Optional[List[str]] = None,
plot_estimate: bool = True,
alpha_s_estimator_list: Optional[List] = None,
alpha_s_estimator_param_list: Optional[List[Dict]] = None,
g_s_estimator_list: Optional[List] = None,
g_s_estimator_param_list: Optional[List[Dict]] = None,
plugin_reisz: bool = False,
) -> Union[PartialLinearSensitivityAnalyzer, NonParametricSensitivityAnalyzer]:
"""Add an unobserved confounder for refutation using Non-parametric partial R2 methond (Sensitivity Analysis for non-parametric models).
:param estimate: CausalEstimate: Estimate to run the refutation
:param kappa_t: float, numpy.ndarray: Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param kappa_y: float, numpy.ndarray: Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param alpha_s_estimator_list: list of estimator objects for estimating alpha_s. These objects should have fit() and predict() methods (relevant only for non-parametric-partial-R2 method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
:param plugin_reisz: bool: Flag on whether to use the plugin estimator or the nonparametric estimator for the reisz representer function (alpha_s).
"""
# If the estimator used is LinearDML, partially linear sensitivity analysis will be automatically chosen
if isinstance(estimate.estimator, dowhy.causal_estimators.econml.Econml):
if estimate.estimator._econml_methodname == "econml.dml.LinearDML":
analyzer = PartialLinearSensitivityAnalyzer(
estimator=estimate._estimator_object,
observed_common_causes=estimate.estimator._observed_common_causes,
treatment=estimate.estimator._treatment,
outcome=estimate.estimator._outcome,
alpha_s_estimator_param_list=alpha_s_estimator_param_list,
g_s_estimator_list=g_s_estimator_list,
g_s_estimator_param_list=g_s_estimator_param_list,
effect_strength_treatment=kappa_t,
effect_strength_outcome=kappa_y,
benchmark_common_causes=benchmark_common_causes,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
analyzer = NonParametricSensitivityAnalyzer(
estimator=estimate.estimator,
observed_common_causes=estimate.estimator._observed_common_causes,
treatment=estimate.estimator._treatment,
outcome=estimate.estimator._outcome,
alpha_s_estimator_list=alpha_s_estimator_list,
alpha_s_estimator_param_list=alpha_s_estimator_param_list,
g_s_estimator_list=g_s_estimator_list,
g_s_estimator_param_list=g_s_estimator_param_list,
effect_strength_treatment=kappa_t,
effect_strength_outcome=kappa_y,
benchmark_common_causes=benchmark_common_causes,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
theta_s=estimate.value,
plugin_reisz=plugin_reisz,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
def sensitivity_e_value(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_name: List[str],
outcome_name: List[str],
plot_estimate: bool = True,
) -> EValueSensitivityAnalyzer:
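"""Add an unobserved confounder for refutation using the E-value method (sensitivity analysis for regression-based estimates).
:param data: pd.DataFrame: Data to run the refutation
:param target_estimand: IdentifiedEstimand: Identified estimand to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param treatment_name: list: Name of the treatment
:param outcome_name: list: Name of the outcome
:param plot_estimate: Generate a plot while performing the sensitivity analysis. (default = True)
"""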
if not isinstance(estimate.estimator, RegressionEstimator):
raise NotImplementedError("E-Value sensitivity analysis is currently only implemented RegressionEstimator.")
if len(estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
analyzer = EValueSensitivityAnalyzer(
estimate=estimate,
estimand=target_estimand,
data=data,
treatment_name=treatment_name[0],
outcome_name=outcome_name[0],
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
def sensitivity_simulation(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_name: str,
outcome_name: str,
kappa_t: Optional[Union[float, np.ndarray]] = None,
kappa_y: Optional[Union[float, np.ndarray]] = None,
confounders_effect_on_treatment: str = "binary_flip",
confounders_effect_on_outcome: str = "linear",
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
plotmethod: Optional[str] = None,
show_progress_bar=False,
**_,
) -> CausalRefutation:
"""
This function attempts to add an unobserved common cause to the outcome and the treatment. At present, the behavior is implemented for one-dimensional continuous
and binary variables. The function can take either single-valued inputs or a range of inputs; it then looks at the data type of the input and decides on the course of
action.
:param data: pd.DataFrame: Data to run the refutation
:param target_estimand: IdentifiedEstimand: Identified estimand to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param treatment_name: str: Name of the treatment
:param outcome_name: str: Name of the outcome
:param kappa_t: float, numpy.ndarray: Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param kappa_y: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:return: CausalRefutation: An object that contains the estimated effect, the new effect, and the name of the refutation used.
"""
if kappa_t is None:
kappa_t = _infer_default_kappa_t(
data, target_estimand, treatment_name, confounders_effect_on_treatment, frac_strength_treatment
)
if kappa_y is None:
kappa_y = _infer_default_kappa_y(
data, target_estimand, outcome_name, confounders_effect_on_outcome, frac_strength_outcome
)
if not isinstance(kappa_t, (list, np.ndarray)) and not isinstance(
kappa_y, (list, np.ndarray)
): # Deal with single value inputs
new_data = copy.deepcopy(data)
new_data = _include_confounders_effect(
data,
new_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t,
confounders_effect_on_outcome,
outcome_name,
kappa_y,
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
refute.new_effect_array = np.array(new_effect.value)
refute.new_effect = new_effect.value
return refute
else: # Deal with multiple value inputs
if isinstance(kappa_t, (list, np.ndarray)) and isinstance(
kappa_y, (list, np.ndarray)
): # Deal with range inputs
# Get a 2D matrix of values
# x,y = np.meshgrid(self.kappa_t, self.kappa_y) # x,y are both MxN
results_matrix = np.random.rand(len(kappa_t), len(kappa_y)) # Matrix to hold all the results of NxM
orig_data = copy.deepcopy(data)
for i in tqdm(
range(len(kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
for j in range(len(kappa_y)):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t[i],
confounders_effect_on_outcome,
outcome_name,
kappa_y[j],
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value,
new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause",
)
results_matrix[i][j] = refute.new_effect # Populate the results
refute.new_effect_array = results_matrix
refute.new_effect = (np.min(results_matrix), np.max(results_matrix))
# Store the values into the refute object
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
oe = estimate.value
contour_levels = [oe / 4.0, oe / 2.0, (3.0 / 4) * oe, oe]
contour_levels.extend([0, np.min(results_matrix), np.max(results_matrix)])
if plotmethod == "contour":
cp = plt.contourf(kappa_y, kappa_t, results_matrix, levels=sorted(contour_levels))
# Adding a label on the contour line for the original estimate
fmt = {}
trueeffect_index = np.where(cp.levels == oe)[0][0]
fmt[cp.levels[trueeffect_index]] = "Estimated Effect"
# Label every other level using strings
plt.clabel(cp, [cp.levels[trueeffect_index]], inline=True, fmt=fmt)
plt.colorbar(cp)
elif plotmethod == "colormesh":
cp = plt.pcolormesh(kappa_y, kappa_t, results_matrix, shading="nearest")
plt.colorbar(cp, ticks=contour_levels)
ax.yaxis.set_ticks(kappa_t)
ax.xaxis.set_ticks(kappa_y)
plt.xticks(rotation=45)
ax.set_title("Effect of Unobserved Common Cause")
ax.set_ylabel("Value of Linear Constant on Treatment")
ax.set_xlabel("Value of Linear Constant on Outcome")
plt.show()
return refute
elif isinstance(kappa_t, (list, np.ndarray)):
outcomes = np.random.rand(len(kappa_t))
orig_data = copy.deepcopy(data)
for i in tqdm(
range(0, len(kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t[i],
confounders_effect_on_outcome,
outcome_name,
kappa_y,
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(kappa_t, outcomes)
plt.axhline(estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Treatment")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
elif isinstance(kappa_y, (list, np.ndarray)):
outcomes = np.random.rand(len(kappa_y))
orig_data = copy.deepcopy(data)
for i in tqdm(
range(0, len(kappa_y)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t,
confounders_effect_on_outcome,
outcome_name,
kappa_y[i],
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(kappa_y, outcomes)
plt.axhline(estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Outcome")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
| andresmor-ms | 133e7b9a4ed32aae8ab5f39a01eb02b3a4d1c0ba | e1652ec3c6606b1bb2dfe91ef830e4b4b566712d | Got, thanks for clarifying. I do agree that adding a class as protocol may not be needed. We can do it through PR reviews and docs to communicate the expected signature.
Perhaps the flexibility will also be useful for any future method that may not require to take in the usual parameters. | amit-sharma | 313 |
py-why/dowhy | 672 | Functional api/refute estimate | * Refactor refuters into functions
* Rename functional_api notebook for clarity
* Add return types to identify_estimate
* Update `__init__.py` for imports
* Add joblib for bootstrap refuter
* Create `refute_estimate` function
* Add types for refuter parameters & return types | null | 2022-10-04 16:18:49+00:00 | 2022-10-07 04:30:22+00:00 | dowhy/causal_refuters/add_unobserved_common_cause.py | import copy
import logging
import math
import numpy as np
import pandas as pd
import scipy.stats
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from tqdm.auto import tqdm
import dowhy.causal_estimators.econml
from dowhy.causal_estimator import CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_estimators.regression_estimator import RegressionEstimator
from dowhy.causal_refuter import CausalRefutation, CausalRefuter
from dowhy.causal_refuters.evalue_sensitivity_analyzer import EValueSensitivityAnalyzer
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer
from dowhy.causal_refuters.non_parametric_sensitivity_analyzer import NonParametricSensitivityAnalyzer
from dowhy.causal_refuters.partial_linear_sensitivity_analyzer import PartialLinearSensitivityAnalyzer
class AddUnobservedCommonCause(CausalRefuter):
"""Add an unobserved confounder for refutation.
AddUnobservedCommonCause class supports three methods:
1) Simulation of an unobserved confounder
2) Linear partial R2 : Sensitivity Analysis for linear models.
3) Non-Parametric partial R2 based : Sensitivity Analysis for non-parametric models.
Supports additional parameters that can be specified in the refute_estimate() method.
"""
def __init__(self, *args, **kwargs):
"""
Initialize the parameters required for the refuter.
For direct-simulation, if effect_strength_on_treatment or effect_strength_on_outcome is not
given, it is calculated automatically as a range between the
minimum and maximum effect strength of observed confounders on treatment
and outcome respectively.
:param simulation_method: The method to use for simulating effect of unobserved confounder. Possible values are ["direct-simulation", "linear-partial-R2", "non-parametric-partial-R2", "e-value"].
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param effect_strength_on_treatment: float, numpy.ndarray: [Used when simulation_method="direct-simulation"] Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param effect_strength_on_outcome: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param partial_r2_confounder_treatment: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param partial_r2_confounder_outcome: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome in order to reduce the estimate by 100%, i.e., bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = False). (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param significance_level: significance level for statistical inference (default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param num_splits: number of splits for cross validation. (default = 5). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_data: whether to shuffle the data before splitting into folds (default = False). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_random_seed: seed for randomly shuffling data. (relevant only for non-parametric-partial-R2 simulation method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
"""
super().__init__(*args, **kwargs)
self.simulation_method = kwargs["simulation_method"] if "simulation_method" in kwargs else "direct-simulation"
self.effect_on_t = (
kwargs["confounders_effect_on_treatment"] if "confounders_effect_on_treatment" in kwargs else "binary_flip"
)
self.effect_on_y = (
kwargs["confounders_effect_on_outcome"] if "confounders_effect_on_outcome" in kwargs else "linear"
)
if self.simulation_method == "direct-simulation":
self.kappa_t = kwargs["effect_strength_on_treatment"] if "effect_strength_on_treatment" in kwargs else None
self.kappa_y = kwargs["effect_strength_on_outcome"] if "effect_strength_on_outcome" in kwargs else None
elif self.simulation_method in ["linear-partial-R2", "non-parametric-partial-R2"]:
self.kappa_t = (
kwargs["partial_r2_confounder_treatment"] if "partial_r2_confounder_treatment" in kwargs else None
)
self.kappa_y = (
kwargs["partial_r2_confounder_outcome"] if "partial_r2_confounder_outcome" in kwargs else None
)
elif self.simulation_method == "e-value":
pass
else:
raise ValueError(
"simulation method is not supported. Try direct-simulation, linear-partial-R2, non-parametric-partial-R2, or e-value"
)
self.frac_strength_treatment = (
kwargs["effect_fraction_on_treatment"] if "effect_fraction_on_treatment" in kwargs else 1
)
self.frac_strength_outcome = (
kwargs["effect_fraction_on_outcome"] if "effect_fraction_on_outcome" in kwargs else 1
)
self.plotmethod = kwargs["plotmethod"] if "plotmethod" in kwargs else "colormesh"
self.percent_change_estimate = kwargs["percent_change_estimate"] if "percent_change_estimate" in kwargs else 1.0
self.significance_level = kwargs["significance_level"] if "significance_level" in kwargs else 0.05
self.confounder_increases_estimate = (
kwargs["confounder_increases_estimate"] if "confounder_increases_estimate" in kwargs else False
)
self.benchmark_common_causes = (
kwargs["benchmark_common_causes"] if "benchmark_common_causes" in kwargs else None
)
self.null_hypothesis_effect = kwargs["null_hypothesis_effect"] if "null_hypothesis_effect" in kwargs else 0
self.plot_estimate = kwargs["plot_estimate"] if "plot_estimate" in kwargs else True
self.num_splits = kwargs["num_splits"] if "num_splits" in kwargs else 5
self.shuffle_data = kwargs["shuffle_data"] if "shuffle_data" in kwargs else False
self.shuffle_random_seed = kwargs["shuffle_random_seed"] if "shuffle_random_seed" in kwargs else None
self.alpha_s_estimator_param_list = (
kwargs["alpha_s_estimator_param_list"] if "alpha_s_estimator_param_list" in kwargs else None
)
self.alpha_s_estimator_list = kwargs["alpha_s_estimator_list"] if "alpha_s_estimator_list" in kwargs else None
self.g_s_estimator_list = kwargs["g_s_estimator_list"] if "g_s_estimator_list" in kwargs else None
self.g_s_estimator_param_list = (
kwargs["g_s_estimator_param_list"] if "g_s_estimator_param_list" in kwargs else None
)
self.plugin_reisz = kwargs["plugin_reisz"] if "plugin_reisz" in kwargs else False
self.logger = logging.getLogger(__name__)
def infer_default_kappa_t(self, len_kappa_t=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_t."
+ " Provide a value for kappa_t"
)
t = self._data[self._treatment_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_t == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
tmodel = LogisticRegression().fit(observed_common_causes, t)
tpred = tmodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
tcap = tmodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(tcap - tpred)) / tpred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_t == "linear":
# Estimating the regression coefficient from standardized features to t
corrcoef_var_t = np.corrcoef(observed_common_causes, t, rowvar=False)[-1, :-1]
std_dev_t = np.std(t)[0]
max_coeff = max(corrcoef_var_t) * std_dev_t
min_coeff = min(corrcoef_var_t) * std_dev_t
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_treatment)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_t
self.logger.info("(Min, Max) kappa_t for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _compute_min_max_coeff(self, min_coeff, max_coeff, effect_strength_fraction):
max_coeff = effect_strength_fraction * max_coeff
min_coeff = effect_strength_fraction * min_coeff
return min_coeff, max_coeff
def infer_default_kappa_y(self, len_kappa_y=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_y."
+ " Provide a value for kappa_y"
)
y = self._data[self._outcome_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_y == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
ymodel = LogisticRegression().fit(observed_common_causes, y)
ypred = ymodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
ycap = ymodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(ycap - ypred)) / ypred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_y == "linear":
corrcoef_var_y = np.corrcoef(observed_common_causes, y, rowvar=False)[-1, :-1]
std_dev_y = np.std(y)[0]
max_coeff = max(corrcoef_var_y) * std_dev_y
min_coeff = min(corrcoef_var_y) * std_dev_y
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_outcome)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_y
self.logger.info("(Min, Max) kappa_y for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def refute_estimate(self, show_progress_bar=False):
"""
This function attempts to add an unobserved common cause to the outcome and the treatment. At present, the behavior is implemented for one-dimensional continuous
and binary variables. This function can take either single-valued inputs or a range of inputs; it then looks at the data type of the input and decides on the course of
action.
:return: CausalRefuter: An object that contains the estimated effect, the new effect, and the name of the refutation used.
"""
if self.simulation_method == "linear-partial-R2":
if not (isinstance(self._estimate.estimator, LinearRegressionEstimator)):
raise NotImplementedError(
"Currently only LinearRegressionEstimator is supported for Sensitivity Analysis"
)
if len(self._estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
if self.frac_strength_outcome == 1:
self.frac_strength_outcome = self.frac_strength_treatment
analyzer = LinearSensitivityAnalyzer(
estimator=self._estimate.estimator,
data=self._data,
treatment_name=self._treatment_name,
percent_change_estimate=self.percent_change_estimate,
significance_level=self.significance_level,
benchmark_common_causes=self.benchmark_common_causes,
null_hypothesis_effect=self.null_hypothesis_effect,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
common_causes_order=self._estimate.estimator._observed_common_causes.columns,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.simulation_method == "non-parametric-partial-R2":
# If the estimator used is LinearDML, partially linear sensitivity analysis will be automatically chosen
if isinstance(self._estimate.estimator, dowhy.causal_estimators.econml.Econml):
if self._estimate.estimator._econml_methodname == "econml.dml.LinearDML":
analyzer = PartialLinearSensitivityAnalyzer(
estimator=self._estimate._estimator_object,
observed_common_causes=self._estimate.estimator._observed_common_causes,
treatment=self._estimate.estimator._treatment,
outcome=self._estimate.estimator._outcome,
alpha_s_estimator_param_list=self.alpha_s_estimator_param_list,
g_s_estimator_list=self.g_s_estimator_list,
g_s_estimator_param_list=self.g_s_estimator_param_list,
effect_strength_treatment=self.kappa_t,
effect_strength_outcome=self.kappa_y,
benchmark_common_causes=self.benchmark_common_causes,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
analyzer = NonParametricSensitivityAnalyzer(
estimator=self._estimate.estimator,
observed_common_causes=self._estimate.estimator._observed_common_causes,
treatment=self._estimate.estimator._treatment,
outcome=self._estimate.estimator._outcome,
alpha_s_estimator_list=self.alpha_s_estimator_list,
alpha_s_estimator_param_list=self.alpha_s_estimator_param_list,
g_s_estimator_list=self.g_s_estimator_list,
g_s_estimator_param_list=self.g_s_estimator_param_list,
effect_strength_treatment=self.kappa_t,
effect_strength_outcome=self.kappa_y,
benchmark_common_causes=self.benchmark_common_causes,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
theta_s=self._estimate.value,
plugin_reisz=self.plugin_reisz,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.simulation_method == "e-value":
if not isinstance(self._estimate.estimator, RegressionEstimator):
raise NotImplementedError(
"E-Value sensitivity analysis is currently only implemented RegressionEstimator."
)
if len(self._estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
analyzer = EValueSensitivityAnalyzer(
estimate=self._estimate,
estimand=self._target_estimand,
data=self._data,
treatment_name=self._treatment_name[0],
outcome_name=self._outcome_name[0],
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.kappa_t is None:
self.kappa_t = self.infer_default_kappa_t()
if self.kappa_y is None:
self.kappa_y = self.infer_default_kappa_y()
if not isinstance(self.kappa_t, (list, np.ndarray)) and not isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with single value inputs
new_data = copy.deepcopy(self._data)
new_data = self.include_confounders_effect(new_data, self.kappa_t, self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(new_data, self._target_estimand, self._estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
refute.new_effect_array = np.array(new_effect.value)
refute.new_effect = new_effect.value
refute.add_refuter(self)
return refute
else: # Deal with multiple value inputs
if isinstance(self.kappa_t, (list, np.ndarray)) and isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with range inputs
# Get a 2D matrix of values
# x,y = np.meshgrid(self.kappa_t, self.kappa_y) # x,y are both MxN
results_matrix = np.random.rand(
len(self.kappa_t), len(self.kappa_y)
) # Matrix to hold all the results of NxM
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
for j in range(len(self.kappa_y)):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y[j])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value,
new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause",
)
results_matrix[i][j] = refute.new_effect # Populate the results
refute.new_effect_array = results_matrix
refute.new_effect = (np.min(results_matrix), np.max(results_matrix))
# Store the values into the refute object
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
oe = self._estimate.value
contour_levels = [oe / 4.0, oe / 2.0, (3.0 / 4) * oe, oe]
contour_levels.extend([0, np.min(results_matrix), np.max(results_matrix)])
if self.plotmethod == "contour":
cp = plt.contourf(self.kappa_y, self.kappa_t, results_matrix, levels=sorted(contour_levels))
# Adding a label on the contour line for the original estimate
fmt = {}
trueeffect_index = np.where(cp.levels == oe)[0][0]
fmt[cp.levels[trueeffect_index]] = "Estimated Effect"
# Label every other level using strings
plt.clabel(cp, [cp.levels[trueeffect_index]], inline=True, fmt=fmt)
plt.colorbar(cp)
elif self.plotmethod == "colormesh":
cp = plt.pcolormesh(self.kappa_y, self.kappa_t, results_matrix, shading="nearest")
plt.colorbar(cp, ticks=contour_levels)
ax.yaxis.set_ticks(self.kappa_t)
ax.xaxis.set_ticks(self.kappa_y)
plt.xticks(rotation=45)
ax.set_title("Effect of Unobserved Common Cause")
ax.set_ylabel("Value of Linear Constant on Treatment")
ax.set_xlabel("Value of Linear Constant on Outcome")
plt.show()
return refute
elif isinstance(self.kappa_t, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_t))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_t, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Treatment")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
elif isinstance(self.kappa_y, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_y))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_y)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t, self.kappa_y[i])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_y, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Outcome")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
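
    # Illustrative sketch (not executed by the library; ``model``, ``identified_estimand`` and
    # ``estimate`` are assumed to come from the usual CausalModel workflow, and the column names
    # are hypothetical). Passing arrays for both effect strengths triggers the contour/colormesh
    # branch above:
    #
    #     refutation = model.refute_estimate(
    #         identified_estimand,
    #         estimate,
    #         method_name="add_unobserved_common_cause",
    #         confounders_effect_on_treatment="binary_flip",
    #         confounders_effect_on_outcome="linear",
    #         effect_strength_on_treatment=np.arange(0.0, 0.05, 0.01),
    #         effect_strength_on_outcome=np.arange(0.0, 0.05, 0.01),
    #         plotmethod="contour",
    #     )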
def include_confounders_effect(self, new_data, kappa_t, kappa_y):
"""
This function deals with the change in the value of the data due to the effect of the unobserved confounder.
In the case of a binary flip, we flip only if the random number is greater than the threshold set.
In the case of a linear effect, we use the variable as the linear regression constant.
:param new_data: pandas.DataFrame: The data to be changed due to the effects of the unobserved confounder.
:param kappa_t: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:param kappa_y: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:return: pandas.DataFrame: The DataFrame that includes the effects of the unobserved confounder.
"""
num_rows = self._data.shape[0]
stdnorm = scipy.stats.norm()
w_random = stdnorm.rvs(num_rows)
if self.effect_on_t == "binary_flip":
alpha = 2 * kappa_t - 1 if kappa_t >= 0.5 else 1 - 2 * kappa_t
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_t >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._treatment_name] = (
1 - new_data.loc[rel_interval <= w_random, self._treatment_name]
)
for tname in self._treatment_name:
if pd.api.types.is_bool_dtype(self._data[tname]):
new_data = new_data.astype({tname: "bool"}, copy=False)
elif self.effect_on_t == "linear":
confounder_t_effect = kappa_t * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._treatment_name] = new_data[self._treatment_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_t_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
if self.effect_on_y == "binary_flip":
alpha = 2 * kappa_y - 1 if kappa_y >= 0.5 else 1 - 2 * kappa_y
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_y >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._outcome_name] = (
1 - new_data.loc[rel_interval <= w_random, self._outcome_name]
)
for yname in self._outcome_name:
if pd.api.types.is_bool_dtype(self._data[yname]):
new_data = new_data.astype({yname: "bool"}, copy=False)
elif self.effect_on_y == "linear":
confounder_y_effect = (-1) * kappa_y * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._outcome_name] = new_data[self._outcome_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_y_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
return new_data
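
    # Worked note on the binary_flip branch above (a sketch, not library code): for
    # kappa_t = 0.2 (< 0.5), alpha = 1 - 2 * 0.2 = 0.6 and stdnorm.interval(0.6) is
    # approximately (-0.84, 0.84), so rel_interval is about 0.84. A row is flipped whenever its
    # standard-normal draw w_random exceeds 0.84, which happens with probability
    # P(Z >= 0.84) of roughly 0.2; that is, kappa_t is (approximately) the fraction of rows
    # whose treatment value gets inverted.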
def include_simulated_confounder(self, convergence_threshold=0.1, c_star_max=1000):
"""
This function simulates an unobserved confounder based on the data using the following steps:
1. It calculates the "residuals" from the treatment and outcome model
i.) The outcome model has outcome as the dependent variable and all the observed variables including treatment as independent variables
ii.) The treatment model has treatment as the dependent variable and all the observed variables as independent variables.
2. U is an intermediate random variable drawn from the normal distribution with the weighted average of residuals as mean and a unit variance
U ~ N(c1*d_y + c2*d_t, 1)
where
*d_y and d_t are residuals from the treatment and outcome model
*c1 and c2 are coefficients to the residuals
3. The final U, which is the simulated unobserved confounder is obtained by debiasing the intermediate variable U by residualising it with X
Choosing the coefficients c1 and c2:
The coefficients are chosen based on these basic assumptions:
1. There is a hyperbolic relationship satisfying c1*c2 = c_star
2. c_star is chosen from a range of possible values based on the correlation of the obtained simulated variable with outcome and treatment.
3. The product of correlations with treatment and outcome should be at a minimum distance to the maximum correlations with treatment and outcome in any of the observed confounders
4. The ratio of the weights should be such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
:param c_star_max: The maximum possible value for the hyperbolic curve on which the coefficients to the residuals lie. It defaults to 1000 in the code if not specified by the user.
:type int
:param convergence_threshold: The threshold to check the plateauing of the correlation while selecting a c_star. It defaults to 0.1 in the code if not specified by the user
:type float
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
# Obtaining the list of observed variables
required_variables = True
observed_variables = self.choose_variables(required_variables)
observed_variables_with_treatment_and_outcome = observed_variables + self._treatment_name + self._outcome_name
# Taking a subset of the dataframe that has only observed variables
self._data = self._data[observed_variables_with_treatment_and_outcome]
# Residuals from the outcome model obtained by fitting a linear model
y = self._data[self._outcome_name[0]]
observed_variables_with_treatment = observed_variables + self._treatment_name
X = self._data[observed_variables_with_treatment]
model = sm.OLS(y, X.astype("float"))
results = model.fit()
residuals_y = y - results.fittedvalues
d_y = list(pd.Series(residuals_y))
# Residuals from the treatment model obtained by fitting a linear model
t = self._data[self._treatment_name[0]].astype("int64")
X = self._data[observed_variables]
model = sm.OLS(t, X)
results = model.fit()
residuals_t = t - results.fittedvalues
d_t = list(pd.Series(residuals_t))
# Initialising product_cor_metric_observed with a really low value as finding maximum
product_cor_metric_observed = -10000000000
for i in observed_variables:
current_obs_confounder = self._data[i]
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_obs_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_obs_confounder.corr(treatment_values)
product_cor_metric_current = correlation_y * correlation_t
if product_cor_metric_current >= product_cor_metric_observed:
product_cor_metric_observed = product_cor_metric_current
correlation_t_observed = correlation_t
correlation_y_observed = correlation_y
        # The user has an option to give the effect_strength_on_y and effect_strength_on_t, which can then be used instead of the maximum correlation with treatment and outcome in the observed variables, as they specify the desired effect.
if self.kappa_t is not None:
correlation_t_observed = self.kappa_t
if self.kappa_y is not None:
correlation_y_observed = self.kappa_y
# Choosing a c_star based on the data.
        # The correlations stop increasing beyond a certain value of c_star, i.e. they plateau, and we choose c_star to be the value at which they plateau.
correlation_y_list = []
correlation_t_list = []
product_cor_metric_simulated_list = []
x_list = []
step = int(c_star_max / 10)
for i in range(0, int(c_star_max), step):
c1 = math.sqrt(i)
c2 = c1
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
correlation_y_list.append(correlation_y)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
correlation_t_list.append(correlation_t)
product_cor_metric_simulated = correlation_y * correlation_t
product_cor_metric_simulated_list.append(product_cor_metric_simulated)
x_list.append(i)
index = 1
while index < len(correlation_y_list):
if (correlation_y_list[index] - correlation_y_list[index - 1]) <= convergence_threshold:
c_star = x_list[index]
break
index = index + 1
        # Once c_star is chosen, c1 and c2 are selected on the hyperbolic relationship by going over various combinations of c1 and c2 values and choosing the combination
        # which maintains the minimum distance between the product of correlations of the simulated variable and the product of maximum correlations of one of the observed variables,
        # and which additionally checks that the ratio of the weights maintains the ratio of the maximum possible observed coefficients within some confidence interval
# c1_final and c2_final are initialised to the values on the hyperbolic curve such that c1_final = c2_final and c1_final*c2_final = c_star
c1_final = math.sqrt(c_star)
c2_final = math.sqrt(c_star)
# initialising min_distance_between_product_cor_metrics to be a value greater than 1
min_distance_between_product_cor_metrics = 1.5
i = 0.05
threshold = c_star / 0.05
while i <= threshold:
c2 = i
c1 = c_star / c2
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
product_cor_metric_simulated = correlation_y * correlation_t
if min_distance_between_product_cor_metrics >= abs(
product_cor_metric_simulated - product_cor_metric_observed
):
min_distance_between_product_cor_metrics = abs(
product_cor_metric_simulated - product_cor_metric_observed
)
additional_condition = correlation_y_observed / correlation_t_observed
if ((c1 / c2) <= (additional_condition + 0.3 * additional_condition)) and (
(c1 / c2) >= (additional_condition - 0.3 * additional_condition)
): # choose minimum positive value
c1_final = c1
c2_final = c2
i = i * 1.5
"""#closed form solution
print("c_star_max before closed form", c_star_max)
if max_correlation_with_t == -1000:
c2 = 0
c1 = c_star_max
else:
additional_condition = abs(max_correlation_with_y/max_correlation_with_t)
print("additional_condition", additional_condition)
c2 = math.sqrt(c_star_max/additional_condition)
c1 = c_star_max/c2"""
final_U = self.generate_confounder_from_residuals(c1_final, c2_final, d_y, d_t, X)
return final_U
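
    # Sketch of the coefficient search above (illustrative numbers only): if the plateau search
    # selects c_star = 100, the scan starts at c1_final = c2_final = sqrt(100) = 10 and then walks
    # c2 over 0.05, 0.075, 0.1125, ... (multiplying by 1.5 each step) with c1 = c_star / c2,
    # keeping the (c1, c2) pair whose simulated confounder best matches the strongest observed
    # confounder while keeping c1 / c2 within 30% of correlation_y_observed / correlation_t_observed.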
def generate_confounder_from_residuals(self, c1, c2, d_y, d_t, X):
"""
        This function takes the residuals from the treatment and outcome models and their coefficients, simulates the intermediate random variable U by drawing,
        for each row, from a normal distribution whose mean is the weighted combination of the residuals, and then debiases the intermediate variable to get the final variable.
:param c1: coefficient to the residual from the outcome model
:type float
:param c2: coefficient to the residual from the treatment model
:type float
:param d_y: residuals from the outcome model
:type list
:param d_t: residuals from the treatment model
:type list
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
U = []
for j in range(len(d_t)):
simulated_variable_mean = c1 * d_y[j] + c2 * d_t[j]
simulated_variable_stddev = 1
U.append(np.random.normal(simulated_variable_mean, simulated_variable_stddev, 1))
U = np.array(U)
model = sm.OLS(U, X)
results = model.fit()
U = U.reshape(
-1,
)
        final_U = U - results.fittedvalues.values
        # Wrap the debiased values (not the raw draws) in a Series
        final_U = pd.Series(final_U)
return final_U
| import copy
import logging
import math
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import scipy.stats
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from tqdm.auto import tqdm
import dowhy.causal_estimators.econml
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_estimators.regression_estimator import RegressionEstimator
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.causal_refuter import CausalRefutation, CausalRefuter, choose_variables
from dowhy.causal_refuters.evalue_sensitivity_analyzer import EValueSensitivityAnalyzer
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer
from dowhy.causal_refuters.non_parametric_sensitivity_analyzer import NonParametricSensitivityAnalyzer
from dowhy.causal_refuters.partial_linear_sensitivity_analyzer import PartialLinearSensitivityAnalyzer
logger = logging.getLogger(__name__)
class AddUnobservedCommonCause(CausalRefuter):
"""Add an unobserved confounder for refutation.
    AddUnobservedCommonCause class supports four methods:
    1) Simulation of an unobserved confounder
    2) Linear partial R2 : Sensitivity Analysis for linear models.
    3) Non-Parametric partial R2 based : Sensitivity Analysis for non-parametric models.
    4) E-value : Sensitivity Analysis based on the E-value of the estimate (for regression-based estimators).
Supports additional parameters that can be specified in the refute_estimate() method.
"""
def __init__(self, *args, **kwargs):
"""
Initialize the parameters required for the refuter.
For direct_simulation, if effect_strength_on_treatment or effect_strength_on_outcome is not
given, it is calculated automatically as a range between the
minimum and maximum effect strength of observed confounders on treatment
and outcome respectively.
:param simulation_method: The method to use for simulating effect of unobserved confounder. Possible values are ["direct-simulation", "linear-partial-R2", "non-parametric-partial-R2", "e-value"].
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param effect_strength_on_treatment: float, numpy.ndarray: [Used when simulation_method="direct-simulation"] Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param effect_strength_on_outcome: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param partial_r2_confounder_treatment: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param partial_r2_confounder_outcome: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
        :param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome in order to reduce the estimate by 100% i.e bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = False). (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param significance_level: confidence interval for statistical inference(default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param num_splits: number of splits for cross validation. (default = 5). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_data : shuffle data or not before splitting into folds (default = False). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_random_seed: seed for randomly shuffling data. (relevant only for non-parametric-partial-R2 simulation method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
"""
super().__init__(*args, **kwargs)
self.simulation_method = kwargs["simulation_method"] if "simulation_method" in kwargs else "direct-simulation"
self.effect_on_t = (
kwargs["confounders_effect_on_treatment"] if "confounders_effect_on_treatment" in kwargs else "binary_flip"
)
self.effect_on_y = (
kwargs["confounders_effect_on_outcome"] if "confounders_effect_on_outcome" in kwargs else "linear"
)
if self.simulation_method == "direct-simulation":
self.kappa_t = kwargs["effect_strength_on_treatment"] if "effect_strength_on_treatment" in kwargs else None
self.kappa_y = kwargs["effect_strength_on_outcome"] if "effect_strength_on_outcome" in kwargs else None
elif self.simulation_method in ["linear-partial-R2", "non-parametric-partial-R2"]:
self.kappa_t = (
kwargs["partial_r2_confounder_treatment"] if "partial_r2_confounder_treatment" in kwargs else None
)
self.kappa_y = (
kwargs["partial_r2_confounder_outcome"] if "partial_r2_confounder_outcome" in kwargs else None
)
elif self.simulation_method == "e-value":
pass
else:
raise ValueError(
"simulation method is not supported. Try direct-simulation, linear-partial-R2, non-parametric-partial-R2, or e-value"
)
self.frac_strength_treatment = (
kwargs["effect_fraction_on_treatment"] if "effect_fraction_on_treatment" in kwargs else 1
)
self.frac_strength_outcome = (
kwargs["effect_fraction_on_outcome"] if "effect_fraction_on_outcome" in kwargs else 1
)
self.plotmethod = kwargs["plotmethod"] if "plotmethod" in kwargs else "colormesh"
self.percent_change_estimate = kwargs["percent_change_estimate"] if "percent_change_estimate" in kwargs else 1.0
self.significance_level = kwargs["significance_level"] if "significance_level" in kwargs else 0.05
self.confounder_increases_estimate = (
kwargs["confounder_increases_estimate"] if "confounder_increases_estimate" in kwargs else False
)
self.benchmark_common_causes = (
kwargs["benchmark_common_causes"] if "benchmark_common_causes" in kwargs else None
)
self.null_hypothesis_effect = kwargs["null_hypothesis_effect"] if "null_hypothesis_effect" in kwargs else 0
self.plot_estimate = kwargs["plot_estimate"] if "plot_estimate" in kwargs else True
self.num_splits = kwargs["num_splits"] if "num_splits" in kwargs else 5
self.shuffle_data = kwargs["shuffle_data"] if "shuffle_data" in kwargs else False
self.shuffle_random_seed = kwargs["shuffle_random_seed"] if "shuffle_random_seed" in kwargs else None
self.alpha_s_estimator_param_list = (
kwargs["alpha_s_estimator_param_list"] if "alpha_s_estimator_param_list" in kwargs else None
)
self.alpha_s_estimator_list = kwargs["alpha_s_estimator_list"] if "alpha_s_estimator_list" in kwargs else None
self.g_s_estimator_list = kwargs["g_s_estimator_list"] if "g_s_estimator_list" in kwargs else None
self.g_s_estimator_param_list = (
kwargs["g_s_estimator_param_list"] if "g_s_estimator_param_list" in kwargs else None
)
self.plugin_reisz = kwargs["plugin_reisz"] if "plugin_reisz" in kwargs else False
self.logger = logging.getLogger(__name__)
def refute_estimate(self, show_progress_bar=False):
if self.simulation_method == "linear-partial-R2":
return sensitivity_linear_partial_r2(
self._data,
self._estimate,
self._treatment_name,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.percent_change_estimate,
self.benchmark_common_causes,
self.significance_level,
self.null_hypothesis_effect,
self.plot_estimate,
)
elif self.simulation_method == "non-parametric-partial-R2":
return sensitivity_non_parametric_partial_r2(
self._estimate,
self.kappa_t,
self.kappa_y,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.benchmark_common_causes,
self.plot_estimate,
self.alpha_s_estimator_list,
self.alpha_s_estimator_param_list,
self.g_s_estimator_list,
self.g_s_estimator_param_list,
self.plugin_reisz,
)
elif self.simulation_method == "e-value":
return sensitivity_e_value(
self._data,
self._target_estimand,
self._estimate,
self._treatment_name,
self._outcome_name,
self.plot_estimate,
)
elif self.simulation_method == "direct-simulation":
refute = sensitivity_simulation(
self._data,
self._target_estimand,
self._estimate,
self._treatment_name,
self._outcome_name,
self.kappa_t,
self.kappa_y,
self.effect_on_t,
self.effect_on_y,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.plotmethod,
show_progress_bar,
)
refute.add_refuter(self)
return refute
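
# Illustrative sketch (not part of the library; ``model``, ``identified_estimand`` and ``estimate``
# are assumed to come from the usual CausalModel workflow, and "W0" is a hypothetical common cause).
# The refuter dispatches on ``simulation_method``; for an estimate produced by a linear regression
# estimator, the linear partial-R2 sensitivity analysis can be requested as:
#
#     refutation = model.refute_estimate(
#         identified_estimand,
#         estimate,
#         method_name="add_unobserved_common_cause",
#         simulation_method="linear-partial-R2",
#         benchmark_common_causes=["W0"],
#         effect_fraction_on_treatment=1,
#         effect_fraction_on_outcome=1,
#     )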
def _infer_default_kappa_t(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
treatment_name: List[str],
effect_on_t: str,
frac_strength_treatment: float,
len_kappa_t: int = 10,
):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_t."
+ " Provide a value for kappa_t"
)
t = data[treatment_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if effect_on_t == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
tmodel = LogisticRegression().fit(observed_common_causes, t)
tpred = tmodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
tcap = tmodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(tcap - tpred)) / tpred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif effect_on_t == "linear":
# Estimating the regression coefficient from standardized features to t
corrcoef_var_t = np.corrcoef(observed_common_causes, t, rowvar=False)[-1, :-1]
std_dev_t = np.std(t)[0]
max_coeff = max(corrcoef_var_t) * std_dev_t
min_coeff = min(corrcoef_var_t) * std_dev_t
else:
raise NotImplementedError("'" + effect_on_t + "' method not supported for confounders' effect on treatment")
min_coeff, max_coeff = _compute_min_max_coeff(min_coeff, max_coeff, frac_strength_treatment)
    # By default, consider 10 values of the effect of the unobserved confounder (10 plot points)
step = (max_coeff - min_coeff) / len_kappa_t
logger.info("(Min, Max) kappa_t for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
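
# Sketch of the returned default range (illustrative numbers only): if the weakest and strongest
# observed common causes yield min_coeff = 0.1 and max_coeff = 0.5 after scaling by
# frac_strength_treatment, then with len_kappa_t = 10 the step is (0.5 - 0.1) / 10 = 0.04 and the
# function returns np.arange(0.1, 0.5, 0.04), i.e. roughly ten candidate values of kappa_t to simulate.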
def _compute_min_max_coeff(min_coeff: float, max_coeff: float, effect_strength_fraction: np.ndarray):
max_coeff = effect_strength_fraction * max_coeff
min_coeff = effect_strength_fraction * min_coeff
return min_coeff, max_coeff
def _infer_default_kappa_y(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
outcome_name: List[str],
effect_on_y: str,
frac_strength_outcome: float,
len_kappa_y: int = 10,
):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_y."
+ " Provide a value for kappa_y"
)
y = data[outcome_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if effect_on_y == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
ymodel = LogisticRegression().fit(observed_common_causes, y)
ypred = ymodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
ycap = ymodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(ycap - ypred)) / ypred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif effect_on_y == "linear":
corrcoef_var_y = np.corrcoef(observed_common_causes, y, rowvar=False)[-1, :-1]
std_dev_y = np.std(y)[0]
max_coeff = max(corrcoef_var_y) * std_dev_y
min_coeff = min(corrcoef_var_y) * std_dev_y
else:
raise NotImplementedError("'" + effect_on_y + "' method not supported for confounders' effect on outcome")
min_coeff, max_coeff = _compute_min_max_coeff(min_coeff, max_coeff, frac_strength_outcome)
    # By default, consider 10 values of the effect of the unobserved confounder (10 plot points)
step = (max_coeff - min_coeff) / len_kappa_y
logger.info("(Min, Max) kappa_y for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _include_confounders_effect(
data: pd.DataFrame,
new_data: pd.DataFrame,
effect_on_t: str,
treatment_name: str,
kappa_t: float,
effect_on_y: str,
outcome_name: str,
kappa_y: float,
):
"""
This function deals with the change in the value of the data due to the effect of the unobserved confounder.
In the case of a binary flip, we flip only if the random number is greater than the threshold set.
In the case of a linear effect, we use the variable as the linear regression constant.
:param new_data: pandas.DataFrame: The data to be changed due to the effects of the unobserved confounder.
:param kappa_t: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:param kappa_y: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:return: pandas.DataFrame: The DataFrame that includes the effects of the unobserved confounder.
"""
num_rows = data.shape[0]
stdnorm = scipy.stats.norm()
w_random = stdnorm.rvs(num_rows)
if effect_on_t == "binary_flip":
alpha = 2 * kappa_t - 1 if kappa_t >= 0.5 else 1 - 2 * kappa_t
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_t >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, treatment_name] = (
1 - new_data.loc[rel_interval <= w_random, treatment_name]
)
for tname in treatment_name:
if pd.api.types.is_bool_dtype(data[tname]):
new_data = new_data.astype({tname: "bool"}, copy=False)
elif effect_on_t == "linear":
confounder_t_effect = kappa_t * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[treatment_name] = new_data[treatment_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_t_effect
)
else:
raise NotImplementedError("'" + effect_on_t + "' method not supported for confounders' effect on treatment")
if effect_on_y == "binary_flip":
alpha = 2 * kappa_y - 1 if kappa_y >= 0.5 else 1 - 2 * kappa_y
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_y >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, outcome_name] = 1 - new_data.loc[rel_interval <= w_random, outcome_name]
for yname in outcome_name:
if pd.api.types.is_bool_dtype(data[yname]):
new_data = new_data.astype({yname: "bool"}, copy=False)
elif effect_on_y == "linear":
confounder_y_effect = (-1) * kappa_y * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[outcome_name] = new_data[outcome_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_y_effect
)
else:
raise NotImplementedError("'" + effect_on_y + "' method not supported for confounders' effect on outcome")
return new_data
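
# Minimal usage sketch (hypothetical DataFrame ``df`` and column names): the helper mutates a copy
# of the data so that the simulated confounder shifts treatment and outcome.
#
#     modified = _include_confounders_effect(
#         data=df,
#         new_data=df.copy(),
#         effect_on_t="binary_flip",
#         treatment_name=["v0"],
#         kappa_t=0.1,
#         effect_on_y="linear",
#         outcome_name=["y"],
#         kappa_y=0.05,
#     )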
def include_simulated_confounder(
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
kappa_t: float,
kappa_y: float,
convergence_threshold: float = 0.1,
c_star_max: int = 1000,
):
"""
This function simulates an unobserved confounder based on the data using the following steps:
1. It calculates the "residuals" from the treatment and outcome model
i.) The outcome model has outcome as the dependent variable and all the observed variables including treatment as independent variables
ii.) The treatment model has treatment as the dependent variable and all the observed variables as independent variables.
2. U is an intermediate random variable drawn from the normal distribution with the weighted average of residuals as mean and a unit variance
U ~ N(c1*d_y + c2*d_t, 1)
where
*d_y and d_t are residuals from the treatment and outcome model
*c1 and c2 are coefficients to the residuals
3. The final U, which is the simulated unobserved confounder is obtained by debiasing the intermediate variable U by residualising it with X
Choosing the coefficients c1 and c2:
The coefficients are chosen based on these basic assumptions:
1. There is a hyperbolic relationship satisfying c1*c2 = c_star
2. c_star is chosen from a range of possible values based on the correlation of the obtained simulated variable with outcome and treatment.
3. The product of correlations with treatment and outcome should be at a minimum distance to the maximum correlations with treatment and outcome in any of the observed confounders
4. The ratio of the weights should be such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
:param c_star_max: The maximum possible value for the hyperbolic curve on which the coefficients to the residuals lie. It defaults to 1000 in the code if not specified by the user.
:type int
:param convergence_threshold: The threshold to check the plateauing of the correlation while selecting a c_star. It defaults to 0.1 in the code if not specified by the user
:type float
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
# Obtaining the list of observed variables
required_variables = True
observed_variables = choose_variables(required_variables)
observed_variables_with_treatment_and_outcome = observed_variables + treatment_name + outcome_name
# Taking a subset of the dataframe that has only observed variables
data = data[observed_variables_with_treatment_and_outcome]
# Residuals from the outcome model obtained by fitting a linear model
y = data[outcome_name[0]]
observed_variables_with_treatment = observed_variables + treatment_name
X = data[observed_variables_with_treatment]
model = sm.OLS(y, X.astype("float"))
results = model.fit()
residuals_y = y - results.fittedvalues
d_y = list(pd.Series(residuals_y))
# Residuals from the treatment model obtained by fitting a linear model
t = data[treatment_name[0]].astype("int64")
X = data[observed_variables]
model = sm.OLS(t, X)
results = model.fit()
residuals_t = t - results.fittedvalues
d_t = list(pd.Series(residuals_t))
# Initialising product_cor_metric_observed with a really low value as finding maximum
product_cor_metric_observed = -10000000000
for i in observed_variables:
current_obs_confounder = data[i]
outcome_values = data[outcome_name[0]]
correlation_y = current_obs_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_obs_confounder.corr(treatment_values)
product_cor_metric_current = correlation_y * correlation_t
if product_cor_metric_current >= product_cor_metric_observed:
product_cor_metric_observed = product_cor_metric_current
correlation_t_observed = correlation_t
correlation_y_observed = correlation_y
    # The user has an option to give the effect_strength_on_y and effect_strength_on_t, which can then be used instead of the maximum correlation with treatment and outcome in the observed variables, as they specify the desired effect.
if kappa_t is not None:
correlation_t_observed = kappa_t
if kappa_y is not None:
correlation_y_observed = kappa_y
# Choosing a c_star based on the data.
    # The correlations stop increasing beyond a certain value of c_star, i.e. they plateau, and we choose c_star to be the value at which they plateau.
correlation_y_list = []
correlation_t_list = []
product_cor_metric_simulated_list = []
x_list = []
step = int(c_star_max / 10)
for i in range(0, int(c_star_max), step):
c1 = math.sqrt(i)
c2 = c1
final_U = _generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = data[outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
correlation_y_list.append(correlation_y)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
correlation_t_list.append(correlation_t)
product_cor_metric_simulated = correlation_y * correlation_t
product_cor_metric_simulated_list.append(product_cor_metric_simulated)
x_list.append(i)
index = 1
while index < len(correlation_y_list):
if (correlation_y_list[index] - correlation_y_list[index - 1]) <= convergence_threshold:
c_star = x_list[index]
break
index = index + 1
    # Once c_star is chosen, c1 and c2 are selected on the hyperbolic relationship by going over various combinations of c1 and c2 values and choosing the combination
    # which maintains the minimum distance between the product of correlations of the simulated variable and the product of maximum correlations of one of the observed variables,
    # and which additionally checks that the ratio of the weights maintains the ratio of the maximum possible observed coefficients within some confidence interval
# c1_final and c2_final are initialised to the values on the hyperbolic curve such that c1_final = c2_final and c1_final*c2_final = c_star
c1_final = math.sqrt(c_star)
c2_final = math.sqrt(c_star)
# initialising min_distance_between_product_cor_metrics to be a value greater than 1
min_distance_between_product_cor_metrics = 1.5
i = 0.05
threshold = c_star / 0.05
while i <= threshold:
c2 = i
c1 = c_star / c2
final_U = _generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = data[outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
product_cor_metric_simulated = correlation_y * correlation_t
if min_distance_between_product_cor_metrics >= abs(product_cor_metric_simulated - product_cor_metric_observed):
min_distance_between_product_cor_metrics = abs(product_cor_metric_simulated - product_cor_metric_observed)
additional_condition = correlation_y_observed / correlation_t_observed
if ((c1 / c2) <= (additional_condition + 0.3 * additional_condition)) and (
(c1 / c2) >= (additional_condition - 0.3 * additional_condition)
): # choose minimum positive value
c1_final = c1
c2_final = c2
i = i * 1.5
"""#closed form solution
print("c_star_max before closed form", c_star_max)
if max_correlation_with_t == -1000:
c2 = 0
c1 = c_star_max
else:
additional_condition = abs(max_correlation_with_y/max_correlation_with_t)
print("additional_condition", additional_condition)
c2 = math.sqrt(c_star_max/additional_condition)
c1 = c_star_max/c2"""
final_U = _generate_confounder_from_residuals(c1_final, c2_final, d_y, d_t, X)
return final_U
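
# Minimal usage sketch (hypothetical DataFrame ``df`` and column names): simulate an unobserved
# confounder from the data and append it as a new column before re-estimating the effect.
#
#     simulated_confounder = include_simulated_confounder(
#         data=df,
#         treatment_name=["v0"],
#         outcome_name=["y"],
#         kappa_t=None,
#         kappa_y=None,
#         convergence_threshold=0.1,
#         c_star_max=1000,
#     )
#     df["simulated_confounder"] = simulated_confounder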
def _generate_confounder_from_residuals(c1, c2, d_y, d_t, X):
"""
    This function takes the residuals from the treatment and outcome models and their coefficients, simulates the intermediate random variable U by drawing,
    for each row, from a normal distribution whose mean is the weighted combination of the residuals, and then debiases the intermediate variable to get the final variable.
:param c1: coefficient to the residual from the outcome model
:type float
:param c2: coefficient to the residual from the treatment model
:type float
:param d_y: residuals from the outcome model
:type list
:param d_t: residuals from the treatment model
:type list
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
U = []
for j in range(len(d_t)):
simulated_variable_mean = c1 * d_y[j] + c2 * d_t[j]
simulated_variable_stddev = 1
U.append(np.random.normal(simulated_variable_mean, simulated_variable_stddev, 1))
U = np.array(U)
model = sm.OLS(U, X)
results = model.fit()
U = U.reshape(
-1,
)
    final_U = U - results.fittedvalues.values
    # Wrap the debiased values (not the raw draws) in a Series
    final_U = pd.Series(final_U)
return final_U
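
# Worked note (a sketch of the sampling step above): for row j the intermediate confounder is drawn
# as U_j ~ N(c1 * d_y[j] + c2 * d_t[j], 1); for example, with c1 = 2, c2 = 1, d_y[j] = 0.3 and
# d_t[j] = -0.1, the draw is centred at 2 * 0.3 + 1 * (-0.1) = 0.5 with unit variance, and the
# fitted values of the OLS regression of U on X are then subtracted to debias it.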
def sensitivity_linear_partial_r2(
data: pd.DataFrame,
estimate: CausalEstimate,
treatment_name: str,
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
percent_change_estimate: float = 1.0,
benchmark_common_causes: Optional[List[str]] = None,
significance_level: Optional[float] = None,
null_hypothesis_effect: Optional[float] = None,
plot_estimate: bool = True,
) -> LinearSensitivityAnalyzer:
"""Add an unobserved confounder for refutation using Linear partial R2 methond (Sensitivity Analysis for linear models).
:param data: pd.DataFrame: Data to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param treatment_name: str: Name of the treatment
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome in order to reduce the estimate by 100% i.e bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param significance_level: confidence interval for statistical inference(default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
"""
if not (isinstance(estimate.estimator, LinearRegressionEstimator)):
raise NotImplementedError("Currently only LinearRegressionEstimator is supported for Sensitivity Analysis")
if len(estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
if frac_strength_outcome == 1:
frac_strength_outcome = frac_strength_treatment
analyzer = LinearSensitivityAnalyzer(
estimator=estimate.estimator,
data=data,
treatment_name=treatment_name,
percent_change_estimate=percent_change_estimate,
significance_level=significance_level,
benchmark_common_causes=benchmark_common_causes,
null_hypothesis_effect=null_hypothesis_effect,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
common_causes_order=estimate.estimator._observed_common_causes.columns,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
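
# Minimal usage sketch (hypothetical names; ``df`` and ``estimate`` come from an earlier CausalModel
# workflow with a backdoor.linear_regression estimate): run the linear partial-R2 analysis directly.
#
#     analyzer = sensitivity_linear_partial_r2(
#         data=df,
#         estimate=estimate,                  # CausalEstimate from a LinearRegressionEstimator
#         treatment_name="v0",
#         benchmark_common_causes=["W0"],
#         significance_level=0.05,
#         null_hypothesis_effect=0,
#         frac_strength_treatment=1.0,
#         frac_strength_outcome=1.0,
#     )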
def sensitivity_non_parametric_partial_r2(
estimate: CausalEstimate,
kappa_t: Optional[Union[float, np.ndarray]] = None,
kappa_y: Optional[Union[float, np.ndarray]] = None,
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
benchmark_common_causes: Optional[List[str]] = None,
plot_estimate: bool = True,
alpha_s_estimator_list: Optional[List] = None,
alpha_s_estimator_param_list: Optional[List[Dict]] = None,
g_s_estimator_list: Optional[List] = None,
g_s_estimator_param_list: Optional[List[Dict]] = None,
plugin_reisz: bool = False,
) -> Union[PartialLinearSensitivityAnalyzer, NonParametricSensitivityAnalyzer]:
"""Add an unobserved confounder for refutation using Non-parametric partial R2 methond (Sensitivity Analysis for non-parametric models).
:param estimate: CausalEstimate: Estimate to run the refutation
:param kappa_t: float, numpy.ndarray: Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param kappa_y: float, numpy.ndarray: Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param alpha_s_estimator_list: list of estimator objects for estimating alpha_s. These objects should have fit() and predict() methods (relevant only for non-parametric-partial-R2 method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
    :param plugin_reisz: bool: Flag on whether to use the plugin estimator or the nonparametric estimator for the reisz representer function (alpha_s).
"""
# If the estimator used is LinearDML, partially linear sensitivity analysis will be automatically chosen
if isinstance(estimate.estimator, dowhy.causal_estimators.econml.Econml):
if estimate.estimator._econml_methodname == "econml.dml.LinearDML":
analyzer = PartialLinearSensitivityAnalyzer(
estimator=estimate._estimator_object,
observed_common_causes=estimate.estimator._observed_common_causes,
treatment=estimate.estimator._treatment,
outcome=estimate.estimator._outcome,
alpha_s_estimator_param_list=alpha_s_estimator_param_list,
g_s_estimator_list=g_s_estimator_list,
g_s_estimator_param_list=g_s_estimator_param_list,
effect_strength_treatment=kappa_t,
effect_strength_outcome=kappa_y,
benchmark_common_causes=benchmark_common_causes,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
analyzer = NonParametricSensitivityAnalyzer(
estimator=estimate.estimator,
observed_common_causes=estimate.estimator._observed_common_causes,
treatment=estimate.estimator._treatment,
outcome=estimate.estimator._outcome,
alpha_s_estimator_list=alpha_s_estimator_list,
alpha_s_estimator_param_list=alpha_s_estimator_param_list,
g_s_estimator_list=g_s_estimator_list,
g_s_estimator_param_list=g_s_estimator_param_list,
effect_strength_treatment=kappa_t,
effect_strength_outcome=kappa_y,
benchmark_common_causes=benchmark_common_causes,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
theta_s=estimate.value,
plugin_reisz=plugin_reisz,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
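
# Minimal usage sketch (hypothetical estimator choices and names; ``estimate`` comes from an earlier
# non-parametric estimation step): supply flexible learners for g_s and let the analyzer bound the
# bias via partial R2.
#
#     from sklearn.ensemble import GradientBoostingRegressor
#
#     analyzer = sensitivity_non_parametric_partial_r2(
#         estimate=estimate,
#         kappa_t=0.05,
#         kappa_y=0.05,
#         benchmark_common_causes=["W0"],
#         g_s_estimator_list=[GradientBoostingRegressor()],
#         g_s_estimator_param_list=[{"n_estimators": 50, "max_depth": 2}],
#     )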
def sensitivity_e_value(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_name: List[str],
outcome_name: List[str],
plot_estimate: bool = True,
) -> EValueSensitivityAnalyzer:
if not isinstance(estimate.estimator, RegressionEstimator):
        raise NotImplementedError("E-Value sensitivity analysis is currently only implemented for RegressionEstimator.")
if len(estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
analyzer = EValueSensitivityAnalyzer(
estimate=estimate,
estimand=target_estimand,
data=data,
treatment_name=treatment_name[0],
outcome_name=outcome_name[0],
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
def sensitivity_simulation(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_name: str,
outcome_name: str,
kappa_t: Optional[Union[float, np.ndarray]] = None,
kappa_y: Optional[Union[float, np.ndarray]] = None,
confounders_effect_on_treatment: str = "binary_flip",
confounders_effect_on_outcome: str = "linear",
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
plotmethod: Optional[str] = None,
show_progress_bar=False,
**_,
) -> CausalRefutation:
"""
This function attempts to add an unobserved common cause to the outcome and the treatment. At present, the behavior is implemented for one-dimensional continuous
and binary variables. The function can take either single-valued inputs or a range of inputs; it then looks at the data type of the input and decides on the course of
action.
:param data: pd.DataFrame: Data to run the refutation
:param target_estimand: IdentifiedEstimand: Identified estimand to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param treatment_name: str: Name of the treatment
:param outcome_name: str: Name of the outcome
:param kappa_t: float, numpy.ndarray: Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param kappa_y: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:return: CausalRefutation: An object that contains the estimated effect, the new effect, and the name of the refutation used.
"""
if kappa_t is None:
kappa_t = _infer_default_kappa_t(
data, target_estimand, treatment_name, confounders_effect_on_treatment, frac_strength_treatment
)
if kappa_y is None:
kappa_y = _infer_default_kappa_y(
data, target_estimand, outcome_name, confounders_effect_on_outcome, frac_strength_outcome
)
if not isinstance(kappa_t, (list, np.ndarray)) and not isinstance(
kappa_y, (list, np.ndarray)
): # Deal with single value inputs
new_data = copy.deepcopy(data)
new_data = _include_confounders_effect(
data,
new_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t,
confounders_effect_on_outcome,
outcome_name,
kappa_y,
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
refute.new_effect_array = np.array(new_effect.value)
refute.new_effect = new_effect.value
return refute
else: # Deal with multiple value inputs
if isinstance(kappa_t, (list, np.ndarray)) and isinstance(
kappa_y, (list, np.ndarray)
): # Deal with range inputs
# Get a 2D matrix of values
# x,y = np.meshgrid(self.kappa_t, self.kappa_y) # x,y are both MxN
results_matrix = np.random.rand(len(kappa_t), len(kappa_y)) # Matrix to hold all the results of NxM
orig_data = copy.deepcopy(data)
for i in tqdm(
range(len(kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
for j in range(len(kappa_y)):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t[i],
confounders_effect_on_outcome,
outcome_name,
kappa_y[j],
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value,
new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause",
)
results_matrix[i][j] = refute.new_effect # Populate the results
refute.new_effect_array = results_matrix
refute.new_effect = (np.min(results_matrix), np.max(results_matrix))
# Store the values into the refute object
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
oe = estimate.value
contour_levels = [oe / 4.0, oe / 2.0, (3.0 / 4) * oe, oe]
contour_levels.extend([0, np.min(results_matrix), np.max(results_matrix)])
if plotmethod == "contour":
cp = plt.contourf(kappa_y, kappa_t, results_matrix, levels=sorted(contour_levels))
# Adding a label on the contour line for the original estimate
fmt = {}
trueeffect_index = np.where(cp.levels == oe)[0][0]
fmt[cp.levels[trueeffect_index]] = "Estimated Effect"
# Label every other level using strings
plt.clabel(cp, [cp.levels[trueeffect_index]], inline=True, fmt=fmt)
plt.colorbar(cp)
elif plotmethod == "colormesh":
cp = plt.pcolormesh(kappa_y, kappa_t, results_matrix, shading="nearest")
plt.colorbar(cp, ticks=contour_levels)
ax.yaxis.set_ticks(kappa_t)
ax.xaxis.set_ticks(kappa_y)
plt.xticks(rotation=45)
ax.set_title("Effect of Unobserved Common Cause")
ax.set_ylabel("Value of Linear Constant on Treatment")
ax.set_xlabel("Value of Linear Constant on Outcome")
plt.show()
return refute
elif isinstance(kappa_t, (list, np.ndarray)):
outcomes = np.random.rand(len(kappa_t))
orig_data = copy.deepcopy(data)
for i in tqdm(
range(0, len(kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t[i],
confounders_effect_on_outcome,
outcome_name,
kappa_y,
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(kappa_t, outcomes)
plt.axhline(estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Treatment")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
elif isinstance(kappa_y, (list, np.ndarray)):
outcomes = np.random.rand(len(kappa_y))
orig_data = copy.deepcopy(data)
for i in tqdm(
range(0, len(kappa_y)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t,
confounders_effect_on_outcome,
outcome_name,
kappa_y[i],
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(kappa_y, outcomes)
plt.axhline(estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Outcome")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
| andresmor-ms | 133e7b9a4ed32aae8ab5f39a01eb02b3a4d1c0ba | e1652ec3c6606b1bb2dfe91ef830e4b4b566712d | This removal is introducing a non-backwards-compatible change. Will create an issue to address this. | petergtz | 314 |
py-why/dowhy | 641 | Update README.rst | Added direct link to https://github.com/py-why/dowhy/blob/main/docs/source/contributing/contributing-code.rst that has detailed instructions for code contribution steps
Signed-off-by: emrekiciman <[email protected]> | null | 2022-09-17 01:02:37+00:00 | 2022-09-22 18:33:00+00:00 | README.rst | |BuildStatus|_ |PyPiVersion|_ |PythonSupport|_ |Downloads|_
.. |PyPiVersion| image:: https://img.shields.io/pypi/v/dowhy.svg
.. _PyPiVersion: https://pypi.org/project/dowhy/
.. |PythonSupport| image:: https://img.shields.io/pypi/pyversions/dowhy.svg
.. _PythonSupport: https://pypi.org/project/dowhy/
.. |BuildStatus| image:: https://github.com/microsoft/dowhy/workflows/Python%20package/badge.svg
.. _BuildStatus: https://github.com/microsoft/dowhy/actions
.. |Downloads| image:: https://pepy.tech/badge/dowhy
.. _Downloads: https://pepy.tech/project/dowhy
DoWhy | An end-to-end library for causal inference
===================================================
Introducing DoWhy and the 4 steps of causal inference | `Microsoft Research Blog <https://www.microsoft.com/en-us/research/blog/dowhy-a-library-for-causal-inference/>`_ | `Video Tutorial <https://note.microsoft.com/MSR-Webinar-DoWhy-Library-Registration-On-Demand.html>`_ | `Arxiv Paper <https://arxiv.org/abs/2011.04216>`_ | `Arxiv Paper (GCM-extension) <https://arxiv.org/abs/2206.06821>`_ | `Slides <https://www2.slideshare.net/AmitSharma315/dowhy-an-endtoend-library-for-causal-inference>`_
Read the `docs <https://py-why.github.io/dowhy/>`_ | Try it online! |Binder|_
.. |Binder| image:: https://mybinder.org/badge_logo.svg
.. _Binder: https://mybinder.org/v2/gh/microsoft/dowhy/main?filepath=docs%2Fsource%2F
**Case Studies using DoWhy**: `Hotel booking cancellations <https://towardsdatascience.com/beyond-predictive-models-the-causal-story-behind-hotel-booking-cancellations-d29e8558cbaf>`_ | `Effect of customer loyalty programs <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_example_effect_of_memberrewards_program.ipynb>`_ | `Optimizing article headlines <https://medium.com/@akelleh/introducing-the-do-sampler-for-causal-inference-a3296ea9e78d>`_ | `Effect of home visits on infant health (IHDP) <https://towardsdatascience.com/implementing-causal-inference-a-key-step-towards-agi-de2cde8ea599>`_ | `Causes of customer churn/attrition <https://medium.com/geekculture/a-quickstart-for-causal-analysis-decision-making-with-dowhy-2ce2d4d1efa9>`_
.. image:: https://raw.githubusercontent.com/microsoft/dowhy/main/docs/images/dowhy-schematic.png
As computing systems are more frequently and more actively intervening in societally critical domains such as healthcare, education, and governance, it is critical to correctly predict and understand the causal effects of these interventions. Without an A/B test, conventional machine learning methods, built on pattern recognition and correlational analyses, are insufficient for decision-making.
Much like machine learning libraries have done for prediction, **"DoWhy" is a Python library that aims to spark causal thinking and analysis**. DoWhy provides a principled four-step interface for causal inference that focuses on explicitly modeling causal assumptions and validating them as much as possible. The key feature of DoWhy is its state-of-the-art refutation API that can automatically test causal assumptions for any estimation method, thus making inference more robust and accessible to non-experts. DoWhy supports estimation of the average causal effect for backdoor, frontdoor, instrumental variable and other identification methods, and estimation of the conditional effect (CATE) through an integration with the EconML library.
For a quick introduction to causal inference, check out `amit-sharma/causal-inference-tutorial <https://github.com/amit-sharma/causal-inference-tutorial/>`_. We also gave a more comprehensive tutorial at the ACM Knowledge Discovery and Data Mining (`KDD 2018 <http://www.kdd.org/kdd2018/>`_) conference: `causalinference.gitlab.io/kdd-tutorial <http://causalinference.gitlab.io/kdd-tutorial/>`_. For an introduction to the four steps of causal inference and its implications for machine learning, you can access this video tutorial from Microsoft Research: `DoWhy Webinar <https://note.microsoft.com/MSR-Webinar-DoWhy-Library-Registration-On-Demand.html>`_.
Documentation for DoWhy is available at `py-why.github.io/dowhy <https://py-why.github.io/dowhy/>`_.
.. i here comment toctree::
.. i here comment :maxdepth: 4
.. i here comment :caption: Contents:
.. contents:: **Contents**
News
-----
**2022.05.27**:
* **DoWhy now part of PyWhy**
We have moved DoWhy from microsoft/dowhy to py-why/dowhy. While GitHub will automatically
redirect your git command for cloning, pulling, etc., we recommend updating git remotes and bookmarks. Please note
that the **documentation** has now moved to https://py-why.github.io/dowhy with **no** redirect from the old URL.
* **Experimental support for GCM-based inference**
We have started adding support for graphical causal model-based inference (or in short GCM-based). At the moment,
this includes support for interventions, counterfactuals, and attributing distribution changes. As part of this,
we also added features for Shapley value estimation and independence tests. We're still in the process of fleshing
everything out, including `documentation <https://py-why.github.io/dowhy/main/user_guide/gcm_based_inference/index.html>`_. Some of it is already on `main
<https://py-why.github.io/dowhy/main/user_guide/gcm_based_inference/index.html>`_, other parts are on feature branches (prefixed with ``gcm-``) with open
pull-requests, other parts will appear as new pull-requests in the next couple of weeks. Be sure to watch this space
here as we quickly expand functionality and documentation.
The need for causal inference
----------------------------------
Predictive models uncover patterns that connect the inputs and outcome in observed data. To intervene, however, we need to estimate the effect of changing an input from its current value, for which no data exists. Such questions, involving estimating a *counterfactual*, are common in decision-making scenarios.
* Will it work?
* Does a proposed change to a system improve people's outcomes?
* Why did it work?
* What led to a change in a system's outcome?
* What should we do?
* What changes to a system are likely to improve outcomes for people?
* What are the overall effects?
* How does the system interact with human behavior?
* What is the effect of a system's recommendations on people's activity?
Answering these questions requires causal reasoning. While many methods exist
for causal inference, it is hard to compare their assumptions and robustness of results. DoWhy makes three contributions,
1. Provides a principled way of modeling a given problem as a causal graph so
that all assumptions are explicit.
2. Provides a unified interface for many popular causal inference methods, combining the two major frameworks of graphical models and potential outcomes.
3. Automatically tests for the validity of assumptions if possible and assesses
the robustness of the estimate to violations.
To see DoWhy in action, check out how it can be applied to estimate the effect
of a subscription or rewards program for customers [`Rewards notebook
<https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_example_effect_of_memberrewards_program.ipynb>`_] and for implementing and evaluating causal inference methods on benchmark datasets like the `Infant Health and Development Program (IHDP) <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_ihdp_data_example.ipynb>`_ dataset, `Infant Mortality (Twins) <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_twins_example.ipynb>`_ dataset, and the `Lalonde Jobs <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_lalonde_example.ipynb>`_ dataset.
Installation
-------------
DoWhy supports Python 3.6+. To install, you can use pip or conda.
**Latest Release**
Install the latest `release <https://pypi.org/project/dowhy/>`__ using pip.
.. code:: shell
pip install dowhy
Install the latest `release <https://anaconda.org/conda-forge/dowhy>`__ using conda.
.. code:: shell
conda install -c conda-forge dowhy
If you face "Solving environment" problems with conda, then try :code:`conda update --all` and then install dowhy. If that does not work, then use :code:`conda config --set channel_priority false` and try to install again. If the problem persists, please add your issue `here <https://github.com/microsoft/dowhy/issues/197>`_.
**Development Version**
If you prefer the latest dev version, clone this repository and run the following command from the top-most folder of
the repository.
.. code:: shell
pip install -e .
**Requirements**
DoWhy requires the following packages:
* numpy
* scipy
* scikit-learn
* pandas
* networkx (for analyzing causal graphs)
* matplotlib (for general plotting)
* sympy (for rendering symbolic expressions)
If you face any problems, try installing dependencies manually.
.. code:: shell
pip install -r requirements.txt
Optionally, if you wish to input graphs in the dot format, then install pydot (or pygraphviz).
For better-looking graphs, you can optionally install pygraphviz. To proceed,
first install graphviz and then pygraphviz (on Ubuntu and Ubuntu WSL).
.. code:: shell
sudo apt install graphviz libgraphviz-dev graphviz-dev pkg-config
## from https://github.com/pygraphviz/pygraphviz/issues/71
pip install pygraphviz --install-option="--include-path=/usr/include/graphviz" \
--install-option="--library-path=/usr/lib/graphviz/"
Sample causal inference analysis in DoWhy
-------------------------------------------
Most DoWhy
analyses for causal inference take 4 lines to write, assuming a
pandas dataframe df that contains the data:
.. code:: python
from dowhy import CausalModel
import dowhy.datasets
# Load some sample data
data = dowhy.datasets.linear_dataset(
beta=10,
num_common_causes=5,
num_instruments=2,
num_samples=10000,
treatment_is_binary=True)
DoWhy supports two formats for providing the causal graph: `gml <https://github.com/GunterMueller/UNI_PASSAU_FMI_Graph_Drawing>`_ (preferred) and `dot <http://www.graphviz.org/documentation/>`_. After loading in the data, we use the four main operations in DoWhy: *model*,
*estimate*, *identify* and *refute*:
.. code:: python
# I. Create a causal model from the data and given graph.
model = CausalModel(
data=data["df"],
treatment=data["treatment_name"],
outcome=data["outcome_name"],
graph=data["gml_graph"])
# II. Identify causal effect and return target estimands
identified_estimand = model.identify_effect()
# III. Estimate the target estimand using a statistical method.
estimate = model.estimate_effect(identified_estimand,
method_name="backdoor.propensity_score_matching")
# IV. Refute the obtained estimate using multiple robustness checks.
refute_results = model.refute_estimate(identified_estimand, estimate,
method_name="random_common_cause")
DoWhy stresses the interpretability of its output. At any point in the analysis,
you can inspect the untested assumptions, identified estimands (if any) and the
estimate (if any). Here's a sample output of the linear regression estimator.
.. image:: https://raw.githubusercontent.com/microsoft/dowhy/main/docs/images/regression_output.png
For a full code example, check out the `Getting Started with DoWhy <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_simple_example.ipynb>`_ notebook. You can also use Conditional Average Treatment Effect (CATE) estimation methods from other libraries such as EconML and CausalML, as shown in the `Conditional Treatment Effects <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy-conditional-treatment-effects.ipynb>`_ notebook. For more examples of using DoWhy, check out the Jupyter notebooks in `docs/source/example_notebooks <https://github.com/microsoft/dowhy/tree/main/docs/source/example_notebooks/>`_ or try them online at `Binder <https://mybinder.org/v2/gh/microsoft/dowhy/main?filepath=docs%2Fsource%2F>`_.
GCM-based inference (experimental)
----------------------------------
Graphical causal model-based inference, or GCM-based inference for short, is an experimental addition to DoWhy. For
details, check out the `documentation for the gcm sub-package <https://py-why.github.io/dowhy/main/user_guide/gcm_based_inference/index.html>`_. The basic
recipe for this API works as follows:
.. code:: python
# 1. Modeling cause-effect relationships as a structural causal model
# (causal graph + functional causal models):
scm = gcm.StructuralCausalModel(nx.DiGraph([('X', 'Y'), ('Y', 'Z')])) # X -> Y -> Z
scm.set_causal_mechanism('X', gcm.EmpiricalDistribution())
scm.set_causal_mechanism('Y', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
scm.set_causal_mechanism('Z', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
# 2. Fitting the SCM to the data:
gcm.fit(scm, data)
# 3. Answering a causal query based on the SCM:
results = gcm.<causal_query>(scm, ...)
Where <causal_query> can be one of multiple functions explained in `Answering Causal Questions <https://py-why.github.io/dowhy/main/user_guide/gcm_based_inference/answering_causal_questions/index.html>`_.
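For instance, a minimal sketch of one such query, assuming the fitted :code:`scm` from the snippet above (the node name :code:`Y` and the intervention value are illustrative only):

.. code:: python

    # Hypothetical sketch: draw samples from the interventional distribution do(Y := 2.34)
    samples = gcm.interventional_samples(scm,
                                         {'Y': lambda y: 2.34},
                                         num_samples_to_draw=1000)
    print(samples.head())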
A high-level Pandas API
-----------------------
We've made an even simpler API for dowhy which is a light layer on top of the standard one. The goal is to make causal analysis much more like regular exploratory analysis. To use this API, simply
import :code:`dowhy.api`. This will magically add the :code:`causal` namespace to your
:code:`pandas.DataFrame` s. Then,
you can use the namespace as follows.
.. code:: python
import dowhy.api
import dowhy.datasets
data = dowhy.datasets.linear_dataset(beta=5,
num_common_causes=1,
num_instruments = 0,
num_samples=1000,
treatment_is_binary=True)
# data['df'] is just a regular pandas.DataFrame
data['df'].causal.do(x='v0', # name of treatment variable
variable_types={'v0': 'b', 'y': 'c', 'W0': 'c'},
outcome='y',
common_causes=['W0']).groupby('v0').mean().plot(y='y', kind='bar')
.. image:: https://raw.githubusercontent.com/microsoft/dowhy/main/docs/images/do_barplot.png
For some methods, the :code:`variable_types` field must be specified. It should be a :code:`dict`, where the keys are
variable names, and values are 'o' for ordered discrete, 'u' for un-ordered discrete, 'd' for discrete, or 'c'
for continuous.
**Note: If the** :code:`variable_types` **is not specified, we make use of the following implicit conversions:**
::
int -> 'c'
float -> 'c'
binary -> 'b'
category -> 'd'
**Currently we have not added support for timestamps.**
The :code:`do` method in the causal namespace generates a random sample from :math:`P(outcome|do(X=x))` of the
same length as your data set, and returns this outcome as a new :code:`DataFrame`. You can continue to perform
the usual :code:`DataFrame` operations with this sample, and so you can compute statistics and create plots
for causal outcomes!
The :code:`do` method is built on top of the lower-level :code:`dowhy` objects, so can still take a graph and perform
identification automatically when you provide a graph instead of :code:`common_causes`.
For more details, check out the `Pandas API
<https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_causal_api.ipynb>`_ notebook or the `Do Sampler <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/do_sampler_demo.ipynb>`_
notebook.
Graphical Models and Potential Outcomes: Best of both worlds
============================================================
DoWhy builds on two of the most powerful frameworks for causal inference:
graphical models and potential outcomes. It uses graph-based criteria and
do-calculus for modeling assumptions and identifying a non-parametric causal effect.
For estimation, it switches to methods based primarily on potential outcomes.
A unifying language for causal inference
----------------------------------------
DoWhy is based on a simple unifying language for causal inference. Causal
inference may seem tricky, but almost all methods follow four key steps:
1. Model a causal inference problem using assumptions.
2. Identify an expression for the causal effect under these assumptions ("causal estimand").
3. Estimate the expression using statistical methods such as matching or instrumental variables.
4. Finally, verify the validity of the estimate using a variety of robustness checks.
This workflow can be captured by four key verbs in DoWhy:
- model
- identify
- estimate
- refute
Using these verbs, DoWhy implements a causal inference engine that can support
a variety of methods. *model* encodes prior knowledge as a formal causal graph, *identify* uses
graph-based methods to identify the causal effect, *estimate* uses
statistical methods for estimating the identified estimand, and finally *refute*
tries to refute the obtained estimate by testing robustness to assumptions.
Key differences compared to available causal inference software
----------------------------------------------------------------
DoWhy brings three key differences compared to available software for causal inference:
**Explicit identifying assumptions**
Assumptions are first-class citizens in DoWhy.
Each analysis starts with
building a causal model. The assumptions can be viewed graphically or in terms
of conditional independence statements. Wherever possible, DoWhy can also
automatically test for stated assumptions using observed data.
**Separation between identification and estimation**
Identification is the causal problem. Estimation is simply a statistical problem.
DoWhy
respects this boundary and treats them separately. This focuses the causal
inference effort on identification, and frees up estimation using any
available statistical estimator for a target estimand. In addition, multiple
estimation methods can be used for a single identified_estimand and
vice-versa.
**Automated robustness checks**
What happens when key identifying assumptions may not be satisfied?
The most critical, and often skipped, part of causal analysis is checking the
robustness of an estimate to unverified assumptions. DoWhy makes it easy to
automatically run sensitivity and robustness checks on the obtained estimate.
Finally, DoWhy is easily extensible, allowing other implementations of the
four verbs to co-exist (e.g., we support implementations of the *estimation* verb from
EconML and CausalML libraries). The four verbs are mutually independent, so their
implementations can be combined in any way.
Below are more details about the current implementation of each of these verbs.
Four steps of causal inference
===============================
I. Model a causal problem
-----------------------------
DoWhy creates an underlying causal graphical model for each problem. This
serves to make each causal assumption explicit. This graph need not be
complete---you can provide a partial graph, representing prior
knowledge about some of the variables. DoWhy automatically considers the rest
of the variables as potential confounders.
Currently, DoWhy supports two formats for graph input: `gml <https://github.com/GunterMueller/UNI_PASSAU_FMI_Graph_Drawing>`_ (preferred) and
`dot <http://www.graphviz.org/documentation/>`_. We strongly suggest using gml as the input format, as it works well with networkx. You can provide the graph either as a .gml file or as a string. If you prefer the dot format, you will need to install additional packages (pydot or pygraphviz, see the installation section above). Both .dot files and string format are supported.
While not recommended, you can also specify common causes and/or instruments directly
instead of providing a graph.
Supported formats for specifying causal assumptions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* **Graph**: Provide a causal graph in either gml or dot format. Can be a text file
or a string.
* **Named variable sets**: Instead of the graph, provide variable names that
correspond to relevant categories, such as common causes, instrumental variables, effect
modifiers, frontdoor variables, etc.
Examples of how to instantiate a causal model are in the `Getting Started
<https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_simple_example.ipynb>`_
notebook.
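As a rough sketch of the second option (hypothetical column names; adjust to your dataset), common causes and instruments can be passed directly when no graph is available:

.. code:: python

    # Hypothetical sketch: build the model from named variable sets instead of a graph.
    model = CausalModel(
        data=df,                  # a pandas DataFrame containing the columns below
        treatment="v0",
        outcome="y",
        common_causes=["W0", "W1"],
        instruments=["Z0"])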
.. i comment image:: causal_model.png
II. Identify a target estimand under the model
----------------------------------------------
Based on the causal graph, DoWhy finds all possible ways of identifying a desired causal effect based on
the graphical model. It uses graph-based criteria and do-calculus to find
potential expressions that can identify the causal effect.
Supported identification criteria
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Back-door criterion
* Front-door criterion
* Instrumental Variables
* Mediation (Direct and indirect effect identification)
Different notebooks illustrate how to use these identification criteria. Check
out the `Simple Backdoor <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_confounder_example.ipynb>`_ notebook for the back-door criterion, and the `Simple IV <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy-simple-iv-example.ipynb>`_ notebook for the instrumental variable criterion.
III. Estimate causal effect based on the identified estimand
------------------------------------------------------------
DoWhy supports methods based on both back-door criterion and instrumental
variables. It also provides non-parametric confidence intervals and a permutation test for testing
the statistical significance of the obtained estimate.
Supported estimation methods
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Methods based on estimating the treatment assignment
* Propensity-based Stratification
* Propensity Score Matching
* Inverse Propensity Weighting
* Methods based on estimating the outcome model
* Linear Regression
* Generalized Linear Models
* Methods based on the instrumental variable equation
* Binary Instrument/Wald Estimator
* Two-stage least squares
* Regression discontinuity
* Methods for front-door criterion and general mediation
* Two-stage linear regression
Examples of using these methods are in the `Estimation methods
<https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_estimation_methods.ipynb>`_
notebook.
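Switching between these estimators typically only requires changing the :code:`method_name` string passed to :code:`estimate_effect`. A minimal sketch, assuming the :code:`model` and :code:`identified_estimand` objects from the earlier example:

.. code:: python

    # Hypothetical sketch: reuse the identified estimand with a different estimator.
    estimate = model.estimate_effect(identified_estimand,
                                     method_name="backdoor.linear_regression",
                                     test_significance=True)
    print(estimate)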
Using EconML and CausalML estimation methods in DoWhy
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It is easy to call external estimation methods using DoWhy. Currently we
support integrations with the `EconML <https://github.com/microsoft/econml>`_ and `CausalML <https://github.com/uber/causalml>`_ packages. Here's an example
of estimating conditional treatment effects using EconML's double machine
learning estimator.
.. code:: python
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LassoCV
from sklearn.ensemble import GradientBoostingRegressor
dml_estimate = model.estimate_effect(identified_estimand, method_name="backdoor.econml.dml.DML",
control_value = 0,
treatment_value = 1,
target_units = lambda df: df["X0"]>1,
confidence_intervals=False,
method_params={
"init_params":{'model_y':GradientBoostingRegressor(),
'model_t': GradientBoostingRegressor(),
'model_final':LassoCV(),
'featurizer':PolynomialFeatures(degree=1, include_bias=True)},
"fit_params":{}}
)
More examples are in the `Conditional Treatment Effects with DoWhy
<https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy-conditional-treatment-effects.ipynb>`_ notebook.
IV. Refute the obtained estimate
-------------------------------------
Having access to multiple refutation methods to validate an effect estimate from a
causal estimator is
a key benefit of using DoWhy.
Supported refutation methods
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* **Add Random Common Cause**: Does the estimation method change its estimate after
we add an independent random variable as a common cause to the dataset?
(*Hint: It should not*)
* **Placebo Treatment**: What happens to the estimated causal effect when we
replace the true treatment variable with an independent random variable?
(*Hint: the effect should go to zero*)
* **Dummy Outcome**: What happens to the estimated causal effect when we replace
the true outcome variable with an independent random variable? (*Hint: The
effect should go to zero*)
* **Simulated Outcome**: What happens to the estimated causal effect when we
replace the dataset with a simulated dataset based on a known data-generating
process closest to the given dataset? (*Hint: It should match the effect parameter
from the data-generating process*)
* **Add Unobserved Common Causes**: How sensitive is the effect estimate when we
add an additional common cause (confounder) to the dataset that is correlated
with the treatment and the outcome? (*Hint: It should not be too sensitive*)
* **Data Subsets Validation**: Does the estimated effect change significantly when
we replace the given dataset with a randomly selected subset? (*Hint: It
should not*)
* **Bootstrap Validation**: Does the estimated effect change significantly when we
replace the given dataset with bootstrapped samples from the same dataset? (*Hint: It should not*)
Examples of using refutation methods are in the `Refutations <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_refuter_notebook.ipynb>`_ notebook. For an advanced refutation that uses a simulated dataset based on user-provided or learnt data-generating processes, check out the `Dummy Outcome Refuter <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_demo_dummy_outcome_refuter.ipynb>`_ notebook.
As a practical example, `this notebook <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_refutation_testing.ipynb>`_ shows an application of refutation methods on evaluating effect estimators for the Infant Health and Development Program (IHDP) and Lalonde datasets.
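As a minimal sketch (assuming the :code:`model`, :code:`identified_estimand`, and :code:`estimate` objects from the earlier example), an individual refuter such as the placebo treatment refuter can be run as follows:

.. code:: python

    # Hypothetical sketch: the estimated effect should drop to (near) zero
    # when the treatment is replaced with a permuted placebo.
    placebo_refutation = model.refute_estimate(identified_estimand, estimate,
                                               method_name="placebo_treatment_refuter",
                                               placebo_type="permute")
    print(placebo_refutation)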
Citing this package
====================
If you find DoWhy useful for your research work, please cite us as follows:
Amit Sharma, Emre Kiciman, et al. DoWhy: A Python package for causal inference. 2019. https://github.com/microsoft/dowhy
Bibtex::
@misc{dowhy,
author={Sharma, Amit and Kiciman, Emre and others},
title={Do{W}hy: {A Python package for causal inference}},
howpublished={https://github.com/microsoft/dowhy},
year={2019}
}
Alternatively, you can cite our Arxiv paper on DoWhy.
Amit Sharma, Emre Kiciman. DoWhy: An End-to-End Library for Causal Inference. 2020. https://arxiv.org/abs/2011.04216
Bibtex::
@article{dowhypaper,
title={DoWhy: An End-to-End Library for Causal Inference},
author={Sharma, Amit and Kiciman, Emre},
journal={arXiv preprint arXiv:2011.04216},
year={2020}
}
And if you find the gcm package useful for your work, please also cite us as:
Patrick Blöbaum, Peter Götz, Kailash Budhathoki, Atalanti A. Mastakouri, Dominik Janzing. DoWhy-GCM: An extension of DoWhy for causal inference in graphical causal models. 2022. https://arxiv.org/abs/2206.06821
Bibtex::
@article{dowhy_gcm,
author = {Bl{\"o}baum, Patrick and G{\"o}tz, Peter and Budhathoki, Kailash and Mastakouri, Atalanti A. and Janzing, Dominik},
title = {DoWhy-GCM: An extension of DoWhy for causal inference in graphical causal models},
journal={arXiv preprint arXiv:2206.06821},
year={2022}
}
Roadmap
=======
The `projects <https://github.com/microsoft/dowhy/projects>`_ page lists the next steps for DoWhy. If you would like to contribute, have a look at the current projects. If you have a specific request for DoWhy, please `raise an issue <https://github.com/microsoft/dowhy/issues>`_.
Contributing
============
This project welcomes contributions and suggestions. For a guide to contributing and a list of all contributors, check out `CONTRIBUTING.md <https://github.com/microsoft/dowhy/blob/main/CONTRIBUTING.md>`_. Our contributor code of conduct is available `here <https://github.com/py-why/governance/blob/main/CODE-OF-CONDUCT.md>`_. You can also join the DoWhy development channel on Discord: |discord|_
.. |discord| image:: https://img.shields.io/discord/818456847551168542
.. _discord: https://discord.gg/cSBGb3vsZb
| |BuildStatus|_ |PyPiVersion|_ |PythonSupport|_ |Downloads|_
.. |PyPiVersion| image:: https://img.shields.io/pypi/v/dowhy.svg
.. _PyPiVersion: https://pypi.org/project/dowhy/
.. |PythonSupport| image:: https://img.shields.io/pypi/pyversions/dowhy.svg
.. _PythonSupport: https://pypi.org/project/dowhy/
.. |BuildStatus| image:: https://github.com/microsoft/dowhy/workflows/Python%20package/badge.svg
.. _BuildStatus: https://github.com/microsoft/dowhy/actions
.. |Downloads| image:: https://pepy.tech/badge/dowhy
.. _Downloads: https://pepy.tech/project/dowhy
DoWhy | An end-to-end library for causal inference
===================================================
Introducing DoWhy and the 4 steps of causal inference | `Microsoft Research Blog <https://www.microsoft.com/en-us/research/blog/dowhy-a-library-for-causal-inference/>`_ | `Video Tutorial <https://note.microsoft.com/MSR-Webinar-DoWhy-Library-Registration-On-Demand.html>`_ | `Arxiv Paper <https://arxiv.org/abs/2011.04216>`_ | `Arxiv Paper (GCM-extension) <https://arxiv.org/abs/2206.06821>`_ | `Slides <https://www2.slideshare.net/AmitSharma315/dowhy-an-endtoend-library-for-causal-inference>`_
Read the `docs <https://py-why.github.io/dowhy/>`_ | Try it online! |Binder|_
.. |Binder| image:: https://mybinder.org/badge_logo.svg
.. _Binder: https://mybinder.org/v2/gh/microsoft/dowhy/main?filepath=docs%2Fsource%2F
**Case Studies using DoWhy**: `Hotel booking cancellations <https://towardsdatascience.com/beyond-predictive-models-the-causal-story-behind-hotel-booking-cancellations-d29e8558cbaf>`_ | `Effect of customer loyalty programs <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_example_effect_of_memberrewards_program.ipynb>`_ | `Optimizing article headlines <https://medium.com/@akelleh/introducing-the-do-sampler-for-causal-inference-a3296ea9e78d>`_ | `Effect of home visits on infant health (IHDP) <https://towardsdatascience.com/implementing-causal-inference-a-key-step-towards-agi-de2cde8ea599>`_ | `Causes of customer churn/attrition <https://medium.com/geekculture/a-quickstart-for-causal-analysis-decision-making-with-dowhy-2ce2d4d1efa9>`_
.. image:: https://raw.githubusercontent.com/microsoft/dowhy/main/docs/images/dowhy-schematic.png
As computing systems are more frequently and more actively intervening in societally critical domains such as healthcare, education, and governance, it is critical to correctly predict and understand the causal effects of these interventions. Without an A/B test, conventional machine learning methods, built on pattern recognition and correlational analyses, are insufficient for decision-making.
Much like machine learning libraries have done for prediction, **"DoWhy" is a Python library that aims to spark causal thinking and analysis**. DoWhy provides a principled four-step interface for causal inference that focuses on explicitly modeling causal assumptions and validating them as much as possible. The key feature of DoWhy is its state-of-the-art refutation API that can automatically test causal assumptions for any estimation method, thus making inference more robust and accessible to non-experts. DoWhy supports estimation of the average causal effect for backdoor, frontdoor, instrumental variable and other identification methods, and estimation of the conditional effect (CATE) through an integration with the EconML library.
For a quick introduction to causal inference, check out `amit-sharma/causal-inference-tutorial <https://github.com/amit-sharma/causal-inference-tutorial/>`_. We also gave a more comprehensive tutorial at the ACM Knowledge Discovery and Data Mining (`KDD 2018 <http://www.kdd.org/kdd2018/>`_) conference: `causalinference.gitlab.io/kdd-tutorial <http://causalinference.gitlab.io/kdd-tutorial/>`_. For an introduction to the four steps of causal inference and its implications for machine learning, you can access this video tutorial from Microsoft Research: `DoWhy Webinar <https://note.microsoft.com/MSR-Webinar-DoWhy-Library-Registration-On-Demand.html>`_.
Documentation for DoWhy is available at `py-why.github.io/dowhy <https://py-why.github.io/dowhy/>`_.
.. i here comment toctree::
.. i here comment :maxdepth: 4
.. i here comment :caption: Contents:
.. contents:: **Contents**
News
-----
**2022.05.27**:
* **DoWhy now part of PyWhy**
We have moved DoWhy from microsoft/dowhy to py-why/dowhy. While GitHub will automatically
redirect your git command for cloning, pulling, etc., we recommend updating git remotes and bookmarks. Please note
that the **documentation** has now moved to https://py-why.github.io/dowhy with **no** redirect from the old URL.
* **Experimental support for GCM-based inference**
We have started adding support for graphical causal model-based inference (or in short GCM-based). At the moment,
this includes support for interventions, counterfactuals, and attributing distribution changes. As part of this,
we also added features for Shapley value estimation and independence tests. We're still in the process of fleshing
everything out, including `documentation <https://py-why.github.io/dowhy/main/user_guide/gcm_based_inference/index.html>`_. Some of it is already on `main
<https://py-why.github.io/dowhy/main/user_guide/gcm_based_inference/index.html>`_, other parts are on feature branches (prefixed with ``gcm-``) with open
pull-requests, other parts will appear as new pull-requests in the next couple of weeks. Be sure to watch this space
here as we quickly expand functionality and documentation.
The need for causal inference
----------------------------------
Predictive models uncover patterns that connect the inputs and outcome in observed data. To intervene, however, we need to estimate the effect of changing an input from its current value, for which no data exists. Such questions, involving estimating a *counterfactual*, are common in decision-making scenarios.
* Will it work?
* Does a proposed change to a system improve people's outcomes?
* Why did it work?
* What led to a change in a system's outcome?
* What should we do?
* What changes to a system are likely to improve outcomes for people?
* What are the overall effects?
* How does the system interact with human behavior?
* What is the effect of a system's recommendations on people's activity?
Answering these questions requires causal reasoning. While many methods exist
for causal inference, it is hard to compare their assumptions and robustness of results. DoWhy makes three contributions,
1. Provides a principled way of modeling a given problem as a causal graph so
that all assumptions are explicit.
2. Provides a unified interface for many popular causal inference methods, combining the two major frameworks of graphical models and potential outcomes.
3. Automatically tests for the validity of assumptions if possible and assesses
the robustness of the estimate to violations.
To see DoWhy in action, check out how it can be applied to estimate the effect
of a subscription or rewards program for customers [`Rewards notebook
<https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_example_effect_of_memberrewards_program.ipynb>`_] and for implementing and evaluating causal inference methods on benchmark datasets like the `Infant Health and Development Program (IHDP) <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_ihdp_data_example.ipynb>`_ dataset, `Infant Mortality (Twins) <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_twins_example.ipynb>`_ dataset, and the `Lalonde Jobs <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_lalonde_example.ipynb>`_ dataset.
Installation
-------------
DoWhy supports Python 3.6+. To install, you can use pip or conda.
**Latest Release**
Install the latest `release <https://pypi.org/project/dowhy/>`__ using pip.
.. code:: shell
pip install dowhy
Install the latest `release <https://anaconda.org/conda-forge/dowhy>`__ using conda.
.. code:: shell
conda install -c conda-forge dowhy
If you face "Solving environment" problems with conda, then try :code:`conda update --all` and then install dowhy. If that does not work, then use :code:`conda config --set channel_priority false` and try to install again. If the problem persists, please `add your issue here <https://github.com/microsoft/dowhy/issues/197>`_.
**Development Version**
If you prefer the latest dev version, clone this repository and run the following command from the top-most folder of
the repository.
.. code:: shell
pip install -e .
**Requirements**
DoWhy requires the following packages:
* numpy
* scipy
* scikit-learn
* pandas
* networkx (for analyzing causal graphs)
* matplotlib (for general plotting)
* sympy (for rendering symbolic expressions)
If you face any problems, try installing dependencies manually.
.. code:: shell
pip install -r requirements.txt
Optionally, if you wish to input graphs in the dot format, then install pydot (or pygraphviz).
For better-looking graphs, you can optionally install pygraphviz. To proceed,
first install graphviz and then pygraphviz (on Ubuntu and Ubuntu WSL).
.. code:: shell
sudo apt install graphviz libgraphviz-dev graphviz-dev pkg-config
## from https://github.com/pygraphviz/pygraphviz/issues/71
pip install pygraphviz --install-option="--include-path=/usr/include/graphviz" \
--install-option="--library-path=/usr/lib/graphviz/"
Sample causal inference analysis in DoWhy
-------------------------------------------
Most DoWhy
analyses for causal inference take 4 lines to write, assuming a
pandas dataframe df that contains the data:
.. code:: python
from dowhy import CausalModel
import dowhy.datasets
# Load some sample data
data = dowhy.datasets.linear_dataset(
beta=10,
num_common_causes=5,
num_instruments=2,
num_samples=10000,
treatment_is_binary=True)
DoWhy supports two formats for providing the causal graph: `gml <https://github.com/GunterMueller/UNI_PASSAU_FMI_Graph_Drawing>`_ (preferred) and `dot <http://www.graphviz.org/documentation/>`_. After loading in the data, we use the four main operations in DoWhy: *model*,
*estimate*, *identify* and *refute*:
.. code:: python
# I. Create a causal model from the data and given graph.
model = CausalModel(
data=data["df"],
treatment=data["treatment_name"],
outcome=data["outcome_name"],
graph=data["gml_graph"])
# II. Identify causal effect and return target estimands
identified_estimand = model.identify_effect()
# III. Estimate the target estimand using a statistical method.
estimate = model.estimate_effect(identified_estimand,
method_name="backdoor.propensity_score_matching")
# IV. Refute the obtained estimate using multiple robustness checks.
refute_results = model.refute_estimate(identified_estimand, estimate,
method_name="random_common_cause")
DoWhy stresses the interpretability of its output. At any point in the analysis,
you can inspect the untested assumptions, identified estimands (if any) and the
estimate (if any). Here's a sample output of the linear regression estimator.
.. image:: https://raw.githubusercontent.com/microsoft/dowhy/main/docs/images/regression_output.png
For a full code example, check out the `Getting Started with DoWhy <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_simple_example.ipynb>`_ notebook. You can also use Conditional Average Treatment Effect (CATE) estimation methods from other libraries such as EconML and CausalML, as shown in the `Conditional Treatment Effects <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy-conditional-treatment-effects.ipynb>`_ notebook. For more examples of using DoWhy, check out the Jupyter notebooks in `docs/source/example_notebooks <https://github.com/microsoft/dowhy/tree/main/docs/source/example_notebooks/>`_ or try them online at `Binder <https://mybinder.org/v2/gh/microsoft/dowhy/main?filepath=docs%2Fsource%2F>`_.
GCM-based inference (experimental)
----------------------------------
Graphical causal model-based inference, or GCM-based inference for short, is an experimental addition to DoWhy. For
details, check out the `documentation for the gcm sub-package <https://py-why.github.io/dowhy/main/user_guide/gcm_based_inference/index.html>`_. The basic
recipe for this API works as follows:
.. code:: python
# 1. Modeling cause-effect relationships as a structural causal model
# (causal graph + functional causal models):
scm = gcm.StructuralCausalModel(nx.DiGraph([('X', 'Y'), ('Y', 'Z')])) # X -> Y -> Z
scm.set_causal_mechanism('X', gcm.EmpiricalDistribution())
scm.set_causal_mechanism('Y', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
scm.set_causal_mechanism('Z', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
# 2. Fitting the SCM to the data:
gcm.fit(scm, data)
# 3. Answering a causal query based on the SCM:
results = gcm.<causal_query>(scm, ...)
Where <causal_query> can be one of multiple functions explained in `Answering Causal Questions <https://py-why.github.io/dowhy/main/user_guide/gcm_based_inference/answering_causal_questions/index.html>`_.
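For instance, a minimal sketch of one such query, assuming the fitted :code:`scm` from the snippet above (the node name :code:`Y` and the intervention value are illustrative only):

.. code:: python

    # Hypothetical sketch: draw samples from the interventional distribution do(Y := 2.34)
    samples = gcm.interventional_samples(scm,
                                         {'Y': lambda y: 2.34},
                                         num_samples_to_draw=1000)
    print(samples.head())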
A high-level Pandas API
-----------------------
We've made an even simpler API for dowhy which is a light layer on top of the standard one. The goal is to make causal analysis much more like regular exploratory analysis. To use this API, simply
import :code:`dowhy.api`. This will magically add the :code:`causal` namespace to your
:code:`pandas.DataFrame` s. Then,
you can use the namespace as follows.
.. code:: python
import dowhy.api
import dowhy.datasets
data = dowhy.datasets.linear_dataset(beta=5,
num_common_causes=1,
num_instruments = 0,
num_samples=1000,
treatment_is_binary=True)
# data['df'] is just a regular pandas.DataFrame
data['df'].causal.do(x='v0', # name of treatment variable
variable_types={'v0': 'b', 'y': 'c', 'W0': 'c'},
outcome='y',
common_causes=['W0']).groupby('v0').mean().plot(y='y', kind='bar')
.. image:: https://raw.githubusercontent.com/microsoft/dowhy/main/docs/images/do_barplot.png
For some methods, the :code:`variable_types` field must be specified. It should be a :code:`dict`, where the keys are
variable names, and values are 'o' for ordered discrete, 'u' for un-ordered discrete, 'd' for discrete, or 'c'
for continuous.
**Note: If the** :code:`variable_types` **is not specified, we make use of the following implicit conversions:**
::
int -> 'c'
float -> 'c'
binary -> 'b'
category -> 'd'
**Currently we have not added support for timestamps.**
The :code:`do` method in the causal namespace generates a random sample from :math:`P(outcome|do(X=x))` of the
same length as your data set, and returns this outcome as a new :code:`DataFrame`. You can continue to perform
the usual :code:`DataFrame` operations with this sample, and so you can compute statistics and create plots
for causal outcomes!
The :code:`do` method is built on top of the lower-level :code:`dowhy` objects, so can still take a graph and perform
identification automatically when you provide a graph instead of :code:`common_causes`.
For more details, check out the `Pandas API
<https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_causal_api.ipynb>`_ notebook or the `Do Sampler <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/do_sampler_demo.ipynb>`_
notebook.
Graphical Models and Potential Outcomes: Best of both worlds
============================================================
DoWhy builds on two of the most powerful frameworks for causal inference:
graphical models and potential outcomes. It uses graph-based criteria and
do-calculus for modeling assumptions and identifying a non-parametric causal effect.
For estimation, it switches to methods based primarily on potential outcomes.
A unifying language for causal inference
----------------------------------------
DoWhy is based on a simple unifying language for causal inference. Causal
inference may seem tricky, but almost all methods follow four key steps:
1. Model a causal inference problem using assumptions.
2. Identify an expression for the causal effect under these assumptions ("causal estimand").
3. Estimate the expression using statistical methods such as matching or instrumental variables.
4. Finally, verify the validity of the estimate using a variety of robustness checks.
This workflow can be captured by four key verbs in DoWhy:
- model
- identify
- estimate
- refute
Using these verbs, DoWhy implements a causal inference engine that can support
a variety of methods. *model* encodes prior knowledge as a formal causal graph, *identify* uses
graph-based methods to identify the causal effect, *estimate* uses
statistical methods for estimating the identified estimand, and finally *refute*
tries to refute the obtained estimate by testing robustness to assumptions.
Key differences compared to available causal inference software
----------------------------------------------------------------
DoWhy brings three key differences compared to available software for causal inference:
**Explicit identifying assumptions**
Assumptions are first-class citizens in DoWhy.
Each analysis starts with
building a causal model. The assumptions can be viewed graphically or in terms
of conditional independence statements. Wherever possible, DoWhy can also
automatically test for stated assumptions using observed data.
**Separation between identification and estimation**
Identification is the causal problem. Estimation is simply a statistical problem.
DoWhy
respects this boundary and treats them separately. This focuses the causal
inference effort on identification, and frees up estimation using any
available statistical estimator for a target estimand. In addition, multiple
estimation methods can be used for a single identified_estimand and
vice-versa.
**Automated robustness checks**
What happens when key identifying assumptions may not be satisfied?
The most critical, and often skipped, part of causal analysis is checking the
robustness of an estimate to unverified assumptions. DoWhy makes it easy to
automatically run sensitivity and robustness checks on the obtained estimate.
Finally, DoWhy is easily extensible, allowing other implementations of the
four verbs to co-exist (e.g., we support implementations of the *estimation* verb from
EconML and CausalML libraries). The four verbs are mutually independent, so their
implementations can be combined in any way.
Below are more details about the current implementation of each of these verbs.
Four steps of causal inference
===============================
I. Model a causal problem
-----------------------------
DoWhy creates an underlying causal graphical model for each problem. This
serves to make each causal assumption explicit. This graph need not be
complete---you can provide a partial graph, representing prior
knowledge about some of the variables. DoWhy automatically considers the rest
of the variables as potential confounders.
Currently, DoWhy supports two formats for graph input: `gml <https://github.com/GunterMueller/UNI_PASSAU_FMI_Graph_Drawing>`_ (preferred) and
`dot <http://www.graphviz.org/documentation/>`_. We strongly suggest using gml as the input format, as it works well with networkx. You can provide the graph either as a .gml file or as a string. If you prefer to use the dot format, you will need to install additional packages (pydot or pygraphviz, see the installation section above). Both .dot files and strings are supported.
While not recommended, you can also specify common causes and/or instruments directly
instead of providing a graph.
Supported formats for specifying causal assumptions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* **Graph**: Provide a causal graph in either gml or dot format. Can be a text file
or a string.
* **Named variable sets**: Instead of the graph, provide variable names that
correspond to relevant categories, such as common causes, instrumental variables, effect
modifiers, frontdoor variables, etc.
Examples of how to instantiate a causal model are in the `Getting Started
<https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_simple_example.ipynb>`_
notebook.
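For instance, a minimal sketch of the named-variable-sets route (reusing the synthetic dataset helper shipped with DoWhy, whose output also exposes the relevant variable names) could look like this:

.. code:: python

    from dowhy import CausalModel
    import dowhy.datasets

    # Small synthetic dataset with one common cause and one instrument.
    data = dowhy.datasets.linear_dataset(beta=10,
                                         num_common_causes=1,
                                         num_instruments=1,
                                         num_samples=5000,
                                         treatment_is_binary=True)

    # Causal assumptions declared as named variable sets instead of a graph.
    model = CausalModel(
        data=data["df"],
        treatment=data["treatment_name"],
        outcome=data["outcome_name"],
        common_causes=data["common_causes_names"],
        instruments=data["instrument_names"])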
.. i comment image:: causal_model.png
II. Identify a target estimand under the model
----------------------------------------------
Based on the causal graph, DoWhy finds all possible ways of identifying a desired causal effect.
It uses graph-based criteria and do-calculus to find
potential expressions that can identify the causal effect.
Supported identification criteria
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Back-door criterion
* Front-door criterion
* Instrumental Variables
* Mediation (Direct and indirect effect identification)
Different notebooks illustrate how to use these identification criteria. Check
out the `Simple Backdoor <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_confounder_example.ipynb>`_ notebook for the back-door criterion, and the `Simple IV <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy-simple-iv-example.ipynb>`_ notebook for the instrumental variable criterion.
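As a quick sketch (assuming a :code:`model` built as in the snippets above), identification is a single call, and printing the returned estimand shows which criteria apply and under which assumptions:

.. code:: python

    # Ask DoWhy for the identified estimand; proceed even when unobserved
    # confounders cannot be ruled out from the graph alone.
    identified_estimand = model.identify_effect(proceed_when_unidentifiable=True)

    # The printed estimand lists backdoor, instrumental-variable and frontdoor
    # expressions (where available) together with their assumptions.
    print(identified_estimand)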
III. Estimate causal effect based on the identified estimand
------------------------------------------------------------
DoWhy supports methods based on both back-door criterion and instrumental
variables. It also provides non-parametric confidence intervals and a permutation test for testing
the statistical significance of the obtained estimate.
Supported estimation methods
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Methods based on estimating the treatment assignment
* Propensity-based Stratification
* Propensity Score Matching
* Inverse Propensity Weighting
* Methods based on estimating the outcome model
* Linear Regression
* Generalized Linear Models
* Methods based on the instrumental variable equation
* Binary Instrument/Wald Estimator
* Two-stage least squares
* Regression discontinuity
* Methods for front-door criterion and general mediation
* Two-stage linear regression
Examples of using these methods are in the `Estimation methods
<https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_estimation_methods.ipynb>`_
notebook.
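For example, here is a minimal sketch of applying a different estimator to the same identified estimand (assuming :code:`model` and :code:`identified_estimand` from the snippets above):

.. code:: python

    # Backdoor estimate using linear regression on the adjustment set,
    # with a significance test on the estimate.
    estimate = model.estimate_effect(identified_estimand,
                                     method_name="backdoor.linear_regression",
                                     test_significance=True)
    print(estimate)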
Using EconML and CausalML estimation methods in DoWhy
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It is easy to call external estimation methods using DoWhy. Currently we
support integrations with the `EconML <https://github.com/microsoft/econml>`_ and `CausalML <https://github.com/uber/causalml>`_ packages. Here's an example
of estimating conditional treatment effects using EconML's double machine
learning estimator.
.. code:: python
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LassoCV
from sklearn.ensemble import GradientBoostingRegressor
dml_estimate = model.estimate_effect(identified_estimand, method_name="backdoor.econml.dml.DML",
control_value = 0,
treatment_value = 1,
target_units = lambda df: df["X0"]>1,
confidence_intervals=False,
method_params={
"init_params":{'model_y':GradientBoostingRegressor(),
'model_t': GradientBoostingRegressor(),
'model_final':LassoCV(),
'featurizer':PolynomialFeatures(degree=1, include_bias=True)},
"fit_params":{}}
)
More examples are in the `Conditional Treatment Effects with DoWhy
<https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy-conditional-treatment-effects.ipynb>`_ notebook.
IV. Refute the obtained estimate
-------------------------------------
Having access to multiple refutation methods to validate an effect estimate from a
causal estimator is a key benefit of using DoWhy.
Supported refutation methods
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* **Add Random Common Cause**: Does the estimation method change its estimate after
we add an independent random variable as a common cause to the dataset?
(*Hint: It should not*)
* **Placebo Treatment**: What happens to the estimated causal effect when we
replace the true treatment variable with an independent random variable?
(*Hint: the effect should go to zero*)
* **Dummy Outcome**: What happens to the estimated causal effect when we replace
the true outcome variable with an independent random variable? (*Hint: The
effect should go to zero*)
* **Simulated Outcome**: What happens to the estimated causal effect when we
replace the dataset with a simulated dataset based on a known data-generating
process closest to the given dataset? (*Hint: It should match the effect parameter
from the data-generating process*)
* **Add Unobserved Common Causes**: How sensitive is the effect estimate when we
add an additional common cause (confounder) to the dataset that is correlated
with the treatment and the outcome? (*Hint: It should not be too sensitive*)
* **Data Subsets Validation**: Does the estimated effect change significantly when
we replace the given dataset with a randomly selected subset? (*Hint: It
should not*)
* **Bootstrap Validation**: Does the estimated effect change significantly when we
replace the given dataset with bootstrapped samples from the same dataset? (*Hint: It should not*)
Examples of using refutation methods are in the `Refutations <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_refuter_notebook.ipynb>`_ notebook. For an advanced refutation that uses a simulated dataset based on user-provided or learnt data-generating processes, check out the `Dummy Outcome Refuter <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_demo_dummy_outcome_refuter.ipynb>`_ notebook.
As a practical example, `this notebook <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_refutation_testing.ipynb>`_ shows an application of refutation methods on evaluating effect estimators for the Infant Health and Development Program (IHDP) and Lalonde datasets.
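As an illustrative sketch (assuming :code:`model`, :code:`identified_estimand` and :code:`estimate` from the snippets above), the placebo-treatment check is a single call:

.. code:: python

    # Replace the treatment with a permuted (placebo) version; for a
    # trustworthy estimator the re-estimated effect should be close to zero.
    placebo_refutation = model.refute_estimate(identified_estimand, estimate,
                                               method_name="placebo_treatment_refuter",
                                               placebo_type="permute")
    print(placebo_refutation)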
Citing this package
====================
If you find DoWhy useful for your research work, please cite us as follows:
Amit Sharma, Emre Kiciman, et al. DoWhy: A Python package for causal inference. 2019. https://github.com/microsoft/dowhy
Bibtex::
@misc{dowhy,
author={Sharma, Amit and Kiciman, Emre and others},
title={Do{W}hy: {A Python package for causal inference}},
howpublished={https://github.com/microsoft/dowhy},
year={2019}
}
Alternatively, you can cite our Arxiv paper on DoWhy.
Amit Sharma, Emre Kiciman. DoWhy: An End-to-End Library for Causal Inference. 2020. https://arxiv.org/abs/2011.04216
Bibtex::
@article{dowhypaper,
title={DoWhy: An End-to-End Library for Causal Inference},
author={Sharma, Amit and Kiciman, Emre},
journal={arXiv preprint arXiv:2011.04216},
year={2020}
}
And if you find the gcm package useful for your work, please also cite us as:
Patrick Blöbaum, Peter Götz, Kailash Budhathoki, Atalanti A. Mastakouri, Dominik Janzing. DoWhy-GCM: An extension of DoWhy for causal inference in graphical causal models. 2022. https://arxiv.org/abs/2206.06821
Bibtex::
@article{dowhy_gcm,
author = {Bl{\"o}baum, Patrick and G{\"o}tz, Peter and Budhathoki, Kailash and Mastakouri, Atalanti A. and Janzing, Dominik},
title = {DoWhy-GCM: An extension of DoWhy for causal inference in graphical causal models},
journal={arXiv preprint arXiv:2206.06821},
year={2022}
}
Roadmap
=======
The `projects <https://github.com/microsoft/dowhy/projects>`_ page lists the next steps for DoWhy. If you would like to contribute, have a look at the current projects. If you have a specific request for DoWhy, please `raise an issue <https://github.com/microsoft/dowhy/issues>`_.
Contributing
============
This project welcomes contributions and suggestions. For a guide to contributing and a list of all contributors, check out `CONTRIBUTING.md <https://github.com/microsoft/dowhy/blob/main/CONTRIBUTING.md>`_ and our `docs for contributing code <https://github.com/py-why/dowhy/blob/main/docs/source/contributing/contributing-code.rst>`_. Our `contributor code of conduct is available here <https://github.com/py-why/governance/blob/main/CODE-OF-CONDUCT.md>`_. You can also join the DoWhy development channel on Discord: |discord|_
.. |discord| image:: https://img.shields.io/discord/818456847551168542
.. _discord: https://discord.gg/cSBGb3vsZb
| emrekiciman | c74b024a16ef0634422ffdda2c715031f46309bb | 81fe5d9cbd998796f02d0d9f5b1d923ba0e85383 | @emrekiciman There are `_`s missing after the ``>` ``s. rst syntax is a bit of a pain... | petergtz | 315 |
py-why/dowhy | 641 | Update README.rst | Added direct link to https://github.com/py-why/dowhy/blob/main/docs/source/contributing/contributing-code.rst that has detailed instructions for code contribution steps
Signed-off-by: emrekiciman <[email protected]> | null | 2022-09-17 01:02:37+00:00 | 2022-09-22 18:33:00+00:00 | README.rst | |BuildStatus|_ |PyPiVersion|_ |PythonSupport|_ |Downloads|_
.. |PyPiVersion| image:: https://img.shields.io/pypi/v/dowhy.svg
.. _PyPiVersion: https://pypi.org/project/dowhy/
.. |PythonSupport| image:: https://img.shields.io/pypi/pyversions/dowhy.svg
.. _PythonSupport: https://pypi.org/project/dowhy/
.. |BuildStatus| image:: https://github.com/microsoft/dowhy/workflows/Python%20package/badge.svg
.. _BuildStatus: https://github.com/microsoft/dowhy/actions
.. |Downloads| image:: https://pepy.tech/badge/dowhy
.. _Downloads: https://pepy.tech/project/dowhy
DoWhy | An end-to-end library for causal inference
===================================================
Introducing DoWhy and the 4 steps of causal inference | `Microsoft Research Blog <https://www.microsoft.com/en-us/research/blog/dowhy-a-library-for-causal-inference/>`_ | `Video Tutorial <https://note.microsoft.com/MSR-Webinar-DoWhy-Library-Registration-On-Demand.html>`_ | `Arxiv Paper <https://arxiv.org/abs/2011.04216>`_ | `Arxiv Paper (GCM-extension) <https://arxiv.org/abs/2206.06821>`_ | `Slides <https://www2.slideshare.net/AmitSharma315/dowhy-an-endtoend-library-for-causal-inference>`_
Read the `docs <https://py-why.github.io/dowhy/>`_ | Try it online! |Binder|_
.. |Binder| image:: https://mybinder.org/badge_logo.svg
.. _Binder: https://mybinder.org/v2/gh/microsoft/dowhy/main?filepath=docs%2Fsource%2F
**Case Studies using DoWhy**: `Hotel booking cancellations <https://towardsdatascience.com/beyond-predictive-models-the-causal-story-behind-hotel-booking-cancellations-d29e8558cbaf>`_ | `Effect of customer loyalty programs <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_example_effect_of_memberrewards_program.ipynb>`_ | `Optimizing article headlines <https://medium.com/@akelleh/introducing-the-do-sampler-for-causal-inference-a3296ea9e78d>`_ | `Effect of home visits on infant health (IHDP) <https://towardsdatascience.com/implementing-causal-inference-a-key-step-towards-agi-de2cde8ea599>`_ | `Causes of customer churn/attrition <https://medium.com/geekculture/a-quickstart-for-causal-analysis-decision-making-with-dowhy-2ce2d4d1efa9>`_
.. image:: https://raw.githubusercontent.com/microsoft/dowhy/main/docs/images/dowhy-schematic.png
As computing systems are more frequently and more actively intervening in societally critical domains such as healthcare, education, and governance, it is critical to correctly predict and understand the causal effects of these interventions. Without an A/B test, conventional machine learning methods, built on pattern recognition and correlational analyses, are insufficient for decision-making.
Much like machine learning libraries have done for prediction, **"DoWhy" is a Python library that aims to spark causal thinking and analysis**. DoWhy provides a principled four-step interface for causal inference that focuses on explicitly modeling causal assumptions and validating them as much as possible. The key feature of DoWhy is its state-of-the-art refutation API that can automatically test causal assumptions for any estimation method, thus making inference more robust and accessible to non-experts. DoWhy supports estimation of the average causal effect for backdoor, frontdoor, instrumental variable and other identification methods, and estimation of the conditional effect (CATE) through an integration with the EconML library.
For a quick introduction to causal inference, check out `amit-sharma/causal-inference-tutorial <https://github.com/amit-sharma/causal-inference-tutorial/>`_. We also gave a more comprehensive tutorial at the ACM Knowledge Discovery and Data Mining (`KDD 2018 <http://www.kdd.org/kdd2018/>`_) conference: `causalinference.gitlab.io/kdd-tutorial <http://causalinference.gitlab.io/kdd-tutorial/>`_. For an introduction to the four steps of causal inference and its implications for machine learning, you can access this video tutorial from Microsoft Research: `DoWhy Webinar <https://note.microsoft.com/MSR-Webinar-DoWhy-Library-Registration-On-Demand.html>`_.
Documentation for DoWhy is available at `py-why.github.io/dowhy <https://py-why.github.io/dowhy/>`_.
.. i here comment toctree::
.. i here comment :maxdepth: 4
.. i here comment :caption: Contents:
.. contents:: **Contents**
News
-----
**2022.05.27**:
* **DoWhy now part of PyWhy**
We have moved DoWhy from microsoft/dowhy to py-why/dowhy. While GitHub will automatically
redirect your git command for cloning, pulling, etc., we recommend updating git remotes and bookmarks. Please note
that the **documentation** has now moved to https://py-why.github.io/dowhy with **no** redirect from the old URL.
* **Experimental support for GCM-based inference**
We have started adding support for graphical causal model-based inference (or in short GCM-based). At the moment,
this includes support for interventions, counterfactuals, and attributing distribution changes. As part of this,
we also added features for Shapley value estimation and independence tests. We're still in the process of fleshing
everything out, including `documentation <https://py-why.github.io/dowhy/main/user_guide/gcm_based_inference/index.html>`_. Some of it is already on `main
<https://py-why.github.io/dowhy/main/user_guide/gcm_based_inference/index.html>`_, other parts are on feature branches (prefixed with ``gcm-``) with open
pull-requests, other parts will appear as new pull-requests in the next couple of weeks. Be sure to watch this space
here as we quickly expand functionality and documentation.
The need for causal inference
----------------------------------
Predictive models uncover patterns that connect the inputs and outcome in observed data. To intervene, however, we need to estimate the effect of changing an input from its current value, for which no data exists. Such questions, involving estimating a *counterfactual*, are common in decision-making scenarios.
* Will it work?
* Does a proposed change to a system improve people's outcomes?
* Why did it work?
* What led to a change in a system's outcome?
* What should we do?
* What changes to a system are likely to improve outcomes for people?
* What are the overall effects?
* How does the system interact with human behavior?
* What is the effect of a system's recommendations on people's activity?
Answering these questions requires causal reasoning. While many methods exist
for causal inference, it is hard to compare their assumptions and robustness of results. DoWhy makes three contributions,
1. Provides a principled way of modeling a given problem as a causal graph so
that all assumptions are explicit.
2. Provides a unified interface for many popular causal inference methods, combining the two major frameworks of graphical models and potential outcomes.
3. Automatically tests for the validity of assumptions if possible and assesses
the robustness of the estimate to violations.
To see DoWhy in action, check out how it can be applied to estimate the effect
of a subscription or rewards program for customers [`Rewards notebook
<https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_example_effect_of_memberrewards_program.ipynb>`_] and for implementing and evaluating causal inference methods on benchmark datasets like the `Infant Health and Development Program (IHDP) <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_ihdp_data_example.ipynb>`_ dataset, `Infant Mortality (Twins) <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_twins_example.ipynb>`_ dataset, and the `Lalonde Jobs <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_lalonde_example.ipynb>`_ dataset.
Installation
-------------
DoWhy supports Python 3.6+. To install, you can use pip or conda.
**Latest Release**
Install the latest `release <https://pypi.org/project/dowhy/>`__ using pip.
.. code:: shell
pip install dowhy
Install the latest `release <https://anaconda.org/conda-forge/dowhy>`__ using conda.
.. code:: shell
conda install -c conda-forge dowhy
If you face "Solving environment" problems with conda, then try :code:`conda update --all` and then install dowhy. If that does not work, then use :code:`conda config --set channel_priority false` and try to install again. If the problem persists, please add your issue `here <https://github.com/microsoft/dowhy/issues/197>`_.
**Development Version**
If you prefer the latest dev version, clone this repository and run the following command from the top-most folder of
the repository.
.. code:: shell
pip install -e .
**Requirements**
DoWhy requires the following packages:
* numpy
* scipy
* scikit-learn
* pandas
* networkx (for analyzing causal graphs)
* matplotlib (for general plotting)
* sympy (for rendering symbolic expressions)
If you face any problems, try installing dependencies manually.
.. code:: shell
pip install -r requirements.txt
Optionally, if you wish to input graphs in the dot format, then install pydot (or pygraphviz).
For better-looking graphs, you can optionally install pygraphviz. To proceed,
first install graphviz and then pygraphviz (on Ubuntu and Ubuntu WSL).
.. code:: shell
sudo apt install graphviz libgraphviz-dev graphviz-dev pkg-config
## from https://github.com/pygraphviz/pygraphviz/issues/71
pip install pygraphviz --install-option="--include-path=/usr/include/graphviz" \
--install-option="--library-path=/usr/lib/graphviz/"
Sample causal inference analysis in DoWhy
-------------------------------------------
Most DoWhy
analyses for causal inference take 4 lines to write, assuming a
pandas dataframe df that contains the data:
.. code:: python
from dowhy import CausalModel
import dowhy.datasets
# Load some sample data
data = dowhy.datasets.linear_dataset(
beta=10,
num_common_causes=5,
num_instruments=2,
num_samples=10000,
treatment_is_binary=True)
DoWhy supports two formats for providing the causal graph: `gml <https://github.com/GunterMueller/UNI_PASSAU_FMI_Graph_Drawing>`_ (preferred) and `dot <http://www.graphviz.org/documentation/>`_. After loading in the data, we use the four main operations in DoWhy: *model*,
*identify*, *estimate* and *refute*:
.. code:: python
# I. Create a causal model from the data and given graph.
model = CausalModel(
data=data["df"],
treatment=data["treatment_name"],
outcome=data["outcome_name"],
graph=data["gml_graph"])
# II. Identify causal effect and return target estimands
identified_estimand = model.identify_effect()
# III. Estimate the target estimand using a statistical method.
estimate = model.estimate_effect(identified_estimand,
method_name="backdoor.propensity_score_matching")
# IV. Refute the obtained estimate using multiple robustness checks.
refute_results = model.refute_estimate(identified_estimand, estimate,
method_name="random_common_cause")
DoWhy stresses the interpretability of its output. At any point in the analysis,
you can inspect the untested assumptions, identified estimands (if any) and the
estimate (if any). Here's a sample output of the linear regression estimator.
.. image:: https://raw.githubusercontent.com/microsoft/dowhy/main/docs/images/regression_output.png
For a full code example, check out the `Getting Started with DoWhy <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_simple_example.ipynb>`_ notebook. You can also use Conditional Average Treatment Effect (CATE) estimation methods from other libraries such as EconML and CausalML, as shown in the `Conditional Treatment Effects <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy-conditional-treatment-effects.ipynb>`_ notebook. For more examples of using DoWhy, check out the Jupyter notebooks in `docs/source/example_notebooks <https://github.com/microsoft/dowhy/tree/main/docs/source/example_notebooks/>`_ or try them online at `Binder <https://mybinder.org/v2/gh/microsoft/dowhy/main?filepath=docs%2Fsource%2F>`_.
GCM-based inference (experimental)
----------------------------------
Graphical causal model-based inference, or GCM-based inference for short, is an experimental addition to DoWhy. For
details, check out the `documentation for the gcm sub-package <https://py-why.github.io/dowhy/main/user_guide/gcm_based_inference/index.html>`_. The basic
recipe for this API works as follows:
.. code:: python
# 1. Modeling cause-effect relationships as a structural causal model
# (causal graph + functional causal models):
scm = gcm.StructuralCausalModel(nx.DiGraph([('X', 'Y'), ('Y', 'Z')])) # X -> Y -> Z
scm.set_causal_mechanism('X', gcm.EmpiricalDistribution())
scm.set_causal_mechanism('Y', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
scm.set_causal_mechanism('Z', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
# 2. Fitting the SCM to the data:
gcm.fit(scm, data)
# 3. Answering a causal query based on the SCM:
results = gcm.<causal_query>(scm, ...)
Here, :code:`<causal_query>` can be one of multiple functions explained in `Answering Causal Questions <https://py-why.github.io/dowhy/main/user_guide/gcm_based_inference/answering_causal_questions/index.html>`_.
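As one concrete sketch (the toy data-generating process below is purely illustrative; the full list of query functions and their signatures is in the linked documentation), an interventional query could look like:

.. code:: python

    import networkx as nx
    import numpy as np
    import pandas as pd
    from dowhy import gcm

    # Toy data consistent with the X -> Y -> Z graph used above.
    X = np.random.normal(size=1000)
    Y = 2 * X + np.random.normal(size=1000)
    Z = 3 * Y + np.random.normal(size=1000)
    data = pd.DataFrame(dict(X=X, Y=Y, Z=Z))

    scm = gcm.StructuralCausalModel(nx.DiGraph([('X', 'Y'), ('Y', 'Z')]))
    gcm.auto.assign_causal_mechanisms(scm, data)  # or set mechanisms manually
    gcm.fit(scm, data)

    # Draw samples entailed by the intervention do(Y := 2.34) and inspect Z.
    samples = gcm.interventional_samples(scm, {'Y': lambda y: 2.34},
                                         num_samples_to_draw=1000)
    print(samples['Z'].mean())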
A high-level Pandas API
-----------------------
We've made an even simpler API for DoWhy, which is a light layer on top of the standard one. The goal is to make causal analysis much more like regular exploratory analysis. To use this API, simply
import :code:`dowhy.api`. This will magically add the :code:`causal` namespace to your
:code:`pandas.DataFrame` s. Then,
you can use the namespace as follows.
.. code:: python
import dowhy.api
import dowhy.datasets
data = dowhy.datasets.linear_dataset(beta=5,
num_common_causes=1,
num_instruments = 0,
num_samples=1000,
treatment_is_binary=True)
# data['df'] is just a regular pandas.DataFrame
data['df'].causal.do(x='v0', # name of treatment variable
variable_types={'v0': 'b', 'y': 'c', 'W0': 'c'},
outcome='y',
common_causes=['W0']).groupby('v0').mean().plot(y='y', kind='bar')
.. image:: https://raw.githubusercontent.com/microsoft/dowhy/main/docs/images/do_barplot.png
For some methods, the :code:`variable_types` field must be specified. It should be a :code:`dict`, where the keys are
variable names, and values are 'b' for binary, 'o' for ordered discrete, 'u' for un-ordered discrete,
'd' for discrete, or 'c' for continuous.
**Note: If the** :code:`variable_types` **is not specified, we make use of the following implicit conversions:**
::
int -> 'c'
float -> 'c'
binary -> 'b'
category -> 'd'
**Currently we have not added support for timestamps.**
The :code:`do` method in the causal namespace generates a random sample from $P(outcome|do(X=x))$ of the
same length as your data set, and returns this outcome as a new :code:`DataFrame`. You can continue to perform
the usual :code:`DataFrame` operations with this sample, and so you can compute statistics and create plots
for causal outcomes!
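For instance, continuing the snippet above, a rough average treatment effect can be read off the do-sample as a difference in group means (a sketch reusing the same illustrative column names):

.. code:: python

    # Draw a do-sample and compare mean outcomes across the two treatment arms.
    do_df = data['df'].causal.do(x='v0',
                                 variable_types={'v0': 'b', 'y': 'c', 'W0': 'c'},
                                 outcome='y',
                                 common_causes=['W0'])

    treated = do_df[do_df['v0'] == 1]['y'].mean()
    untreated = do_df[do_df['v0'] == 0]['y'].mean()
    print(treated - untreated)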
The :code:`do` method is built on top of the lower-level :code:`dowhy` objects, so can still take a graph and perform
identification automatically when you provide a graph instead of :code:`common_causes`.
For more details, check out the `Pandas API
<https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_causal_api.ipynb>`_ notebook or the `Do Sampler <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/do_sampler_demo.ipynb>`_
notebook.
Graphical Models and Potential Outcomes: Best of both worlds
============================================================
DoWhy builds on two of the most powerful frameworks for causal inference:
graphical models and potential outcomes. It uses graph-based criteria and
do-calculus for modeling assumptions and identifying a non-parametric causal effect.
For estimation, it switches to methods based primarily on potential outcomes.
A unifying language for causal inference
----------------------------------------
DoWhy is based on a simple unifying language for causal inference. Causal
inference may seem tricky, but almost all methods follow four key steps:
1. Model a causal inference problem using assumptions.
2. Identify an expression for the causal effect under these assumptions ("causal estimand").
3. Estimate the expression using statistical methods such as matching or instrumental variables.
4. Finally, verify the validity of the estimate using a variety of robustness checks.
This workflow can be captured by four key verbs in DoWhy:
- model
- identify
- estimate
- refute
Using these verbs, DoWhy implements a causal inference engine that can support
a variety of methods. *model* encodes prior knowledge as a formal causal graph, *identify* uses
graph-based methods to identify the causal effect, *estimate* uses
statistical methods for estimating the identified estimand, and finally *refute*
tries to refute the obtained estimate by testing robustness to assumptions.
Key differences compared to available causal inference software
----------------------------------------------------------------
DoWhy brings three key differences compared to available software for causal inference:
**Explicit identifying assumptions**
Assumptions are first-class citizens in DoWhy.
Each analysis starts with
building a causal model. The assumptions can be viewed graphically or in terms
of conditional independence statements. Wherever possible, DoWhy can also
automatically test for stated assumptions using observed data.
**Separation between identification and estimation**
Identification is the causal problem. Estimation is simply a statistical problem.
DoWhy
respects this boundary and treats them separately. This focuses the causal
inference effort on identification, and frees up estimation using any
available statistical estimator for a target estimand. In addition, multiple
estimation methods can be used for a single identified_estimand and
vice-versa.
**Automated robustness checks**
What happens when key identifying assumptions may not be satisfied?
The most critical, and often skipped, part of causal analysis is checking the
robustness of an estimate to unverified assumptions. DoWhy makes it easy to
automatically run sensitivity and robustness checks on the obtained estimate.
Finally, DoWhy is easily extensible, allowing other implementations of the
four verbs to co-exist (e.g., we support implementations of the *estimation* verb from
EconML and CausalML libraries). The four verbs are mutually independent, so their
implementations can be combined in any way.
Below are more details about the current implementation of each of these verbs.
Four steps of causal inference
===============================
I. Model a causal problem
-----------------------------
DoWhy creates an underlying causal graphical model for each problem. This
serves to make each causal assumption explicit. This graph need not be
complete---you can provide a partial graph, representing prior
knowledge about some of the variables. DoWhy automatically considers the rest
of the variables as potential confounders.
Currently, DoWhy supports two formats for graph input: `gml <https://github.com/GunterMueller/UNI_PASSAU_FMI_Graph_Drawing>`_ (preferred) and
`dot <http://www.graphviz.org/documentation/>`_. We strongly suggest using gml as the input format, as it works well with networkx. You can provide the graph either as a .gml file or as a string. If you prefer to use the dot format, you will need to install additional packages (pydot or pygraphviz, see the installation section above). Both .dot files and strings are supported.
While not recommended, you can also specify common causes and/or instruments directly
instead of providing a graph.
Supported formats for specifying causal assumptions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* **Graph**: Provide a causal graph in either gml or dot format. Can be a text file
or a string.
* **Named variable sets**: Instead of the graph, provide variable names that
correspond to relevant categories, such as common causes, instrumental variables, effect
modifiers, frontdoor variables, etc.
Examples of how to instantiate a causal model are in the `Getting Started
<https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_simple_example.ipynb>`_
notebook.
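As a small sketch of the graph route (reusing the synthetic dataset helper, whose :code:`gml_graph` field holds the graph as a GML string), the same graph can be passed either as a string or, per the formats above, as a path to a .gml file:

.. code:: python

    from dowhy import CausalModel
    import dowhy.datasets

    data = dowhy.datasets.linear_dataset(beta=10, num_common_causes=3,
                                         num_instruments=1, num_samples=5000,
                                         treatment_is_binary=True)

    # 1) Graph passed as a GML string.
    model_from_string = CausalModel(data=data["df"],
                                    treatment=data["treatment_name"],
                                    outcome=data["outcome_name"],
                                    graph=data["gml_graph"])

    # 2) The same graph written to disk and passed as a .gml file path.
    with open("causal_graph.gml", "w") as f:
        f.write(data["gml_graph"])
    model_from_file = CausalModel(data=data["df"],
                                  treatment=data["treatment_name"],
                                  outcome=data["outcome_name"],
                                  graph="causal_graph.gml")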
.. i comment image:: causal_model.png
II. Identify a target estimand under the model
----------------------------------------------
Based on the causal graph, DoWhy finds all possible ways of identifying a desired causal effect.
It uses graph-based criteria and do-calculus to find
potential expressions that can identify the causal effect.
Supported identification criteria
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Back-door criterion
* Front-door criterion
* Instrumental Variables
* Mediation (Direct and indirect effect identification)
Different notebooks illustrate how to use these identification criteria. Check
out the `Simple Backdoor <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_confounder_example.ipynb>`_ notebook for the back-door criterion, and the `Simple IV <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy-simple-iv-example.ipynb>`_ notebook for the instrumental variable criterion.
III. Estimate causal effect based on the identified estimand
------------------------------------------------------------
DoWhy supports methods based on both back-door criterion and instrumental
variables. It also provides non-parametric confidence intervals and a permutation test for testing
the statistical significance of the obtained estimate.
Supported estimation methods
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Methods based on estimating the treatment assignment
* Propensity-based Stratification
* Propensity Score Matching
* Inverse Propensity Weighting
* Methods based on estimating the outcome model
* Linear Regression
* Generalized Linear Models
* Methods based on the instrumental variable equation
* Binary Instrument/Wald Estimator
* Two-stage least squares
* Regression discontinuity
* Methods for front-door criterion and general mediation
* Two-stage linear regression
Examples of using these methods are in the `Estimation methods
<https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_estimation_methods.ipynb>`_
notebook.
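As a brief sketch (assuming :code:`model` and :code:`identified_estimand` from the sample analysis above, with at least one instrument present in the graph), different estimators can be applied to the same identified estimand:

.. code:: python

    # Backdoor estimate via inverse propensity weighting.
    weighting_estimate = model.estimate_effect(
        identified_estimand,
        method_name="backdoor.propensity_score_weighting")

    # Instrumental-variable estimate using the instrument(s) declared in the graph.
    iv_estimate = model.estimate_effect(
        identified_estimand,
        method_name="iv.instrumental_variable")

    print(weighting_estimate.value, iv_estimate.value)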
Using EconML and CausalML estimation methods in DoWhy
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It is easy to call external estimation methods using DoWhy. Currently we
support integrations with the `EconML <https://github.com/microsoft/econml>`_ and `CausalML <https://github.com/uber/causalml>`_ packages. Here's an example
of estimating conditional treatment effects using EconML's double machine
learning estimator.
.. code:: python
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LassoCV
from sklearn.ensemble import GradientBoostingRegressor
dml_estimate = model.estimate_effect(identified_estimand, method_name="backdoor.econml.dml.DML",
control_value = 0,
treatment_value = 1,
target_units = lambda df: df["X0"]>1,
confidence_intervals=False,
method_params={
"init_params":{'model_y':GradientBoostingRegressor(),
'model_t': GradientBoostingRegressor(),
'model_final':LassoCV(),
'featurizer':PolynomialFeatures(degree=1, include_bias=True)},
"fit_params":{}}
)
More examples are in the `Conditional Treatment Effects with DoWhy
<https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy-conditional-treatment-effects.ipynb>`_ notebook.
IV. Refute the obtained estimate
-------------------------------------
Having access to multiple refutation methods to validate an effect estimate from a
causal estimator is a key benefit of using DoWhy.
Supported refutation methods
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* **Add Random Common Cause**: Does the estimation method change its estimate after
we add an independent random variable as a common cause to the dataset?
(*Hint: It should not*)
* **Placebo Treatment**: What happens to the estimated causal effect when we
replace the true treatment variable with an independent random variable?
(*Hint: the effect should go to zero*)
* **Dummy Outcome**: What happens to the estimated causal effect when we replace
the true outcome variable with an independent random variable? (*Hint: The
effect should go to zero*)
* **Simulated Outcome**: What happens to the estimated causal effect when we
replace the dataset with a simulated dataset based on a known data-generating
process closest to the given dataset? (*Hint: It should match the effect parameter
from the data-generating process*)
* **Add Unobserved Common Causes**: How sensitive is the effect estimate when we
add an additional common cause (confounder) to the dataset that is correlated
with the treatment and the outcome? (*Hint: It should not be too sensitive*)
* **Data Subsets Validation**: Does the estimated effect change significantly when
we replace the given dataset with a randomly selected subset? (*Hint: It
should not*)
* **Bootstrap Validation**: Does the estimated effect change significantly when we
replace the given dataset with bootstrapped samples from the same dataset? (*Hint: It should not*)
Examples of using refutation methods are in the `Refutations <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_refuter_notebook.ipynb>`_ notebook. For an advanced refutation that uses a simulated dataset based on user-provided or learnt data-generating processes, check out the `Dummy Outcome Refuter <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_demo_dummy_outcome_refuter.ipynb>`_ notebook.
As a practical example, `this notebook <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_refutation_testing.ipynb>`_ shows an application of refutation methods on evaluating effect estimators for the Infant Health and Development Program (IHDP) and Lalonde datasets.
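For example, a minimal sketch of the data-subset check (assuming :code:`model`, :code:`identified_estimand` and :code:`estimate` from the sample analysis above; the subset fraction is illustrative) looks like this:

.. code:: python

    # Re-run the estimator on random 80% subsets of the data; a stable
    # estimate should not move much.
    subset_refutation = model.refute_estimate(identified_estimand, estimate,
                                              method_name="data_subset_refuter",
                                              subset_fraction=0.8)
    print(subset_refutation)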
Citing this package
====================
If you find DoWhy useful for your research work, please cite us as follows:
Amit Sharma, Emre Kiciman, et al. DoWhy: A Python package for causal inference. 2019. https://github.com/microsoft/dowhy
Bibtex::
@misc{dowhy,
author={Sharma, Amit and Kiciman, Emre and others},
title={Do{W}hy: {A Python package for causal inference}},
howpublished={https://github.com/microsoft/dowhy},
year={2019}
}
Alternatively, you can cite our Arxiv paper on DoWhy.
Amit Sharma, Emre Kiciman. DoWhy: An End-to-End Library for Causal Inference. 2020. https://arxiv.org/abs/2011.04216
Bibtex::
@article{dowhypaper,
title={DoWhy: An End-to-End Library for Causal Inference},
author={Sharma, Amit and Kiciman, Emre},
journal={arXiv preprint arXiv:2011.04216},
year={2020}
}
And if you find the gcm package useful for your work, please also cite us as:
Patrick Blöbaum, Peter Götz, Kailash Budhathoki, Atalanti A. Mastakouri, Dominik Janzing. DoWhy-GCM: An extension of DoWhy for causal inference in graphical causal models. 2022. https://arxiv.org/abs/2206.06821
Bibtex::
@article{dowhy_gcm,
author = {Bl{\"o}baum, Patrick and G{\"o}tz, Peter and Budhathoki, Kailash and Mastakouri, Atalanti A. and Janzing, Dominik},
title = {DoWhy-GCM: An extension of DoWhy for causal inference in graphical causal models},
journal={arXiv preprint arXiv:2206.06821},
year={2022}
}
Roadmap
=======
The `projects <https://github.com/microsoft/dowhy/projects>`_ page lists the next steps for DoWhy. If you would like to contribute, have a look at the current projects. If you have a specific request for DoWhy, please `raise an issue <https://github.com/microsoft/dowhy/issues>`_.
Contributing
============
This project welcomes contributions and suggestions. For a guide to contributing and a list of all contributors, check out `CONTRIBUTING.md <https://github.com/microsoft/dowhy/blob/main/CONTRIBUTING.md>`_. Our `contributor code of conduct is available here <https://github.com/py-why/governance/blob/main/CODE-OF-CONDUCT.md>`_. You can also join the DoWhy development channel on Discord: |discord|_
.. |discord| image:: https://img.shields.io/discord/818456847551168542
.. _discord: https://discord.gg/cSBGb3vsZb
| |BuildStatus|_ |PyPiVersion|_ |PythonSupport|_ |Downloads|_
.. |PyPiVersion| image:: https://img.shields.io/pypi/v/dowhy.svg
.. _PyPiVersion: https://pypi.org/project/dowhy/
.. |PythonSupport| image:: https://img.shields.io/pypi/pyversions/dowhy.svg
.. _PythonSupport: https://pypi.org/project/dowhy/
.. |BuildStatus| image:: https://github.com/microsoft/dowhy/workflows/Python%20package/badge.svg
.. _BuildStatus: https://github.com/microsoft/dowhy/actions
.. |Downloads| image:: https://pepy.tech/badge/dowhy
.. _Downloads: https://pepy.tech/project/dowhy
DoWhy | An end-to-end library for causal inference
===================================================
Introducing DoWhy and the 4 steps of causal inference | `Microsoft Research Blog <https://www.microsoft.com/en-us/research/blog/dowhy-a-library-for-causal-inference/>`_ | `Video Tutorial <https://note.microsoft.com/MSR-Webinar-DoWhy-Library-Registration-On-Demand.html>`_ | `Arxiv Paper <https://arxiv.org/abs/2011.04216>`_ | `Arxiv Paper (GCM-extension) <https://arxiv.org/abs/2206.06821>`_ | `Slides <https://www2.slideshare.net/AmitSharma315/dowhy-an-endtoend-library-for-causal-inference>`_
Read the `docs <https://py-why.github.io/dowhy/>`_ | Try it online! |Binder|_
.. |Binder| image:: https://mybinder.org/badge_logo.svg
.. _Binder: https://mybinder.org/v2/gh/microsoft/dowhy/main?filepath=docs%2Fsource%2F
**Case Studies using DoWhy**: `Hotel booking cancellations <https://towardsdatascience.com/beyond-predictive-models-the-causal-story-behind-hotel-booking-cancellations-d29e8558cbaf>`_ | `Effect of customer loyalty programs <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_example_effect_of_memberrewards_program.ipynb>`_ | `Optimizing article headlines <https://medium.com/@akelleh/introducing-the-do-sampler-for-causal-inference-a3296ea9e78d>`_ | `Effect of home visits on infant health (IHDP) <https://towardsdatascience.com/implementing-causal-inference-a-key-step-towards-agi-de2cde8ea599>`_ | `Causes of customer churn/attrition <https://medium.com/geekculture/a-quickstart-for-causal-analysis-decision-making-with-dowhy-2ce2d4d1efa9>`_
.. image:: https://raw.githubusercontent.com/microsoft/dowhy/main/docs/images/dowhy-schematic.png
As computing systems are more frequently and more actively intervening in societally critical domains such as healthcare, education, and governance, it is critical to correctly predict and understand the causal effects of these interventions. Without an A/B test, conventional machine learning methods, built on pattern recognition and correlational analyses, are insufficient for decision-making.
Much like machine learning libraries have done for prediction, **"DoWhy" is a Python library that aims to spark causal thinking and analysis**. DoWhy provides a principled four-step interface for causal inference that focuses on explicitly modeling causal assumptions and validating them as much as possible. The key feature of DoWhy is its state-of-the-art refutation API that can automatically test causal assumptions for any estimation method, thus making inference more robust and accessible to non-experts. DoWhy supports estimation of the average causal effect for backdoor, frontdoor, instrumental variable and other identification methods, and estimation of the conditional effect (CATE) through an integration with the EconML library.
For a quick introduction to causal inference, check out `amit-sharma/causal-inference-tutorial <https://github.com/amit-sharma/causal-inference-tutorial/>`_. We also gave a more comprehensive tutorial at the ACM Knowledge Discovery and Data Mining (`KDD 2018 <http://www.kdd.org/kdd2018/>`_) conference: `causalinference.gitlab.io/kdd-tutorial <http://causalinference.gitlab.io/kdd-tutorial/>`_. For an introduction to the four steps of causal inference and its implications for machine learning, you can access this video tutorial from Microsoft Research: `DoWhy Webinar <https://note.microsoft.com/MSR-Webinar-DoWhy-Library-Registration-On-Demand.html>`_.
Documentation for DoWhy is available at `py-why.github.io/dowhy <https://py-why.github.io/dowhy/>`_.
.. i here comment toctree::
.. i here comment :maxdepth: 4
.. i here comment :caption: Contents:
.. contents:: **Contents**
News
-----
**2022.05.27**:
* **DoWhy now part of PyWhy**
We have moved DoWhy from microsoft/dowhy to py-why/dowhy. While GitHub will automatically
redirect your git command for cloning, pulling, etc., we recommend updating git remotes and bookmarks. Please note
that the **documentation** has now moved to https://py-why.github.io/dowhy with **no** redirect from the old URL.
* **Experimental support for GCM-based inference**
We have started adding support for graphical causal model-based inference (or in short GCM-based). At the moment,
this includes support for interventions, counterfactuals, and attributing distribution changes. As part of this,
we also added features for Shapley value estimation and independence tests. We're still in the process of fleshing
everything out, including `documentation <https://py-why.github.io/dowhy/main/user_guide/gcm_based_inference/index.html>`_. Some of it is already on `main
<https://py-why.github.io/dowhy/main/user_guide/gcm_based_inference/index.html>`_, other parts are on feature branches (prefixed with ``gcm-``) with open
pull-requests, other parts will appear as new pull-requests in the next couple of weeks. Be sure to watch this space
here as we quickly expand functionality and documentation.
The need for causal inference
----------------------------------
Predictive models uncover patterns that connect the inputs and outcome in observed data. To intervene, however, we need to estimate the effect of changing an input from its current value, for which no data exists. Such questions, involving estimating a *counterfactual*, are common in decision-making scenarios.
* Will it work?
* Does a proposed change to a system improve people's outcomes?
* Why did it work?
* What led to a change in a system's outcome?
* What should we do?
* What changes to a system are likely to improve outcomes for people?
* What are the overall effects?
* How does the system interact with human behavior?
* What is the effect of a system's recommendations on people's activity?
Answering these questions requires causal reasoning. While many methods exist
for causal inference, it is hard to compare their assumptions and robustness of results. DoWhy makes three contributions,
1. Provides a principled way of modeling a given problem as a causal graph so
that all assumptions are explicit.
2. Provides a unified interface for many popular causal inference methods, combining the two major frameworks of graphical models and potential outcomes.
3. Automatically tests for the validity of assumptions if possible and assesses
the robustness of the estimate to violations.
To see DoWhy in action, check out how it can be applied to estimate the effect
of a subscription or rewards program for customers [`Rewards notebook
<https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_example_effect_of_memberrewards_program.ipynb>`_] and for implementing and evaluating causal inference methods on benchmark datasets like the `Infant Health and Development Program (IHDP) <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_ihdp_data_example.ipynb>`_ dataset, `Infant Mortality (Twins) <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_twins_example.ipynb>`_ dataset, and the `Lalonde Jobs <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_lalonde_example.ipynb>`_ dataset.
Installation
-------------
DoWhy supports Python 3.6+. To install, you can use pip or conda.
**Latest Release**
Install the latest `release <https://pypi.org/project/dowhy/>`__ using pip.
.. code:: shell
pip install dowhy
Install the latest `release <https://anaconda.org/conda-forge/dowhy>`__ using conda.
.. code:: shell
conda install -c conda-forge dowhy
If you face "Solving environment" problems with conda, then try :code:`conda update --all` and then install dowhy. If that does not work, then use :code:`conda config --set channel_priority false` and try to install again. If the problem persists, please `add your issue here <https://github.com/microsoft/dowhy/issues/197>`_.
**Development Version**
If you prefer the latest dev version, clone this repository and run the following command from the top-most folder of
the repository.
.. code:: shell
pip install -e .
**Requirements**
DoWhy requires the following packages:
* numpy
* scipy
* scikit-learn
* pandas
* networkx (for analyzing causal graphs)
* matplotlib (for general plotting)
* sympy (for rendering symbolic expressions)
If you face any problems, try installing dependencies manually.
.. code:: shell
pip install -r requirements.txt
Optionally, if you wish to input graphs in the dot format, then install pydot (or pygraphviz).
For better-looking graphs, you can optionally install pygraphviz. To proceed,
first install graphviz and then pygraphviz (on Ubuntu and Ubuntu WSL).
.. code:: shell
sudo apt install graphviz libgraphviz-dev graphviz-dev pkg-config
## from https://github.com/pygraphviz/pygraphviz/issues/71
pip install pygraphviz --install-option="--include-path=/usr/include/graphviz" \
--install-option="--library-path=/usr/lib/graphviz/"
Sample causal inference analysis in DoWhy
-------------------------------------------
Most DoWhy
analyses for causal inference take 4 lines to write, assuming a
pandas dataframe df that contains the data:
.. code:: python
from dowhy import CausalModel
import dowhy.datasets
# Load some sample data
data = dowhy.datasets.linear_dataset(
beta=10,
num_common_causes=5,
num_instruments=2,
num_samples=10000,
treatment_is_binary=True)
DoWhy supports two formats for providing the causal graph: `gml <https://github.com/GunterMueller/UNI_PASSAU_FMI_Graph_Drawing>`_ (preferred) and `dot <http://www.graphviz.org/documentation/>`_. After loading in the data, we use the four main operations in DoWhy: *model*,
*identify*, *estimate* and *refute*:
.. code:: python
# I. Create a causal model from the data and given graph.
model = CausalModel(
data=data["df"],
treatment=data["treatment_name"],
outcome=data["outcome_name"],
graph=data["gml_graph"])
# II. Identify causal effect and return target estimands
identified_estimand = model.identify_effect()
# III. Estimate the target estimand using a statistical method.
estimate = model.estimate_effect(identified_estimand,
method_name="backdoor.propensity_score_matching")
# IV. Refute the obtained estimate using multiple robustness checks.
refute_results = model.refute_estimate(identified_estimand, estimate,
method_name="random_common_cause")
DoWhy stresses the interpretability of its output. At any point in the analysis,
you can inspect the untested assumptions, identified estimands (if any) and the
estimate (if any). Here's a sample output of the linear regression estimator.
.. image:: https://raw.githubusercontent.com/microsoft/dowhy/main/docs/images/regression_output.png
For a full code example, check out the `Getting Started with DoWhy <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_simple_example.ipynb>`_ notebook. You can also use Conditional Average Treatment Effect (CATE) estimation methods from other libraries such as EconML and CausalML, as shown in the `Conditional Treatment Effects <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy-conditional-treatment-effects.ipynb>`_ notebook. For more examples of using DoWhy, check out the Jupyter notebooks in `docs/source/example_notebooks <https://github.com/microsoft/dowhy/tree/main/docs/source/example_notebooks/>`_ or try them online at `Binder <https://mybinder.org/v2/gh/microsoft/dowhy/main?filepath=docs%2Fsource%2F>`_.
GCM-based inference (experimental)
----------------------------------
Graphical causal model-based inference, or GCM-based inference for short, is an experimental addition to DoWhy. For
details, check out the `documentation for the gcm sub-package <https://py-why.github.io/dowhy/main/user_guide/gcm_based_inference/index.html>`_. The basic
recipe for this API works as follows:
.. code:: python
# 1. Modeling cause-effect relationships as a structural causal model
# (causal graph + functional causal models):
scm = gcm.StructuralCausalModel(nx.DiGraph([('X', 'Y'), ('Y', 'Z')])) # X -> Y -> Z
scm.set_causal_mechanism('X', gcm.EmpiricalDistribution())
scm.set_causal_mechanism('Y', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
scm.set_causal_mechanism('Z', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
# 2. Fitting the SCM to the data:
gcm.fit(scm, data)
# 3. Answering a causal query based on the SCM:
results = gcm.<causal_query>(scm, ...)
Here, :code:`<causal_query>` can be one of multiple functions explained in `Answering Causal Questions <https://py-why.github.io/dowhy/main/user_guide/gcm_based_inference/answering_causal_questions/index.html>`_.
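For instance, an average-effect style query could look like the following sketch (the toy data-generating process is purely illustrative, and the exact query functions and signatures are documented in the linked guide):

.. code:: python

    import networkx as nx
    import numpy as np
    import pandas as pd
    from dowhy import gcm

    # Toy data consistent with the X -> Y -> Z graph used above.
    X = np.random.normal(size=1000)
    Y = 2 * X + np.random.normal(size=1000)
    Z = 3 * Y + np.random.normal(size=1000)
    data = pd.DataFrame(dict(X=X, Y=Y, Z=Z))

    scm = gcm.StructuralCausalModel(nx.DiGraph([('X', 'Y'), ('Y', 'Z')]))
    gcm.auto.assign_causal_mechanisms(scm, data)
    gcm.fit(scm, data)

    # Estimated average causal effect on Z of setting Y to 1 versus 0.
    ace = gcm.average_causal_effect(scm, 'Z',
                                    interventions_alternative={'Y': lambda y: 1},
                                    interventions_reference={'Y': lambda y: 0},
                                    num_samples_to_draw=1000)
    print(ace)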
A high-level Pandas API
-----------------------
We've made an even simpler API for DoWhy, which is a light layer on top of the standard one. The goal is to make causal analysis much more like regular exploratory analysis. To use this API, simply
import :code:`dowhy.api`. This will magically add the :code:`causal` namespace to your
:code:`pandas.DataFrame` s. Then,
you can use the namespace as follows.
.. code:: python
import dowhy.api
import dowhy.datasets
data = dowhy.datasets.linear_dataset(beta=5,
num_common_causes=1,
num_instruments = 0,
num_samples=1000,
treatment_is_binary=True)
# data['df'] is just a regular pandas.DataFrame
data['df'].causal.do(x='v0', # name of treatment variable
variable_types={'v0': 'b', 'y': 'c', 'W0': 'c'},
outcome='y',
common_causes=['W0']).groupby('v0').mean().plot(y='y', kind='bar')
.. image:: https://raw.githubusercontent.com/microsoft/dowhy/main/docs/images/do_barplot.png
For some methods, the :code:`variable_types` field must be specified. It should be a :code:`dict`, where the keys are
variable names, and values are 'b' for binary, 'o' for ordered discrete, 'u' for un-ordered discrete,
'd' for discrete, or 'c' for continuous.
**Note: If the** :code:`variable_types` **is not specified, we make use of the following implicit conversions:**
::
int -> 'c'
float -> 'c'
binary -> 'b'
category -> 'd'
**Currently we have not added support for timestamps.**
The :code:`do` method in the causal namespace generates a random sample from $P(outcome|do(X=x))$ of the
same length as your data set, and returns this outcome as a new :code:`DataFrame`. You can continue to perform
the usual :code:`DataFrame` operations with this sample, and so you can compute statistics and create plots
for causal outcomes!
The :code:`do` method is built on top of the lower-level :code:`dowhy` objects, so can still take a graph and perform
identification automatically when you provide a graph instead of :code:`common_causes`.
For more details, check out the `Pandas API
<https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_causal_api.ipynb>`_ notebook or the `Do Sampler <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/do_sampler_demo.ipynb>`_
notebook.
Graphical Models and Potential Outcomes: Best of both worlds
============================================================
DoWhy builds on two of the most powerful frameworks for causal inference:
graphical models and potential outcomes. It uses graph-based criteria and
do-calculus for modeling assumptions and identifying a non-parametric causal effect.
For estimation, it switches to methods based primarily on potential outcomes.
A unifying language for causal inference
----------------------------------------
DoWhy is based on a simple unifying language for causal inference. Causal
inference may seem tricky, but almost all methods follow four key steps:
1. Model a causal inference problem using assumptions.
2. Identify an expression for the causal effect under these assumptions ("causal estimand").
3. Estimate the expression using statistical methods such as matching or instrumental variables.
4. Finally, verify the validity of the estimate using a variety of robustness checks.
This workflow can be captured by four key verbs in DoWhy:
- model
- identify
- estimate
- refute
Using these verbs, DoWhy implements a causal inference engine that can support
a variety of methods. *model* encodes prior knowledge as a formal causal graph, *identify* uses
graph-based methods to identify the causal effect, *estimate* uses
statistical methods for estimating the identified estimand, and finally *refute*
tries to refute the obtained estimate by testing robustness to assumptions.
Key differences compared to available causal inference software
----------------------------------------------------------------
DoWhy brings three key differences compared to available software for causal inference:
**Explicit identifying assumptions**
Assumptions are first-class citizens in DoWhy.
Each analysis starts with
building a causal model. The assumptions can be viewed graphically or in terms
of conditional independence statements. Wherever possible, DoWhy can also
automatically test for stated assumptions using observed data.
**Separation between identification and estimation**
Identification is the causal problem. Estimation is simply a statistical problem.
DoWhy
respects this boundary and treats them separately. This focuses the causal
inference effort on identification, and frees up estimation using any
available statistical estimator for a target estimand. In addition, multiple
estimation methods can be used for a single identified_estimand and
vice-versa.
**Automated robustness checks**
What happens when key identifying assumptions may not be satisfied?
The most critical, and often skipped, part of causal analysis is checking the
robustness of an estimate to unverified assumptions. DoWhy makes it easy to
automatically run sensitivity and robustness checks on the obtained estimate.
Finally, DoWhy is easily extensible, allowing other implementations of the
four verbs to co-exist (e.g., we support implementations of the *estimation* verb from
EconML and CausalML libraries). The four verbs are mutually independent, so their
implementations can be combined in any way.
Below are more details about the current implementation of each of these verbs.
Four steps of causal inference
===============================
I. Model a causal problem
-----------------------------
DoWhy creates an underlying causal graphical model for each problem. This
serves to make each causal assumption explicit. This graph need not be
complete---you can provide a partial graph, representing prior
knowledge about some of the variables. DoWhy automatically considers the rest
of the variables as potential confounders.
Currently, DoWhy supports two formats for graph input: `gml <https://github.com/GunterMueller/UNI_PASSAU_FMI_Graph_Drawing>`_ (preferred) and
`dot <http://www.graphviz.org/documentation/>`_. We strongly suggest using gml as the input format, as it works well with networkx. You can provide the graph either as a .gml file or as a string. If you prefer to use dot format, you will need to install additional packages (pydot or pygraphviz, see the installation section above). Both .dot files and string format are supported.
While not recommended, you can also specify common causes and/or instruments directly
instead of providing a graph.
Supported formats for specifying causal assumptions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* **Graph**: Provide a causal graph in either gml or dot format. Can be a text file
or a string.
* **Named variable sets**: Instead of the graph, provide variable names that
correspond to relevant categories, such as common causes, instrumental variables, effect
modifiers, frontdoor variables, etc.
Examples of how to instantiate a causal model are in the `Getting Started
<https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_simple_example.ipynb>`_
notebook.
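
For illustration, a hypothetical instantiation (``df``, the variable names and ``gml_graph`` are
placeholders) could look like:

.. code:: python

    from dowhy import CausalModel

    # Option 1: provide a causal graph (a GML string or a path to a .gml file)
    model = CausalModel(data=df, treatment="v0", outcome="y", graph=gml_graph)

    # Option 2 (not recommended): skip the graph and name the variable sets directly
    model = CausalModel(data=df, treatment="v0", outcome="y",
                        common_causes=["w0", "w1"], instruments=["z0"])
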
II. Identify a target estimand under the model
----------------------------------------------
Based on the causal graph, DoWhy finds all possible ways of identifying a desired causal effect. It uses
graph-based criteria and do-calculus to find expressions that can identify the causal effect.
Supported identification criteria
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Back-door criterion
* Front-door criterion
* Instrumental Variables
* Mediation (Direct and indirect effect identification)
Different notebooks illustrate how to use these identification criteria. Check
out the `Simple Backdoor <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_confounder_example.ipynb>`_ notebook for the back-door criterion, and the `Simple IV <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy-simple-iv-example.ipynb>`_ notebook for the instrumental variable criterion.
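
In code, identification is a single call on the model built earlier (shown here as a sketch):

.. code:: python

    identified_estimand = model.identify_effect(proceed_when_unidentifiable=True)
    print(identified_estimand)  # lists the backdoor, instrumental-variable and frontdoor estimands found
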
III. Estimate causal effect based on the identified estimand
------------------------------------------------------------
DoWhy supports methods based on both back-door criterion and instrumental
variables. It also provides non-parametric confidence intervals and a permutation test for testing
the statistical significance of the obtained estimate.
Supported estimation methods
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Methods based on estimating the treatment assignment
* Propensity-based Stratification
* Propensity Score Matching
* Inverse Propensity Weighting
* Methods based on estimating the outcome model
* Linear Regression
* Generalized Linear Models
* Methods based on the instrumental variable equation
* Binary Instrument/Wald Estimator
* Two-stage least squares
* Regression discontinuity
* Methods for front-door criterion and general mediation
* Two-stage linear regression
Examples of using these methods are in the `Estimation methods
<https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_estimation_methods.ipynb>`_
notebook.
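
As a quick sketch using one of the built-in estimators (the method name below is just one of the
options listed above):

.. code:: python

    estimate = model.estimate_effect(identified_estimand,
                                     method_name="backdoor.propensity_score_stratification",
                                     target_units="ate")
    print(estimate)
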
Using EconML and CausalML estimation methods in DoWhy
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It is easy to call external estimation methods using DoWhy. Currently we
support integrations with the `EconML <https://github.com/microsoft/econml>`_ and `CausalML <https://github.com/uber/causalml>`_ packages. Here's an example
of estimating conditional treatment effects using EconML's double machine
learning estimator.
.. code:: python
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LassoCV
from sklearn.ensemble import GradientBoostingRegressor
dml_estimate = model.estimate_effect(identified_estimand, method_name="backdoor.econml.dml.DML",
control_value = 0,
treatment_value = 1,
target_units = lambda df: df["X0"]>1,
confidence_intervals=False,
method_params={
"init_params":{'model_y':GradientBoostingRegressor(),
'model_t': GradientBoostingRegressor(),
'model_final':LassoCV(),
'featurizer':PolynomialFeatures(degree=1, include_bias=True)},
"fit_params":{}}
)
More examples are in the `Conditional Treatment Effects with DoWhy
<https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy-conditional-treatment-effects.ipynb>`_ notebook.
IV. Refute the obtained estimate
-------------------------------------
Having access to multiple refutation methods to validate an effect estimate from a
causal estimator is
a key benefit of using DoWhy.
Supported refutation methods
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* **Add Random Common Cause**: Does the estimation method change its estimate after
we add an independent random variable as a common cause to the dataset?
(*Hint: It should not*)
* **Placebo Treatment**: What happens to the estimated causal effect when we
replace the true treatment variable with an independent random variable?
(*Hint: the effect should go to zero*)
* **Dummy Outcome**: What happens to the estimated causal effect when we replace
the true outcome variable with an independent random variable? (*Hint: The
effect should go to zero*)
* **Simulated Outcome**: What happens to the estimated causal effect when we
replace the dataset with a simulated dataset based on a known data-generating
process closest to the given dataset? (*Hint: It should match the effect parameter
from the data-generating process*)
* **Add Unobserved Common Causes**: How sensitive is the effect estimate when we
add an additional common cause (confounder) to the dataset that is correlated
with the treatment and the outcome? (*Hint: It should not be too sensitive*)
* **Data Subsets Validation**: Does the estimated effect change significantly when
we replace the given dataset with a randomly selected subset? (*Hint: It
should not*)
* **Bootstrap Validation**: Does the estimated effect change significantly when we
replace the given dataset with bootstrapped samples from the same dataset? (*Hint: It should not*)
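
As a sketch, a few of these refuters applied to the estimate obtained earlier:

.. code:: python

    res_random = model.refute_estimate(identified_estimand, estimate,
                                       method_name="random_common_cause")
    res_placebo = model.refute_estimate(identified_estimand, estimate,
                                        method_name="placebo_treatment_refuter",
                                        placebo_type="permute")
    res_subset = model.refute_estimate(identified_estimand, estimate,
                                       method_name="data_subset_refuter",
                                       subset_fraction=0.8)
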
Examples of using refutation methods are in the `Refutations <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_refuter_notebook.ipynb>`_ notebook. For an advanced refutation that uses a simulated dataset based on user-provided or learnt data-generating processes, check out the `Dummy Outcome Refuter <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_demo_dummy_outcome_refuter.ipynb>`_ notebook.
As a practical example, `this notebook <https://github.com/microsoft/dowhy/blob/main/docs/source/example_notebooks/dowhy_refutation_testing.ipynb>`_ shows an application of refutation methods on evaluating effect estimators for the Infant Health and Development Program (IHDP) and Lalonde datasets.
Citing this package
====================
If you find DoWhy useful for your research work, please cite us as follows:
Amit Sharma, Emre Kiciman, et al. DoWhy: A Python package for causal inference. 2019. https://github.com/microsoft/dowhy
Bibtex::
@misc{dowhy,
author={Sharma, Amit and Kiciman, Emre and others},
title={Do{W}hy: {A Python package for causal inference}},
howpublished={https://github.com/microsoft/dowhy},
year={2019}
}
Alternatively, you can cite our Arxiv paper on DoWhy.
Amit Sharma, Emre Kiciman. DoWhy: An End-to-End Library for Causal Inference. 2020. https://arxiv.org/abs/2011.04216
Bibtex::
@article{dowhypaper,
title={DoWhy: An End-to-End Library for Causal Inference},
author={Sharma, Amit and Kiciman, Emre},
journal={arXiv preprint arXiv:2011.04216},
year={2020}
}
And if you find the gcm package useful for your work, please also cite us as:
Patrick Blöbaum, Peter Götz, Kailash Budhathoki, Atalanti A. Mastakouri, Dominik Janzing. DoWhy-GCM: An extension of DoWhy for causal inference in graphical causal models. 2022. https://arxiv.org/abs/2206.06821
Bibtex::
@article{dowhy_gcm,
author = {Bl{\"o}baum, Patrick and G{\"o}tz, Peter and Budhathoki, Kailash and Mastakouri, Atalanti A. and Janzing, Dominik},
title = {DoWhy-GCM: An extension of DoWhy for causal inference in graphical causal models},
journal={arXiv preprint arXiv:2206.06821},
year={2022}
}
Roadmap
=======
The `projects <https://github.com/microsoft/dowhy/projects>`_ page lists the next steps for DoWhy. If you would like to contribute, have a look at the current projects. If you have a specific request for DoWhy, please `raise an issue <https://github.com/microsoft/dowhy/issues>`_.
Contributing
============
This project welcomes contributions and suggestions. For a guide to contributing and a list of all contributors, check out `CONTRIBUTING.md <https://github.com/microsoft/dowhy/blob/main/CONTRIBUTING.md>`_ and our `docs for contributing code <https://github.com/py-why/dowhy/blob/main/docs/source/contributing/contributing-code.rst>`_. Our `contributor code of conduct is available here <https://github.com/py-why/governance/blob/main/CODE-OF-CONDUCT.md>`_. You can also join the DoWhy development channel on Discord: |discord|_
.. |discord| image:: https://img.shields.io/discord/818456847551168542
.. _discord: https://discord.gg/cSBGb3vsZb
| emrekiciman | c74b024a16ef0634422ffdda2c715031f46309bb | 81fe5d9cbd998796f02d0d9f5b1d923ba0e85383 | Those spaces should be there now. | emrekiciman | 316 |
py-why/dowhy | 640 | Functional api/identify effect | First of a series of PR to add a functional API according to: https://github.com/py-why/dowhy/wiki/API-proposal-for-v1
* Refactor `identify_effect` to have a functional API
* Created `BackdoorIdentifier` class and extracted the logic from `CausalIdentifier` to be just a Protocol
* Refactor the `identify_effect` method of `BackdoorIdentifier` and `IDIdentifier` to take the graph as parameter
* Moved constants into `enums` for easier type checking
* Backwards compatible with previous CausalModel API
* Added notebook as demo that CausalModel API and new API behaves the same way | null | 2022-09-16 21:35:24+00:00 | 2022-09-27 01:38:27+00:00 | docs/source/example_notebooks/dowhy_efficient_backdoor_example.ipynb | {
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Finding optimal adjustment sets"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Preliminaries"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This notebook illustrates the use of the algorithms developed in [Smucler, Sapienza and Rotnitzky (Biometrika, 2022)](https://arxiv.org/abs/1912.00306) and [Smucler and Rotnitzky (Journal of Causal Inference, 2022)](https://www.degruyter.com/document/doi/10.1515/jci-2022-0015/html) to compute backdoor sets that yield efficient estimators of interventional means and their contrasts (such as the ATE), under various constraints. We begin by recalling some definitions from these papers. We omit most technical details, and point the reader to the original papers for them."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The **optimal backdoor set** is a backdoor set comprised of observable variables that yields non-parametric\n",
"estimators of the interventional mean with the smallest asymptotic variance\n",
"among those that are based on observable backdoor sets. This optimal backdoor\n",
"set always exists when no variables are latent, and the algorithm is guaranteed to compute\n",
"it in this case. Under a non-parametric graphical model with latent variables,\n",
"such a backdoor set can fail to exist. \n",
"\n",
"The **optimal minimal backdoor set** is a minimal backdoor set comprised of observable variables that yields non-parametric\n",
"estimators of the interventional mean with the smallest asymptotic variance\n",
"among those that are based on observable minimal backdoor sets.\n",
"\n",
"The **optimal minimum cost backdoor set** is a minimum cost backdoor set comprised of observable variables that yields non-parametric estimators of the interventional mean with the smallest asymptotic variance\n",
"among those that are based on observable minimum cost backdoor sets. The cost\n",
"of a backdoor set is defined as the sum of the costs of the variables that comprise it. Note that \n",
"when all costs are equal, the optimal minimum cost backdoor set is the optimal backdoor set among those that \n",
"have minimum cardinality.\n",
"\n",
"These various optimal backdoor sets are not only optimal under\n",
"non-parametric graphical models and non-parametric estimators of interventional mean,\n",
"but also under linear graphical models and OLS estimators, per results in [Henckel, Perkovic\n",
"and Maathuis (JRSS B, 2022)](https://arxiv.org/abs/1907.02435)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### The design of an observational study"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from dowhy.causal_graph import CausalGraph\n",
"from dowhy.causal_identifier import CausalIdentifier"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Consider the design of the following hypothetical observational study discussed in [Shrier & Platt (2008)](https://doi.org/10.1186/1471-2288-8-70). The aim of the study is to assess the\n",
"effect of warm-up exercises on injury after playing sports. Suppose that a researcher postulates\n",
"that the graph below represents a causal graphical model. The node warm-up is the treatment variable, which stands for the type of exercise an athlete performs prior to playing sports,\n",
"and the node injury stands for the outcome variable. \n",
"\n",
"Suppose that the goal of the study is to estimate and\n",
"compare the interventional means corresponding to different individualised treatment rules. Each\n",
"rule prescribes the type of warm-up exercise as a function of previous injury and team motivation. For example, one such rule could be to allocate a patient to perform soft warm-up excercises when she has previous injury = 1 and team motivation > 6, but any other (possibly randomised) function of previous injury and team motivation to set the treatment variable could be of interest. More formally, the goal of the study is, for some set of policies such as the aforementioned one, to estimate the mean of the outcome, in a world in which all patients are allocated to a treatment variant according to one of these policies. We will suppose moreover that due to practical limitations, the variables genetics, pre-grame proprioception,\n",
"intra-game proprioception and tissue weakness cannot be measured. Proprioception is an individual's ability to sense the movement, action, and location of their own bodies.\n",
"\n",
"To build the graph, we first create a string declaring the graph's nodes and edges. We then create a list of all observable variables, in this case, all variables in the graph except genetics, pre-game proprioception, intra-game proprioception and tissue weakness. We then pass all this information to the ```CausalGraph``` class, to create an instance of it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"graph_str=\"\"\"graph[directed 1 node[id \"coach\" label \"coach\"]\n",
" node[id \"team motivation\" label \"team motivation\"]\n",
" node[id \"fitness\" label \"fitness\"]\n",
" node[id \"pre-game prop\" label \"pre-game prop\"]\n",
" node[id \"intra-game prop\" label \"intra-game prop\"] \n",
" node[id \"neuromusc fatigue\" label \"neuromusc fatigue\"]\n",
" node[id \"warm-up\" label \"warm-up\"]\n",
" node[id \"previous injury\" label \"previous injury\"]\n",
" node[id \"contact sport\" label \"contact sport\"]\n",
" node[id \"genetics\" label \"genetics\"]\n",
" node[id \"injury\" label \"injury\"]\n",
" node[id \"tissue disorder\" label \"tissue disorder\"]\n",
" node[id \"tissue weakness\" label \"tissue weakness\"]\n",
" edge[source \"coach\" target \"team motivation\"]\n",
" edge[source \"coach\" target \"fitness\"]\n",
" edge[source \"fitness\" target \"pre-game prop\"]\n",
" edge[source \"fitness\" target \"neuromusc fatigue\"]\n",
" edge[source \"team motivation\" target \"warm-up\"]\n",
" edge[source \"team motivation\" target \"previous injury\"]\n",
" edge[source \"pre-game prop\" target \"warm-up\"]\n",
" edge[source \"warm-up\" target \"intra-game prop\"]\n",
" edge[source \"contact sport\" target \"previous injury\"]\n",
" edge[source \"contact sport\" target \"intra-game prop\"]\n",
" edge[source \"intra-game prop\" target \"injury\"]\n",
" edge[source \"genetics\" target \"fitness\"]\n",
" edge[source \"genetics\" target \"neuromusc fatigue\"]\n",
" edge[source \"genetics\" target \"tissue disorder\"]\n",
" edge[source \"tissue disorder\" target \"neuromusc fatigue\"]\n",
" edge[source \"tissue disorder\" target \"tissue weakness\"]\n",
" edge[source \"neuromusc fatigue\" target \"intra-game prop\"]\n",
" edge[source \"neuromusc fatigue\" target \"injury\"]\n",
" edge[source \"tissue weakness\" target \"injury\"]\n",
" ]\n",
"\"\"\"\n",
"observed_node_names=[\"coach\", \"team motivation\", \"fitness\", \"neuromusc fatigue\",\n",
" \"warm-up\", \"previous injury\", \"contact sport\", \"tissue disorder\", \"injury\"]\n",
"treatment_name = \"warm-up\"\n",
"outcome_name = \"injury\"\n",
"G = CausalGraph(graph=graph_str, treatment_name=treatment_name, outcome_name=outcome_name,\n",
" observed_node_names=observed_node_names)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can easily create a plot of the graph using the ```view_graph``` method."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"G.view_graph()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we illustrate how to compute the backdoor sets defined in the preliminaries section for the example graph above, using the ```CausalIdentifier``` class. To compute the optimal backdoor set, optimal minimal backdoor set and optimal minimum cost backdoor set, we need to instantiate objects of the ```CausalIdentifier``` class, passing as ```method_name``` the values \"efficient-adjustment\", \"efficient-minimal-adjustment\" and \"efficient-mincost-adjustment\" respectively. Then, we need to call the ```identify_effect``` method, passing as an argument a list of conditional nodes, that is, the nodes that would be used to decide how to allocate treatment. As discussed above, in this example these nodes are previous injury and team motivation. For settings in which we are not interested in individualized interventions, we can just pass an empty list as conditional nodes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"conditional_node_names=[\"previous injury\", \"team motivation\"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ident_eff = CausalIdentifier(\n",
" graph=G,\n",
" estimand_type=\"nonparametric-ate\",\n",
" method_name=\"efficient-adjustment\",\n",
" )\n",
"print(ident_eff.identify_effect(conditional_node_names=conditional_node_names))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Thus, the optimal backdoor set is formed by previous injury, neuromusc fatigue, team motivation, tissue disorder and contact sport."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Similarly, we can compute the optimal minimal backdoor set."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ident_minimal_eff = CausalIdentifier(\n",
" graph=G,\n",
" estimand_type=\"nonparametric-ate\",\n",
" method_name=\"efficient-minimal-adjustment\",\n",
" )\n",
"print(ident_minimal_eff.identify_effect(conditional_node_names=conditional_node_names))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, we can compute the optimal minimum cost backdoor set. Since this graph does not have any costs associated with its nodes, we will not pass any costs to ```identify_effect```. The method will raise a warning, set the costs to one, and compute the optimal minimum cost backdoor set, which as stated above, in this case coincides with the optimal backdoor set of minimum cardinality."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ident_mincost_eff = CausalIdentifier(\n",
" graph=G,\n",
" estimand_type=\"nonparametric-ate\",\n",
" method_name=\"efficient-mincost-adjustment\",\n",
" )\n",
"print(ident_mincost_eff.identify_effect(conditional_node_names=conditional_node_names))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Later, we will compute the optimal minimum cost backdoor set for a graph with costs associated with its nodes."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### An example in which sufficient conditions to guarantee the existence of an optimal backdoor set do not hold"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"[Smucler, Sapienza and Rotnitzky (Biometrika, 2022)](https://arxiv.org/abs/1912.00306) proved that when all variables are observable, or when all observable variables are ancestors of either the treatment, outcome or conditional nodes, then an optimal backdoor set can be found solely based on the graph, and provided an algorithm to compute it. This is the algorithm implemented in the examples above. \n",
"\n",
"However, there exist cases in which an observable optimal backdoor sets cannot be found solely using graphical criteria. For the graph below, [Rotnitzky and Smucler (JMLR, 2021)](https://jmlr.csail.mit.edu/papers/volume21/19-1026/19-1026.pdf) in their Example 5 showed that depending on the law generating the data, the optimal backdoor set could be formed by Z1 and Z2, or be the empty set. More precisely, they showed that there exist probability laws compatible with the graph under which {Z1, Z2} is the most efficient adjustment set, and other probability laws under which the empty set is the most efficient adjustment set; unfortunately one cannot tell from the graph alone which of the two will be better. \n",
"\n",
"Notice that in this graph, the aforementioned sufficient condition for the existence of an optimal backdoor set does not hold, since Z2 is observable but not an ancestor of treatment outcome or the conditional nodes (the empty set in this case). \n",
"\n",
"On the other hand, [Smucler, Sapienza and Rotnitzky (Biometrika, 2022)](https://arxiv.org/abs/1912.00306) showed that optimal minimal and optimal minimum cost (cardinality) observable backdoor sets always exist, as long as there exists at least one backdoor set comprised of observable variables. That is, when the search is restricted to minimal or minimum cost (cardinality) backdoor sets, a situation such as the one described above cannot happen, and the most efficient backdoor set can always be detected based solely on graphical criteria.\n",
"\n",
"For this example, calling the ```identify_effect``` method of an instance of ```CausalIdentifier``` with attribute ```method_name``` equal to \"efficient-adjustment\" will raise an error. For this graph, the optimal minimal and the optimal minimum cardinality backdoor sets are equal to the empty set."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"graph_str=\"\"\"graph[directed 1 node[id \"X\" label \"X\"]\n",
" node[id \"Y\" label \"Y\"]\n",
" node[id \"Z1\" label \"Z1\"]\n",
" node[id \"Z2\" label \"Z2\"]\n",
" node[id \"U\" label \"U\"] \n",
" edge[source \"X\" target \"Y\"]\n",
" edge[source \"Z1\" target \"X\"]\n",
" edge[source \"Z1\" target \"Z2\"]\n",
" edge[source \"U\" target \"Z2\"]\n",
" edge[source \"U\" target \"Y\"]\n",
" ]\n",
"\"\"\"\n",
"observed_node_names = ['X', 'Y', 'Z1', 'Z2']\n",
"treatment_name = 'X'\n",
"outcome_name = 'Y'\n",
"G = CausalGraph(graph=graph_str, treatment_name=treatment_name, outcome_name=outcome_name,\n",
" observed_node_names=observed_node_names)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this example, the treatment intervention is static, thus there are no conditional nodes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ident_eff = CausalIdentifier(\n",
" graph=G,\n",
" estimand_type=\"nonparametric-ate\",\n",
" method_name=\"efficient-adjustment\",\n",
" )\n",
"try:\n",
" results_eff=ident_eff.identify_effect()\n",
"except ValueError as e:\n",
" print(e)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ident_minimal_eff = CausalIdentifier(\n",
" graph=G,\n",
" estimand_type=\"nonparametric-ate\",\n",
" method_name=\"efficient-minimal-adjustment\",\n",
" )\n",
"print(ident_minimal_eff.identify_effect())"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ident_mincost_eff = CausalIdentifier(\n",
" graph=G,\n",
" estimand_type=\"nonparametric-ate\",\n",
" method_name=\"efficient-mincost-adjustment\",\n",
" )\n",
"print(ident_mincost_eff.identify_effect())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### An example in which there are no observable adjustment sets"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the graph below there are no adjustment sets comprised of only observable variables. In this setting, using any of the above methods will raise an error."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"graph_str=\"\"\"graph[directed 1 node[id \"X\" label \"X\"]\n",
" node[id \"Y\" label \"Y\"]\n",
" node[id \"U\" label \"U\"] \n",
" edge[source \"X\" target \"Y\"]\n",
" edge[source \"U\" target \"X\"]\n",
" edge[source \"U\" target \"Y\"]\n",
" ]\n",
"\"\"\"\n",
"observed_node_names = ['X', 'Y']\n",
"treatment_name = 'X'\n",
"outcome_name = 'Y'\n",
"G = CausalGraph(graph=graph_str, treatment_name=treatment_name, outcome_name=outcome_name,\n",
" observed_node_names=observed_node_names)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ident_eff = CausalIdentifier(\n",
" graph=G,\n",
" estimand_type=\"nonparametric-ate\",\n",
" method_name=\"efficient-adjustment\",\n",
" )\n",
"try:\n",
" results_eff=ident_eff.identify_effect()\n",
"except ValueError as e:\n",
" print(e)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### An example with costs"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This is the graph in Figures 1 and 2 of [Smucler and Rotnitzky (Journal of Causal Inference, 2022)](https://www.degruyter.com/document/doi/10.1515/jci-2022-0015/html). Here we assume that there are positive costs associated to observable variables."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"graph_str=\"\"\"graph[directed 1 node[id \"L\" label \"L\"]\n",
" node[id \"X\" label \"X\"]\n",
" node[id \"K\" label \"K\"]\n",
" node[id \"B\" label \"B\"]\n",
" node[id \"Q\" label \"Q\"]\n",
" node[id \"R\" label \"R\"]\n",
" node[id \"T\" label \"T\"]\n",
" node[id \"M\" label \"M\"]\n",
" node[id \"Y\" label \"Y\"]\n",
" node[id \"U\" label \"U\"]\n",
" node[id \"F\" label \"F\"]\n",
" edge[source \"L\" target \"X\"]\n",
" edge[source \"X\" target \"M\"]\n",
" edge[source \"K\" target \"X\"]\n",
" edge[source \"B\" target \"K\"]\n",
" edge[source \"B\" target \"R\"]\n",
" edge[source \"Q\" target \"K\"]\n",
" edge[source \"Q\" target \"T\"]\n",
" edge[source \"R\" target \"Y\"]\n",
" edge[source \"T\" target \"Y\"]\n",
" edge[source \"M\" target \"Y\"]\n",
" edge[source \"U\" target \"Y\"]\n",
" edge[source \"U\" target \"F\"]\n",
" ]\n",
" \"\"\"\n",
"observed_node_names=[\"L\", \"X\", \"B\", \"K\", \"Q\", \"R\", \"M\", \"T\", \"Y\", \"F\"]\n",
"conditional_node_names=[\"L\"]\n",
"costs=[\n",
" (\"L\", {\"cost\": 1}),\n",
" (\"B\", {\"cost\": 1}),\n",
" (\"K\", {\"cost\": 4}),\n",
" (\"Q\", {\"cost\": 1}),\n",
" (\"R\", {\"cost\": 2}),\n",
" (\"T\", {\"cost\": 1}),\n",
"]\n",
"G = CausalGraph(graph=graph_str, treatment_name=treatment_name, outcome_name=outcome_name,\n",
" observed_node_names=observed_node_names)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Notice how in this case we pass both the ```conditional_node_names``` list and the ```costs``` list to the ```identify_effect``` method."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"ident_mincost_eff = CausalIdentifier(\n",
" graph=G,\n",
" estimand_type=\"nonparametric-ate\",\n",
" method_name=\"efficient-mincost-adjustment\",\n",
" )\n",
"print(ident_mincost_eff.identify_effect(conditional_node_names=conditional_node_names, costs=costs))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We also compute the optimal minimal backdoor set, which in this case is different from the optimal minimum cost backdoor set."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ident_minimal_eff = CausalIdentifier(\n",
" graph=G,\n",
" estimand_type=\"nonparametric-ate\",\n",
" method_name=\"efficient-minimal-adjustment\",\n",
" )\n",
"print(ident_minimal_eff.identify_effect(conditional_node_names=conditional_node_names))"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.7"
},
"varInspector": {
"cols": {
"lenName": 16,
"lenType": 16,
"lenVar": 40
},
"kernels_config": {
"python": {
"delete_cmd_postfix": "",
"delete_cmd_prefix": "del ",
"library": "var_list.py",
"varRefreshCmd": "print(var_dic_list())"
},
"r": {
"delete_cmd_postfix": ") ",
"delete_cmd_prefix": "rm(",
"library": "var_list.r",
"varRefreshCmd": "cat(var_dic_list()) "
}
},
"types_to_exclude": [
"module",
"function",
"builtin_function_or_method",
"instance",
"_Feature"
],
"window_display": false
}
},
"nbformat": 4,
"nbformat_minor": 4
} | {
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Finding optimal adjustment sets"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Preliminaries"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This notebook illustrates the use of the algorithms developed in [Smucler, Sapienza and Rotnitzky (Biometrika, 2022)](https://arxiv.org/abs/1912.00306) and [Smucler and Rotnitzky (Journal of Causal Inference, 2022)](https://www.degruyter.com/document/doi/10.1515/jci-2022-0015/html) to compute backdoor sets that yield efficient estimators of interventional means and their contrasts (such as the ATE), under various constraints. We begin by recalling some definitions from these papers. We omit most technical details, and point the reader to the original papers for them."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The **optimal backdoor set** is a backdoor set comprised of observable variables that yields non-parametric\n",
"estimators of the interventional mean with the smallest asymptotic variance\n",
"among those that are based on observable backdoor sets. This optimal backdoor\n",
"set always exists when no variables are latent, and the algorithm is guaranteed to compute\n",
"it in this case. Under a non-parametric graphical model with latent variables,\n",
"such a backdoor set can fail to exist. \n",
"\n",
"The **optimal minimal backdoor set** is a minimal backdoor set comprised of observable variables that yields non-parametric\n",
"estimators of the interventional mean with the smallest asymptotic variance\n",
"among those that are based on observable minimal backdoor sets.\n",
"\n",
"The **optimal minimum cost backdoor set** is a minimum cost backdoor set comprised of observable variables that yields non-parametric estimators of the interventional mean with the smallest asymptotic variance\n",
"among those that are based on observable minimum cost backdoor sets. The cost\n",
"of a backdoor set is defined as the sum of the costs of the variables that comprise it. Note that \n",
"when all costs are equal, the optimal minimum cost backdoor set is the optimal backdoor set among those that \n",
"have minimum cardinality.\n",
"\n",
"These various optimal backdoor sets are not only optimal under\n",
"non-parametric graphical models and non-parametric estimators of interventional mean,\n",
"but also under linear graphical models and OLS estimators, per results in [Henckel, Perkovic\n",
"and Maathuis (JRSS B, 2022)](https://arxiv.org/abs/1907.02435)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### The design of an observational study"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from dowhy.causal_graph import CausalGraph\n",
"from dowhy.causal_identifier import AutoIdentifier, BackdoorAdjustment, EstimandType"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Consider the design of the following hypothetical observational study discussed in [Shrier & Platt (2008)](https://doi.org/10.1186/1471-2288-8-70). The aim of the study is to assess the\n",
"effect of warm-up exercises on injury after playing sports. Suppose that a researcher postulates\n",
"that the graph below represents a causal graphical model. The node warm-up is the treatment variable, which stands for the type of exercise an athlete performs prior to playing sports,\n",
"and the node injury stands for the outcome variable. \n",
"\n",
"Suppose that the goal of the study is to estimate and\n",
"compare the interventional means corresponding to different individualised treatment rules. Each\n",
"rule prescribes the type of warm-up exercise as a function of previous injury and team motivation. For example, one such rule could be to allocate a patient to perform soft warm-up excercises when she has previous injury = 1 and team motivation > 6, but any other (possibly randomised) function of previous injury and team motivation to set the treatment variable could be of interest. More formally, the goal of the study is, for some set of policies such as the aforementioned one, to estimate the mean of the outcome, in a world in which all patients are allocated to a treatment variant according to one of these policies. We will suppose moreover that due to practical limitations, the variables genetics, pre-grame proprioception,\n",
"intra-game proprioception and tissue weakness cannot be measured. Proprioception is an individual's ability to sense the movement, action, and location of their own bodies.\n",
"\n",
"To build the graph, we first create a string declaring the graph's nodes and edges. We then create a list of all observable variables, in this case, all variables in the graph except genetics, pre-game proprioception, intra-game proprioception and tissue weakness. We then pass all this information to the ```CausalGraph``` class, to create an instance of it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"graph_str = \"\"\"graph[directed 1 node[id \"coach\" label \"coach\"]\n",
" node[id \"team motivation\" label \"team motivation\"]\n",
" node[id \"fitness\" label \"fitness\"]\n",
" node[id \"pre-game prop\" label \"pre-game prop\"]\n",
" node[id \"intra-game prop\" label \"intra-game prop\"] \n",
" node[id \"neuromusc fatigue\" label \"neuromusc fatigue\"]\n",
" node[id \"warm-up\" label \"warm-up\"]\n",
" node[id \"previous injury\" label \"previous injury\"]\n",
" node[id \"contact sport\" label \"contact sport\"]\n",
" node[id \"genetics\" label \"genetics\"]\n",
" node[id \"injury\" label \"injury\"]\n",
" node[id \"tissue disorder\" label \"tissue disorder\"]\n",
" node[id \"tissue weakness\" label \"tissue weakness\"]\n",
" edge[source \"coach\" target \"team motivation\"]\n",
" edge[source \"coach\" target \"fitness\"]\n",
" edge[source \"fitness\" target \"pre-game prop\"]\n",
" edge[source \"fitness\" target \"neuromusc fatigue\"]\n",
" edge[source \"team motivation\" target \"warm-up\"]\n",
" edge[source \"team motivation\" target \"previous injury\"]\n",
" edge[source \"pre-game prop\" target \"warm-up\"]\n",
" edge[source \"warm-up\" target \"intra-game prop\"]\n",
" edge[source \"contact sport\" target \"previous injury\"]\n",
" edge[source \"contact sport\" target \"intra-game prop\"]\n",
" edge[source \"intra-game prop\" target \"injury\"]\n",
" edge[source \"genetics\" target \"fitness\"]\n",
" edge[source \"genetics\" target \"neuromusc fatigue\"]\n",
" edge[source \"genetics\" target \"tissue disorder\"]\n",
" edge[source \"tissue disorder\" target \"neuromusc fatigue\"]\n",
" edge[source \"tissue disorder\" target \"tissue weakness\"]\n",
" edge[source \"neuromusc fatigue\" target \"intra-game prop\"]\n",
" edge[source \"neuromusc fatigue\" target \"injury\"]\n",
" edge[source \"tissue weakness\" target \"injury\"]\n",
" ]\n",
"\"\"\"\n",
"observed_node_names = [\n",
" \"coach\",\n",
" \"team motivation\",\n",
" \"fitness\",\n",
" \"neuromusc fatigue\",\n",
" \"warm-up\",\n",
" \"previous injury\",\n",
" \"contact sport\",\n",
" \"tissue disorder\",\n",
" \"injury\",\n",
"]\n",
"treatment_name = \"warm-up\"\n",
"outcome_name = \"injury\"\n",
"G = CausalGraph(\n",
" graph=graph_str, treatment_name=treatment_name, outcome_name=outcome_name, observed_node_names=observed_node_names\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can easily create a plot of the graph using the ```view_graph``` method."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"G.view_graph()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we illustrate how to compute the backdoor sets defined in the preliminaries section for the example graph above, using the ```CausalIdentifier``` class. To compute the optimal backdoor set, optimal minimal backdoor set and optimal minimum cost backdoor set, we need to instantiate objects of the ```CausalIdentifier``` class, passing as ```method_name``` the values \"efficient-adjustment\", \"efficient-minimal-adjustment\" and \"efficient-mincost-adjustment\" respectively. Then, we need to call the ```identify_effect``` method, passing as an argument a list of conditional nodes, that is, the nodes that would be used to decide how to allocate treatment. As discussed above, in this example these nodes are previous injury and team motivation. For settings in which we are not interested in individualized interventions, we can just pass an empty list as conditional nodes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"conditional_node_names = [\"previous injury\", \"team motivation\"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ident_eff = AutoIdentifier(\n",
" estimand_type=EstimandType.NONPARAMETRIC_ATE,\n",
" backdoor_adjustment=BackdoorAdjustment.BACKDOOR_EFFICIENT,\n",
")\n",
"print(\n",
" ident_eff.identify_effect(\n",
" graph=G, treatment_name=treatment_name, outcome_name=outcome_name, conditional_node_names=conditional_node_names\n",
" )\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Thus, the optimal backdoor set is formed by previous injury, neuromusc fatigue, team motivation, tissue disorder and contact sport."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Similarly, we can compute the optimal minimal backdoor set."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ident_minimal_eff = AutoIdentifier(\n",
" estimand_type=EstimandType.NONPARAMETRIC_ATE,\n",
" backdoor_adjustment=BackdoorAdjustment.BACKDOOR_MIN_EFFICIENT,\n",
")\n",
"print(\n",
" ident_minimal_eff.identify_effect(\n",
" graph=G, treatment_name=treatment_name, outcome_name=outcome_name, conditional_node_names=conditional_node_names\n",
" )\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, we can compute the optimal minimum cost backdoor set. Since this graph does not have any costs associated with its nodes, we will not pass any costs to ```identify_effect```. The method will raise a warning, set the costs to one, and compute the optimal minimum cost backdoor set, which as stated above, in this case coincides with the optimal backdoor set of minimum cardinality."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ident_mincost_eff = AutoIdentifier(\n",
" estimand_type=EstimandType.NONPARAMETRIC_ATE,\n",
" backdoor_adjustment=BackdoorAdjustment.BACKDOOR_MINCOST_EFFICIENT,\n",
")\n",
"print(\n",
" ident_mincost_eff.identify_effect(\n",
" graph=G, treatment_name=treatment_name, outcome_name=outcome_name, conditional_node_names=conditional_node_names\n",
" )\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Later, we will compute the optimal minimum cost backdoor set for a graph with costs associated with its nodes."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### An example in which sufficient conditions to guarantee the existence of an optimal backdoor set do not hold"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"[Smucler, Sapienza and Rotnitzky (Biometrika, 2022)](https://arxiv.org/abs/1912.00306) proved that when all variables are observable, or when all observable variables are ancestors of either the treatment, outcome or conditional nodes, then an optimal backdoor set can be found solely based on the graph, and provided an algorithm to compute it. This is the algorithm implemented in the examples above. \n",
"\n",
"However, there exist cases in which an observable optimal backdoor sets cannot be found solely using graphical criteria. For the graph below, [Rotnitzky and Smucler (JMLR, 2021)](https://jmlr.csail.mit.edu/papers/volume21/19-1026/19-1026.pdf) in their Example 5 showed that depending on the law generating the data, the optimal backdoor set could be formed by Z1 and Z2, or be the empty set. More precisely, they showed that there exist probability laws compatible with the graph under which {Z1, Z2} is the most efficient adjustment set, and other probability laws under which the empty set is the most efficient adjustment set; unfortunately one cannot tell from the graph alone which of the two will be better. \n",
"\n",
"Notice that in this graph, the aforementioned sufficient condition for the existence of an optimal backdoor set does not hold, since Z2 is observable but not an ancestor of treatment outcome or the conditional nodes (the empty set in this case). \n",
"\n",
"On the other hand, [Smucler, Sapienza and Rotnitzky (Biometrika, 2022)](https://arxiv.org/abs/1912.00306) showed that optimal minimal and optimal minimum cost (cardinality) observable backdoor sets always exist, as long as there exists at least one backdoor set comprised of observable variables. That is, when the search is restricted to minimal or minimum cost (cardinality) backdoor sets, a situation such as the one described above cannot happen, and the most efficient backdoor set can always be detected based solely on graphical criteria.\n",
"\n",
"For this example, calling the ```identify_effect``` method of an instance of ```CausalIdentifier``` with attribute ```method_name``` equal to \"efficient-adjustment\" will raise an error. For this graph, the optimal minimal and the optimal minimum cardinality backdoor sets are equal to the empty set."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"graph_str = \"\"\"graph[directed 1 node[id \"X\" label \"X\"]\n",
" node[id \"Y\" label \"Y\"]\n",
" node[id \"Z1\" label \"Z1\"]\n",
" node[id \"Z2\" label \"Z2\"]\n",
" node[id \"U\" label \"U\"] \n",
" edge[source \"X\" target \"Y\"]\n",
" edge[source \"Z1\" target \"X\"]\n",
" edge[source \"Z1\" target \"Z2\"]\n",
" edge[source \"U\" target \"Z2\"]\n",
" edge[source \"U\" target \"Y\"]\n",
" ]\n",
"\"\"\"\n",
"observed_node_names = [\"X\", \"Y\", \"Z1\", \"Z2\"]\n",
"treatment_name = \"X\"\n",
"outcome_name = \"Y\"\n",
"G = CausalGraph(\n",
" graph=graph_str, treatment_name=treatment_name, outcome_name=outcome_name, observed_node_names=observed_node_names\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this example, the treatment intervention is static, thus there are no conditional nodes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ident_eff = AutoIdentifier(\n",
" estimand_type=EstimandType.NONPARAMETRIC_ATE,\n",
" backdoor_adjustment=BackdoorAdjustment.BACKDOOR_EFFICIENT,\n",
")\n",
"try:\n",
" results_eff = ident_eff.identify_effect(graph=G, treatment_name=treatment_name, outcome_name=outcome_name)\n",
"except ValueError as e:\n",
" print(e)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ident_minimal_eff = AutoIdentifier(\n",
" estimand_type=EstimandType.NONPARAMETRIC_ATE,\n",
" backdoor_adjustment=BackdoorAdjustment.BACKDOOR_MIN_EFFICIENT,\n",
")\n",
"print(\n",
" ident_minimal_eff.identify_effect(\n",
" graph=G,\n",
" treatment_name=treatment_name,\n",
" outcome_name=outcome_name,\n",
" )\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ident_mincost_eff = AutoIdentifier(\n",
" estimand_type=EstimandType.NONPARAMETRIC_ATE,\n",
" backdoor_adjustment=BackdoorAdjustment.BACKDOOR_MINCOST_EFFICIENT,\n",
")\n",
"print(\n",
" ident_mincost_eff.identify_effect(\n",
" graph=G,\n",
" treatment_name=treatment_name,\n",
" outcome_name=outcome_name,\n",
" )\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### An example in which there are no observable adjustment sets"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the graph below there are no adjustment sets comprised of only observable variables. In this setting, using any of the above methods will raise an error."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"graph_str = \"\"\"graph[directed 1 node[id \"X\" label \"X\"]\n",
" node[id \"Y\" label \"Y\"]\n",
" node[id \"U\" label \"U\"] \n",
" edge[source \"X\" target \"Y\"]\n",
" edge[source \"U\" target \"X\"]\n",
" edge[source \"U\" target \"Y\"]\n",
" ]\n",
"\"\"\"\n",
"observed_node_names = [\"X\", \"Y\"]\n",
"treatment_name = \"X\"\n",
"outcome_name = \"Y\"\n",
"G = CausalGraph(\n",
" graph=graph_str, treatment_name=treatment_name, outcome_name=outcome_name, observed_node_names=observed_node_names\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ident_eff = AutoIdentifier(\n",
" estimand_type=EstimandType.NONPARAMETRIC_ATE,\n",
" backdoor_adjustment=BackdoorAdjustment.BACKDOOR_EFFICIENT,\n",
")\n",
"try:\n",
" results_eff = ident_eff.identify_effect(\n",
" graph=G,\n",
" treatment_name=treatment_name,\n",
" outcome_name=outcome_name,\n",
" )\n",
"except ValueError as e:\n",
" print(e)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### An example with costs"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This is the graph in Figures 1 and 2 of [Smucler and Rotnitzky (Journal of Causal Inference, 2022)](https://www.degruyter.com/document/doi/10.1515/jci-2022-0015/html). Here we assume that there are positive costs associated to observable variables."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"graph_str = \"\"\"graph[directed 1 node[id \"L\" label \"L\"]\n",
" node[id \"X\" label \"X\"]\n",
" node[id \"K\" label \"K\"]\n",
" node[id \"B\" label \"B\"]\n",
" node[id \"Q\" label \"Q\"]\n",
" node[id \"R\" label \"R\"]\n",
" node[id \"T\" label \"T\"]\n",
" node[id \"M\" label \"M\"]\n",
" node[id \"Y\" label \"Y\"]\n",
" node[id \"U\" label \"U\"]\n",
" node[id \"F\" label \"F\"]\n",
" edge[source \"L\" target \"X\"]\n",
" edge[source \"X\" target \"M\"]\n",
" edge[source \"K\" target \"X\"]\n",
" edge[source \"B\" target \"K\"]\n",
" edge[source \"B\" target \"R\"]\n",
" edge[source \"Q\" target \"K\"]\n",
" edge[source \"Q\" target \"T\"]\n",
" edge[source \"R\" target \"Y\"]\n",
" edge[source \"T\" target \"Y\"]\n",
" edge[source \"M\" target \"Y\"]\n",
" edge[source \"U\" target \"Y\"]\n",
" edge[source \"U\" target \"F\"]\n",
" ]\n",
" \"\"\"\n",
"observed_node_names = [\"L\", \"X\", \"B\", \"K\", \"Q\", \"R\", \"M\", \"T\", \"Y\", \"F\"]\n",
"conditional_node_names = [\"L\"]\n",
"costs = [\n",
" (\"L\", {\"cost\": 1}),\n",
" (\"B\", {\"cost\": 1}),\n",
" (\"K\", {\"cost\": 4}),\n",
" (\"Q\", {\"cost\": 1}),\n",
" (\"R\", {\"cost\": 2}),\n",
" (\"T\", {\"cost\": 1}),\n",
"]\n",
"G = CausalGraph(\n",
" graph=graph_str, treatment_name=treatment_name, outcome_name=outcome_name, observed_node_names=observed_node_names\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Notice how in this case we pass both the ```conditional_node_names``` list and the ```costs``` list to the ```identify_effect``` method."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"ident_mincost_eff = AutoIdentifier(\n",
" estimand_type=EstimandType.NONPARAMETRIC_ATE,\n",
" backdoor_adjustment=BackdoorAdjustment.BACKDOOR_MINCOST_EFFICIENT,\n",
" costs=costs,\n",
")\n",
"print(\n",
" ident_mincost_eff.identify_effect(\n",
" graph=G, treatment_name=treatment_name, outcome_name=outcome_name, conditional_node_names=conditional_node_names\n",
" )\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We also compute the optimal minimal backdoor set, which in this case is different from the optimal minimum cost backdoor set."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ident_minimal_eff = AutoIdentifier(\n",
" estimand_type=EstimandType.NONPARAMETRIC_ATE,\n",
" backdoor_adjustment=BackdoorAdjustment.BACKDOOR_MIN_EFFICIENT,\n",
")\n",
"print(\n",
" ident_minimal_eff.identify_effect(\n",
" graph=G, treatment_name=treatment_name, outcome_name=outcome_name, conditional_node_names=conditional_node_names\n",
" )\n",
")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.8.10 ('dowhy-_zBapv7Q-py3.8')",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
},
"varInspector": {
"cols": {
"lenName": 16,
"lenType": 16,
"lenVar": 40
},
"kernels_config": {
"python": {
"delete_cmd_postfix": "",
"delete_cmd_prefix": "del ",
"library": "var_list.py",
"varRefreshCmd": "print(var_dic_list())"
},
"r": {
"delete_cmd_postfix": ") ",
"delete_cmd_prefix": "rm(",
"library": "var_list.r",
"varRefreshCmd": "cat(var_dic_list()) "
}
},
"types_to_exclude": [
"module",
"function",
"builtin_function_or_method",
"instance",
"_Feature"
],
"window_display": false
},
"vscode": {
"interpreter": {
"hash": "dcb481ad5d98e2afacd650b2c07afac80a299b7b701b553e333fc82865502500"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}
| andresmor-ms | 4d4df7a4637a5f5e9cd9c5362499b1fb38debeab | db953a63aae205951f028c1050f141e3cdc0c97a | need to remove outputs from this notebook. That's why it is showing a big diff. | amit-sharma | 317 |
py-why/dowhy | 640 | Functional api/identify effect | First of a series of PR to add a functional API according to: https://github.com/py-why/dowhy/wiki/API-proposal-for-v1
* Refactor `identify_effect` to have a functional API
* Created `BackdoorIdentifier` class and extracted the logic from `CausalIdentifier` to be just a Protocol
* Refactor the `identify_effect` method of `BackdoorIdentifier` and `IDIdentifier` to take the graph as parameter
* Moved constants into `enums` for easier type checking
* Backwards compatible with previous CausalModel API
* Added notebook as demo that CausalModel API and new API behaves the same way | null | 2022-09-16 21:35:24+00:00 | 2022-09-27 01:38:27+00:00 | dowhy/causal_estimators/two_stage_regression_estimator.py | import copy
import itertools
import numpy as np
import pandas as pd
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_identifier import CausalIdentifier
from dowhy.utils.api import parse_state
class TwoStageRegressionEstimator(CausalEstimator):
"""Compute treatment effect whenever the effect is fully mediated by
another variable (front-door) or when there is an instrument available.
Currently only supports a linear model for the effects.
For a list of standard args and kwargs, see documentation for
:class:`~dowhy.causal_estimator.CausalEstimator`.
Supports additional parameters as listed below.
"""
# First stage statistical model
DEFAULT_FIRST_STAGE_MODEL = LinearRegressionEstimator
# Second stage statistical model
DEFAULT_SECOND_STAGE_MODEL = LinearRegressionEstimator
def __init__(self, *args, first_stage_model=None, second_stage_model=None, **kwargs):
"""
:param first_stage_model: First stage estimator to be used. Default is
linear regression.
:param second_stage_model: Second stage estimator to be used. Default
is linear regression.
"""
# Required to ensure that self.method_params contains all the
# parameters needed to create an object of this class
args_dict = {k: v for k, v in locals().items() if k not in type(self)._STD_INIT_ARGS}
args_dict.update(kwargs)
super().__init__(*args, **args_dict)
self.logger.info("INFO: Using Two Stage Regression Estimator")
# Check if the treatment is one-dimensional
if len(self._treatment_name) > 1:
error_msg = str(self.__class__) + "cannot handle more than one treatment variable"
raise Exception(error_msg)
if self._target_estimand.identifier_method == "frontdoor":
self.logger.debug("Front-door variable used:" + ",".join(self._target_estimand.get_frontdoor_variables()))
self._frontdoor_variables_names = self._target_estimand.get_frontdoor_variables()
if self._frontdoor_variables_names:
self._frontdoor_variables = self._data[self._frontdoor_variables_names]
else:
self._frontdoor_variables = None
error_msg = "No front-door variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
elif self._target_estimand.identifier_method == "mediation":
self.logger.debug("Mediators used:" + ",".join(self._target_estimand.get_mediator_variables()))
self._mediators_names = self._target_estimand.get_mediator_variables()
if self._mediators_names:
self._mediators = self._data[self._mediators_names]
else:
self._mediators = None
error_msg = "No mediator variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
elif self._target_estimand.identifier_method == "iv":
self.logger.debug(
"Instrumental variables used:" + ",".join(self._target_estimand.get_instrumental_variables())
)
self._instrumental_variables_names = self._target_estimand.get_instrumental_variables()
if self._instrumental_variables_names:
self._instrumental_variables = self._data[self._instrumental_variables_names]
else:
self._instrumental_variables = None
error_msg = "No instrumental variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
if first_stage_model is not None:
self.first_stage_model = first_stage_model
else:
self.first_stage_model = self.__class__.DEFAULT_FIRST_STAGE_MODEL
self.logger.warning("First stage model not provided. Defaulting to sklearn.linear_model.LinearRegression.")
if second_stage_model is not None:
self.second_stage_model = second_stage_model
else:
self.second_stage_model = self.__class__.DEFAULT_SECOND_STAGE_MODEL
self.logger.warning("Second stage model not provided. Defaulting to backdoor.linear_regression.")
def _estimate_effect(self):
estimate_value = None
# First stage
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
modified_target_estimand.backdoor_variables = self._target_estimand.mediation_first_stage_confounders
if self._target_estimand.identifier_method == "frontdoor":
modified_target_estimand.outcome_variable = parse_state(self._frontdoor_variables_names)
elif self._target_estimand.identifier_method == "mediation":
modified_target_estimand.outcome_variable = parse_state(self._mediators_names)
first_stage_estimate = self.first_stage_model(
self._data,
modified_target_estimand,
self._treatment_name,
parse_state(modified_target_estimand.outcome_variable),
control_value=self._control_value,
treatment_value=self._treatment_value,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)._estimate_effect()
# Second Stage
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
modified_target_estimand.backdoor_variables = self._target_estimand.mediation_second_stage_confounders
if self._target_estimand.identifier_method == "frontdoor":
modified_target_estimand.treatment_variable = parse_state(self._frontdoor_variables_names)
elif self._target_estimand.identifier_method == "mediation":
modified_target_estimand.treatment_variable = parse_state(self._mediators_names)
second_stage_estimate = self.second_stage_model(
self._data,
modified_target_estimand,
parse_state(modified_target_estimand.treatment_variable),
parse_state(self._outcome_name), # to convert it to array before passing to causal estimator
control_value=self._control_value,
treatment_value=self._treatment_value,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)._estimate_effect()
# Combining the two estimates
natural_indirect_effect = first_stage_estimate.value * second_stage_estimate.value
# This same estimate is valid for frontdoor as well as mediation (NIE)
estimate_value = natural_indirect_effect
self.symbolic_estimator = self.construct_symbolic_estimator(
first_stage_estimate.realized_estimand_expr,
second_stage_estimate.realized_estimand_expr,
estimand_type=CausalIdentifier.NONPARAMETRIC_NIE,
)
if self._target_estimand.estimand_type == CausalIdentifier.NONPARAMETRIC_NDE:
# Total effect of treatment
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
total_effect_estimate = self.second_stage_model(
self._data,
modified_target_estimand,
self._treatment_name,
parse_state(self._outcome_name),
control_value=self._control_value,
treatment_value=self._treatment_value,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)._estimate_effect()
natural_direct_effect = total_effect_estimate.value - natural_indirect_effect
estimate_value = natural_direct_effect
self.symbolic_estimator = self.construct_symbolic_estimator(
first_stage_estimate.realized_estimand_expr,
second_stage_estimate.realized_estimand_expr,
total_effect_estimate.realized_estimand_expr,
estimand_type=self._target_estimand.estimand_type,
)
return CausalEstimate(
estimate=estimate_value,
control_value=self._control_value,
treatment_value=self._treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
)
def build_first_stage_features(self):
data_df = self._data
treatment_vals = data_df[self._treatment_name]
if len(self._observed_common_causes_names) > 0:
observed_common_causes_vals = data_df[self._observed_common_causes_names]
observed_common_causes_vals = pd.get_dummies(observed_common_causes_vals, drop_first=True)
if self._effect_modifier_names:
effect_modifiers_vals = data_df[self._effect_modifier_names]
effect_modifiers_vals = pd.get_dummies(effect_modifiers_vals, drop_first=True)
if type(treatment_vals) is not np.ndarray:
treatment_vals = treatment_vals.to_numpy()
if treatment_vals.shape[0] != data_df.shape[0]:
raise ValueError("Provided treatment values and dataframe should have the same length.")
# Building the feature matrix
n_samples = treatment_vals.shape[0]
self.logger.debug("Number of samples" + str(n_samples) + str(len(self._treatment_name)))
treatment_2d = treatment_vals.reshape((n_samples, len(self._treatment_name)))
if len(self._observed_common_causes_names) > 0:
features = np.concatenate((treatment_2d, observed_common_causes_vals), axis=1)
else:
features = treatment_2d
if self._effect_modifier_names:
for i in range(treatment_2d.shape[1]):
curr_treatment = treatment_2d[:, i]
new_features = curr_treatment[:, np.newaxis] * effect_modifiers_vals.to_numpy()
features = np.concatenate((features, new_features), axis=1)
features = features.astype(
float, copy=False
) # converting to float in case of binary treatment and no other variables
# features = sm.add_constant(features, has_constant='add') # to add an intercept term
return features
def construct_symbolic_estimator(
self, first_stage_symbolic, second_stage_symbolic, total_effect_symbolic=None, estimand_type=None
):
nie_symbolic = "(" + first_stage_symbolic + ")*(" + second_stage_symbolic + ")"
if estimand_type == CausalIdentifier.NONPARAMETRIC_NIE:
return nie_symbolic
elif estimand_type == CausalIdentifier.NONPARAMETRIC_NDE:
return "(" + total_effect_symbolic + ") - (" + nie_symbolic + ")"
| import copy
import itertools
import numpy as np
import pandas as pd
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_identifier.identify_effect import EstimandType
from dowhy.utils.api import parse_state
class TwoStageRegressionEstimator(CausalEstimator):
"""Compute treatment effect whenever the effect is fully mediated by
another variable (front-door) or when there is an instrument available.
Currently only supports a linear model for the effects.
For a list of standard args and kwargs, see documentation for
:class:`~dowhy.causal_estimator.CausalEstimator`.
Supports additional parameters as listed below.
"""
# First stage statistical model
DEFAULT_FIRST_STAGE_MODEL = LinearRegressionEstimator
# Second stage statistical model
DEFAULT_SECOND_STAGE_MODEL = LinearRegressionEstimator
def __init__(self, *args, first_stage_model=None, second_stage_model=None, **kwargs):
"""
:param first_stage_model: First stage estimator to be used. Default is
linear regression.
:param second_stage_model: Second stage estimator to be used. Default
is linear regression.
"""
# Required to ensure that self.method_params contains all the
# parameters needed to create an object of this class
args_dict = {k: v for k, v in locals().items() if k not in type(self)._STD_INIT_ARGS}
args_dict.update(kwargs)
super().__init__(*args, **args_dict)
self.logger.info("INFO: Using Two Stage Regression Estimator")
# Check if the treatment is one-dimensional
if len(self._treatment_name) > 1:
error_msg = str(self.__class__) + " cannot handle more than one treatment variable"
raise Exception(error_msg)
if self._target_estimand.identifier_method == "frontdoor":
self.logger.debug("Front-door variable used:" + ",".join(self._target_estimand.get_frontdoor_variables()))
self._frontdoor_variables_names = self._target_estimand.get_frontdoor_variables()
if self._frontdoor_variables_names:
self._frontdoor_variables = self._data[self._frontdoor_variables_names]
else:
self._frontdoor_variables = None
error_msg = "No front-door variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
elif self._target_estimand.identifier_method == "mediation":
self.logger.debug("Mediators used:" + ",".join(self._target_estimand.get_mediator_variables()))
self._mediators_names = self._target_estimand.get_mediator_variables()
if self._mediators_names:
self._mediators = self._data[self._mediators_names]
else:
self._mediators = None
error_msg = "No mediator variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
elif self._target_estimand.identifier_method == "iv":
self.logger.debug(
"Instrumental variables used:" + ",".join(self._target_estimand.get_instrumental_variables())
)
self._instrumental_variables_names = self._target_estimand.get_instrumental_variables()
if self._instrumental_variables_names:
self._instrumental_variables = self._data[self._instrumental_variables_names]
else:
self._instrumental_variables = None
error_msg = "No instrumental variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
if first_stage_model is not None:
self.first_stage_model = first_stage_model
else:
self.first_stage_model = self.__class__.DEFAULT_FIRST_STAGE_MODEL
self.logger.warning("First stage model not provided. Defaulting to sklearn.linear_model.LinearRegression.")
if second_stage_model is not None:
self.second_stage_model = second_stage_model
else:
self.second_stage_model = self.__class__.DEFAULT_SECOND_STAGE_MODEL
self.logger.warning("Second stage model not provided. Defaulting to backdoor.linear_regression.")
def _estimate_effect(self):
estimate_value = None
# First stage
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
modified_target_estimand.backdoor_variables = self._target_estimand.mediation_first_stage_confounders
if self._target_estimand.identifier_method == "frontdoor":
modified_target_estimand.outcome_variable = parse_state(self._frontdoor_variables_names)
elif self._target_estimand.identifier_method == "mediation":
modified_target_estimand.outcome_variable = parse_state(self._mediators_names)
first_stage_estimate = self.first_stage_model(
self._data,
modified_target_estimand,
self._treatment_name,
parse_state(modified_target_estimand.outcome_variable),
control_value=self._control_value,
treatment_value=self._treatment_value,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)._estimate_effect()
# Second Stage
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
modified_target_estimand.backdoor_variables = self._target_estimand.mediation_second_stage_confounders
if self._target_estimand.identifier_method == "frontdoor":
modified_target_estimand.treatment_variable = parse_state(self._frontdoor_variables_names)
elif self._target_estimand.identifier_method == "mediation":
modified_target_estimand.treatment_variable = parse_state(self._mediators_names)
second_stage_estimate = self.second_stage_model(
self._data,
modified_target_estimand,
parse_state(modified_target_estimand.treatment_variable),
parse_state(self._outcome_name), # to convert it to array before passing to causal estimator
control_value=self._control_value,
treatment_value=self._treatment_value,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)._estimate_effect()
# Combining the two estimates
natural_indirect_effect = first_stage_estimate.value * second_stage_estimate.value
# This same estimate is valid for frontdoor as well as mediation (NIE)
estimate_value = natural_indirect_effect
self.symbolic_estimator = self.construct_symbolic_estimator(
first_stage_estimate.realized_estimand_expr,
second_stage_estimate.realized_estimand_expr,
estimand_type=EstimandType.NONPARAMETRIC_NIE,
)
if self._target_estimand.estimand_type == EstimandType.NONPARAMETRIC_NDE:
# Total effect of treatment
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
total_effect_estimate = self.second_stage_model(
self._data,
modified_target_estimand,
self._treatment_name,
parse_state(self._outcome_name),
control_value=self._control_value,
treatment_value=self._treatment_value,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)._estimate_effect()
natural_direct_effect = total_effect_estimate.value - natural_indirect_effect
estimate_value = natural_direct_effect
self.symbolic_estimator = self.construct_symbolic_estimator(
first_stage_estimate.realized_estimand_expr,
second_stage_estimate.realized_estimand_expr,
total_effect_estimate.realized_estimand_expr,
estimand_type=self._target_estimand.estimand_type,
)
return CausalEstimate(
estimate=estimate_value,
control_value=self._control_value,
treatment_value=self._treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
)
def build_first_stage_features(self):
data_df = self._data
treatment_vals = data_df[self._treatment_name]
if len(self._observed_common_causes_names) > 0:
observed_common_causes_vals = data_df[self._observed_common_causes_names]
observed_common_causes_vals = pd.get_dummies(observed_common_causes_vals, drop_first=True)
if self._effect_modifier_names:
effect_modifiers_vals = data_df[self._effect_modifier_names]
effect_modifiers_vals = pd.get_dummies(effect_modifiers_vals, drop_first=True)
if type(treatment_vals) is not np.ndarray:
treatment_vals = treatment_vals.to_numpy()
if treatment_vals.shape[0] != data_df.shape[0]:
raise ValueError("Provided treatment values and dataframe should have the same length.")
# Building the feature matrix
n_samples = treatment_vals.shape[0]
self.logger.debug("Number of samples" + str(n_samples) + str(len(self._treatment_name)))
treatment_2d = treatment_vals.reshape((n_samples, len(self._treatment_name)))
if len(self._observed_common_causes_names) > 0:
features = np.concatenate((treatment_2d, observed_common_causes_vals), axis=1)
else:
features = treatment_2d
if self._effect_modifier_names:
for i in range(treatment_2d.shape[1]):
curr_treatment = treatment_2d[:, i]
new_features = curr_treatment[:, np.newaxis] * effect_modifiers_vals.to_numpy()
features = np.concatenate((features, new_features), axis=1)
features = features.astype(
float, copy=False
) # converting to float in case of binary treatment and no other variables
# features = sm.add_constant(features, has_constant='add') # to add an intercept term
return features
def construct_symbolic_estimator(
self, first_stage_symbolic, second_stage_symbolic, total_effect_symbolic=None, estimand_type=None
):
nie_symbolic = "(" + first_stage_symbolic + ")*(" + second_stage_symbolic + ")"
if estimand_type == EstimandType.NONPARAMETRIC_NIE:
return nie_symbolic
elif estimand_type == EstimandType.NONPARAMETRIC_NDE:
return "(" + total_effect_symbolic + ") - (" + nie_symbolic + ")"
| andresmor-ms | 4d4df7a4637a5f5e9cd9c5362499b1fb38debeab | db953a63aae205951f028c1050f141e3cdc0c97a | we can simply call it CausalEstimandType. We do not need Identifier.
In fact, since dowhy is about causality, EstimandType would also work. | amit-sharma | 318 |
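For readers skimming this review thread: the rename discussed above concerns the estimand-type constants used throughout the diff (NONPARAMETRIC_NIE, NONPARAMETRIC_NDE). Below is a minimal sketch of what such an enum could look like; the member names mirror the usages visible in the diff, but the actual definition in dowhy/causal_identifier/identify_effect.py (its values and any additional members) is not shown here, so treat it purely as an illustration.

from enum import Enum, auto

class EstimandType(Enum):
    # Members mirrored from the usages in the diff above; the real enum may define
    # more members and use different values (e.g. human-readable strings).
    NONPARAMETRIC_NIE = auto()
    NONPARAMETRIC_NDE = auto()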
py-why/dowhy | 640 | Functional api/identify effect | First of a series of PRs to add a functional API according to: https://github.com/py-why/dowhy/wiki/API-proposal-for-v1
* Refactor `identify_effect` to have a functional API
* Created `BackdoorIdentifier` class and extracted the logic from `CausalIdentifier` to be just a Protocol
* Refactor the `identify_effect` method of `BackdoorIdentifier` and `IDIdentifier` to take the graph as a parameter
* Moved constants into `enums` for easier type checking
* Backwards compatible with previous CausalModel API
* Added a notebook as a demo that the CausalModel API and the new API behave the same way | null | 2022-09-16 21:35:24+00:00 | 2022-09-27 01:38:27+00:00 | dowhy/causal_estimators/two_stage_regression_estimator.py | import copy
import itertools
import numpy as np
import pandas as pd
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_identifier import CausalIdentifier
from dowhy.utils.api import parse_state
class TwoStageRegressionEstimator(CausalEstimator):
"""Compute treatment effect whenever the effect is fully mediated by
another variable (front-door) or when there is an instrument available.
Currently only supports a linear model for the effects.
For a list of standard args and kwargs, see documentation for
:class:`~dowhy.causal_estimator.CausalEstimator`.
Supports additional parameters as listed below.
"""
# First stage statistical model
DEFAULT_FIRST_STAGE_MODEL = LinearRegressionEstimator
# Second stage statistical model
DEFAULT_SECOND_STAGE_MODEL = LinearRegressionEstimator
def __init__(self, *args, first_stage_model=None, second_stage_model=None, **kwargs):
"""
:param first_stage_model: First stage estimator to be used. Default is
linear regression.
:param second_stage_model: Second stage estimator to be used. Default
is linear regression.
"""
# Required to ensure that self.method_params contains all the
# parameters needed to create an object of this class
args_dict = {k: v for k, v in locals().items() if k not in type(self)._STD_INIT_ARGS}
args_dict.update(kwargs)
super().__init__(*args, **args_dict)
self.logger.info("INFO: Using Two Stage Regression Estimator")
# Check if the treatment is one-dimensional
if len(self._treatment_name) > 1:
error_msg = str(self.__class__) + " cannot handle more than one treatment variable"
raise Exception(error_msg)
if self._target_estimand.identifier_method == "frontdoor":
self.logger.debug("Front-door variable used:" + ",".join(self._target_estimand.get_frontdoor_variables()))
self._frontdoor_variables_names = self._target_estimand.get_frontdoor_variables()
if self._frontdoor_variables_names:
self._frontdoor_variables = self._data[self._frontdoor_variables_names]
else:
self._frontdoor_variables = None
error_msg = "No front-door variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
elif self._target_estimand.identifier_method == "mediation":
self.logger.debug("Mediators used:" + ",".join(self._target_estimand.get_mediator_variables()))
self._mediators_names = self._target_estimand.get_mediator_variables()
if self._mediators_names:
self._mediators = self._data[self._mediators_names]
else:
self._mediators = None
error_msg = "No mediator variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
elif self._target_estimand.identifier_method == "iv":
self.logger.debug(
"Instrumental variables used:" + ",".join(self._target_estimand.get_instrumental_variables())
)
self._instrumental_variables_names = self._target_estimand.get_instrumental_variables()
if self._instrumental_variables_names:
self._instrumental_variables = self._data[self._instrumental_variables_names]
else:
self._instrumental_variables = None
error_msg = "No instrumental variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
if first_stage_model is not None:
self.first_stage_model = first_stage_model
else:
self.first_stage_model = self.__class__.DEFAULT_FIRST_STAGE_MODEL
self.logger.warning("First stage model not provided. Defaulting to sklearn.linear_model.LinearRegression.")
if second_stage_model is not None:
self.second_stage_model = second_stage_model
else:
self.second_stage_model = self.__class__.DEFAULT_SECOND_STAGE_MODEL
self.logger.warning("Second stage model not provided. Defaulting to backdoor.linear_regression.")
def _estimate_effect(self):
estimate_value = None
# First stage
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
modified_target_estimand.backdoor_variables = self._target_estimand.mediation_first_stage_confounders
if self._target_estimand.identifier_method == "frontdoor":
modified_target_estimand.outcome_variable = parse_state(self._frontdoor_variables_names)
elif self._target_estimand.identifier_method == "mediation":
modified_target_estimand.outcome_variable = parse_state(self._mediators_names)
first_stage_estimate = self.first_stage_model(
self._data,
modified_target_estimand,
self._treatment_name,
parse_state(modified_target_estimand.outcome_variable),
control_value=self._control_value,
treatment_value=self._treatment_value,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)._estimate_effect()
# Second Stage
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
modified_target_estimand.backdoor_variables = self._target_estimand.mediation_second_stage_confounders
if self._target_estimand.identifier_method == "frontdoor":
modified_target_estimand.treatment_variable = parse_state(self._frontdoor_variables_names)
elif self._target_estimand.identifier_method == "mediation":
modified_target_estimand.treatment_variable = parse_state(self._mediators_names)
second_stage_estimate = self.second_stage_model(
self._data,
modified_target_estimand,
parse_state(modified_target_estimand.treatment_variable),
parse_state(self._outcome_name), # to convert it to array before passing to causal estimator
control_value=self._control_value,
treatment_value=self._treatment_value,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)._estimate_effect()
# Combining the two estimates
natural_indirect_effect = first_stage_estimate.value * second_stage_estimate.value
# This same estimate is valid for frontdoor as well as mediation (NIE)
estimate_value = natural_indirect_effect
self.symbolic_estimator = self.construct_symbolic_estimator(
first_stage_estimate.realized_estimand_expr,
second_stage_estimate.realized_estimand_expr,
estimand_type=CausalIdentifier.NONPARAMETRIC_NIE,
)
if self._target_estimand.estimand_type == CausalIdentifier.NONPARAMETRIC_NDE:
# Total effect of treatment
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
total_effect_estimate = self.second_stage_model(
self._data,
modified_target_estimand,
self._treatment_name,
parse_state(self._outcome_name),
control_value=self._control_value,
treatment_value=self._treatment_value,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)._estimate_effect()
natural_direct_effect = total_effect_estimate.value - natural_indirect_effect
estimate_value = natural_direct_effect
self.symbolic_estimator = self.construct_symbolic_estimator(
first_stage_estimate.realized_estimand_expr,
second_stage_estimate.realized_estimand_expr,
total_effect_estimate.realized_estimand_expr,
estimand_type=self._target_estimand.estimand_type,
)
return CausalEstimate(
estimate=estimate_value,
control_value=self._control_value,
treatment_value=self._treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
)
def build_first_stage_features(self):
data_df = self._data
treatment_vals = data_df[self._treatment_name]
if len(self._observed_common_causes_names) > 0:
observed_common_causes_vals = data_df[self._observed_common_causes_names]
observed_common_causes_vals = pd.get_dummies(observed_common_causes_vals, drop_first=True)
if self._effect_modifier_names:
effect_modifiers_vals = data_df[self._effect_modifier_names]
effect_modifiers_vals = pd.get_dummies(effect_modifiers_vals, drop_first=True)
if type(treatment_vals) is not np.ndarray:
treatment_vals = treatment_vals.to_numpy()
if treatment_vals.shape[0] != data_df.shape[0]:
raise ValueError("Provided treatment values and dataframe should have the same length.")
# Building the feature matrix
n_samples = treatment_vals.shape[0]
self.logger.debug("Number of samples" + str(n_samples) + str(len(self._treatment_name)))
treatment_2d = treatment_vals.reshape((n_samples, len(self._treatment_name)))
if len(self._observed_common_causes_names) > 0:
features = np.concatenate((treatment_2d, observed_common_causes_vals), axis=1)
else:
features = treatment_2d
if self._effect_modifier_names:
for i in range(treatment_2d.shape[1]):
curr_treatment = treatment_2d[:, i]
new_features = curr_treatment[:, np.newaxis] * effect_modifiers_vals.to_numpy()
features = np.concatenate((features, new_features), axis=1)
features = features.astype(
float, copy=False
) # converting to float in case of binary treatment and no other variables
# features = sm.add_constant(features, has_constant='add') # to add an intercept term
return features
def construct_symbolic_estimator(
self, first_stage_symbolic, second_stage_symbolic, total_effect_symbolic=None, estimand_type=None
):
nie_symbolic = "(" + first_stage_symbolic + ")*(" + second_stage_symbolic + ")"
if estimand_type == CausalIdentifier.NONPARAMETRIC_NIE:
return nie_symbolic
elif estimand_type == CausalIdentifier.NONPARAMETRIC_NDE:
return "(" + total_effect_symbolic + ") - (" + nie_symbolic + ")"
| import copy
import itertools
import numpy as np
import pandas as pd
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_identifier.identify_effect import EstimandType
from dowhy.utils.api import parse_state
class TwoStageRegressionEstimator(CausalEstimator):
"""Compute treatment effect whenever the effect is fully mediated by
another variable (front-door) or when there is an instrument available.
Currently only supports a linear model for the effects.
For a list of standard args and kwargs, see documentation for
:class:`~dowhy.causal_estimator.CausalEstimator`.
Supports additional parameters as listed below.
"""
# First stage statistical model
DEFAULT_FIRST_STAGE_MODEL = LinearRegressionEstimator
# Second stage statistical model
DEFAULT_SECOND_STAGE_MODEL = LinearRegressionEstimator
def __init__(self, *args, first_stage_model=None, second_stage_model=None, **kwargs):
"""
:param first_stage_model: First stage estimator to be used. Default is
linear regression.
:param second_stage_model: Second stage estimator to be used. Default
is linear regression.
"""
# Required to ensure that self.method_params contains all the
# parameters needed to create an object of this class
args_dict = {k: v for k, v in locals().items() if k not in type(self)._STD_INIT_ARGS}
args_dict.update(kwargs)
super().__init__(*args, **args_dict)
self.logger.info("INFO: Using Two Stage Regression Estimator")
# Check if the treatment is one-dimensional
if len(self._treatment_name) > 1:
error_msg = str(self.__class__) + " cannot handle more than one treatment variable"
raise Exception(error_msg)
if self._target_estimand.identifier_method == "frontdoor":
self.logger.debug("Front-door variable used:" + ",".join(self._target_estimand.get_frontdoor_variables()))
self._frontdoor_variables_names = self._target_estimand.get_frontdoor_variables()
if self._frontdoor_variables_names:
self._frontdoor_variables = self._data[self._frontdoor_variables_names]
else:
self._frontdoor_variables = None
error_msg = "No front-door variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
elif self._target_estimand.identifier_method == "mediation":
self.logger.debug("Mediators used:" + ",".join(self._target_estimand.get_mediator_variables()))
self._mediators_names = self._target_estimand.get_mediator_variables()
if self._mediators_names:
self._mediators = self._data[self._mediators_names]
else:
self._mediators = None
error_msg = "No mediator variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
elif self._target_estimand.identifier_method == "iv":
self.logger.debug(
"Instrumental variables used:" + ",".join(self._target_estimand.get_instrumental_variables())
)
self._instrumental_variables_names = self._target_estimand.get_instrumental_variables()
if self._instrumental_variables_names:
self._instrumental_variables = self._data[self._instrumental_variables_names]
else:
self._instrumental_variables = None
error_msg = "No instrumental variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
if first_stage_model is not None:
self.first_stage_model = first_stage_model
else:
self.first_stage_model = self.__class__.DEFAULT_FIRST_STAGE_MODEL
self.logger.warning("First stage model not provided. Defaulting to sklearn.linear_model.LinearRegression.")
if second_stage_model is not None:
self.second_stage_model = second_stage_model
else:
self.second_stage_model = self.__class__.DEFAULT_SECOND_STAGE_MODEL
self.logger.warning("Second stage model not provided. Defaulting to backdoor.linear_regression.")
def _estimate_effect(self):
estimate_value = None
# First stage
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
modified_target_estimand.backdoor_variables = self._target_estimand.mediation_first_stage_confounders
if self._target_estimand.identifier_method == "frontdoor":
modified_target_estimand.outcome_variable = parse_state(self._frontdoor_variables_names)
elif self._target_estimand.identifier_method == "mediation":
modified_target_estimand.outcome_variable = parse_state(self._mediators_names)
first_stage_estimate = self.first_stage_model(
self._data,
modified_target_estimand,
self._treatment_name,
parse_state(modified_target_estimand.outcome_variable),
control_value=self._control_value,
treatment_value=self._treatment_value,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)._estimate_effect()
# Second Stage
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
modified_target_estimand.backdoor_variables = self._target_estimand.mediation_second_stage_confounders
if self._target_estimand.identifier_method == "frontdoor":
modified_target_estimand.treatment_variable = parse_state(self._frontdoor_variables_names)
elif self._target_estimand.identifier_method == "mediation":
modified_target_estimand.treatment_variable = parse_state(self._mediators_names)
second_stage_estimate = self.second_stage_model(
self._data,
modified_target_estimand,
parse_state(modified_target_estimand.treatment_variable),
parse_state(self._outcome_name), # to convert it to array before passing to causal estimator
control_value=self._control_value,
treatment_value=self._treatment_value,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)._estimate_effect()
# Combining the two estimates
natural_indirect_effect = first_stage_estimate.value * second_stage_estimate.value
# This same estimate is valid for frontdoor as well as mediation (NIE)
estimate_value = natural_indirect_effect
self.symbolic_estimator = self.construct_symbolic_estimator(
first_stage_estimate.realized_estimand_expr,
second_stage_estimate.realized_estimand_expr,
estimand_type=EstimandType.NONPARAMETRIC_NIE,
)
if self._target_estimand.estimand_type == EstimandType.NONPARAMETRIC_NDE:
# Total effect of treatment
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
total_effect_estimate = self.second_stage_model(
self._data,
modified_target_estimand,
self._treatment_name,
parse_state(self._outcome_name),
control_value=self._control_value,
treatment_value=self._treatment_value,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)._estimate_effect()
natural_direct_effect = total_effect_estimate.value - natural_indirect_effect
estimate_value = natural_direct_effect
self.symbolic_estimator = self.construct_symbolic_estimator(
first_stage_estimate.realized_estimand_expr,
second_stage_estimate.realized_estimand_expr,
total_effect_estimate.realized_estimand_expr,
estimand_type=self._target_estimand.estimand_type,
)
return CausalEstimate(
estimate=estimate_value,
control_value=self._control_value,
treatment_value=self._treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
)
def build_first_stage_features(self):
data_df = self._data
treatment_vals = data_df[self._treatment_name]
if len(self._observed_common_causes_names) > 0:
observed_common_causes_vals = data_df[self._observed_common_causes_names]
observed_common_causes_vals = pd.get_dummies(observed_common_causes_vals, drop_first=True)
if self._effect_modifier_names:
effect_modifiers_vals = data_df[self._effect_modifier_names]
effect_modifiers_vals = pd.get_dummies(effect_modifiers_vals, drop_first=True)
if type(treatment_vals) is not np.ndarray:
treatment_vals = treatment_vals.to_numpy()
if treatment_vals.shape[0] != data_df.shape[0]:
raise ValueError("Provided treatment values and dataframe should have the same length.")
# Building the feature matrix
n_samples = treatment_vals.shape[0]
self.logger.debug("Number of samples" + str(n_samples) + str(len(self._treatment_name)))
treatment_2d = treatment_vals.reshape((n_samples, len(self._treatment_name)))
if len(self._observed_common_causes_names) > 0:
features = np.concatenate((treatment_2d, observed_common_causes_vals), axis=1)
else:
features = treatment_2d
if self._effect_modifier_names:
for i in range(treatment_2d.shape[1]):
curr_treatment = treatment_2d[:, i]
new_features = curr_treatment[:, np.newaxis] * effect_modifiers_vals.to_numpy()
features = np.concatenate((features, new_features), axis=1)
features = features.astype(
float, copy=False
) # converting to float in case of binary treatment and no other variables
# features = sm.add_constant(features, has_constant='add') # to add an intercept term
return features
def construct_symbolic_estimator(
self, first_stage_symbolic, second_stage_symbolic, total_effect_symbolic=None, estimand_type=None
):
nie_symbolic = "(" + first_stage_symbolic + ")*(" + second_stage_symbolic + ")"
if estimand_type == EstimandType.NONPARAMETRIC_NIE:
return nie_symbolic
elif estimand_type == EstimandType.NONPARAMETRIC_NDE:
return "(" + total_effect_symbolic + ") - (" + nie_symbolic + ")"
| andresmor-ms | 4d4df7a4637a5f5e9cd9c5362499b1fb38debeab | db953a63aae205951f028c1050f141e3cdc0c97a | Would you say this should be defined in a more generic package instead of causal_identifier so that it can be reused in a more general way across all of dowhy? | andresmor-ms | 319 |
py-why/dowhy | 640 | Functional api/identify effect | First of a series of PRs to add a functional API according to: https://github.com/py-why/dowhy/wiki/API-proposal-for-v1
* Refactor `identify_effect` to have a functional API
* Created `BackdoorIdentifier` class and extracted the logic from `CausalIdentifier` to be just a Protocol
* Refactor the `identify_effect` method of `BackdoorIdentifier` and `IDIdentifier` to take the graph as a parameter
* Moved constants into `enums` for easier type checking
* Backwards compatible with previous CausalModel API
* Added a notebook as a demo that the CausalModel API and the new API behave the same way | null | 2022-09-16 21:35:24+00:00 | 2022-09-27 01:38:27+00:00 | dowhy/causal_estimators/two_stage_regression_estimator.py | import copy
import itertools
import numpy as np
import pandas as pd
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_identifier import CausalIdentifier
from dowhy.utils.api import parse_state
class TwoStageRegressionEstimator(CausalEstimator):
"""Compute treatment effect whenever the effect is fully mediated by
another variable (front-door) or when there is an instrument available.
Currently only supports a linear model for the effects.
For a list of standard args and kwargs, see documentation for
:class:`~dowhy.causal_estimator.CausalEstimator`.
Supports additional parameters as listed below.
"""
# First stage statistical model
DEFAULT_FIRST_STAGE_MODEL = LinearRegressionEstimator
# Second stage statistical model
DEFAULT_SECOND_STAGE_MODEL = LinearRegressionEstimator
def __init__(self, *args, first_stage_model=None, second_stage_model=None, **kwargs):
"""
:param first_stage_model: First stage estimator to be used. Default is
linear regression.
:param second_stage_model: Second stage estimator to be used. Default
is linear regression.
"""
# Required to ensure that self.method_params contains all the
# parameters needed to create an object of this class
args_dict = {k: v for k, v in locals().items() if k not in type(self)._STD_INIT_ARGS}
args_dict.update(kwargs)
super().__init__(*args, **args_dict)
self.logger.info("INFO: Using Two Stage Regression Estimator")
# Check if the treatment is one-dimensional
if len(self._treatment_name) > 1:
error_msg = str(self.__class__) + " cannot handle more than one treatment variable"
raise Exception(error_msg)
if self._target_estimand.identifier_method == "frontdoor":
self.logger.debug("Front-door variable used:" + ",".join(self._target_estimand.get_frontdoor_variables()))
self._frontdoor_variables_names = self._target_estimand.get_frontdoor_variables()
if self._frontdoor_variables_names:
self._frontdoor_variables = self._data[self._frontdoor_variables_names]
else:
self._frontdoor_variables = None
error_msg = "No front-door variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
elif self._target_estimand.identifier_method == "mediation":
self.logger.debug("Mediators used:" + ",".join(self._target_estimand.get_mediator_variables()))
self._mediators_names = self._target_estimand.get_mediator_variables()
if self._mediators_names:
self._mediators = self._data[self._mediators_names]
else:
self._mediators = None
error_msg = "No mediator variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
elif self._target_estimand.identifier_method == "iv":
self.logger.debug(
"Instrumental variables used:" + ",".join(self._target_estimand.get_instrumental_variables())
)
self._instrumental_variables_names = self._target_estimand.get_instrumental_variables()
if self._instrumental_variables_names:
self._instrumental_variables = self._data[self._instrumental_variables_names]
else:
self._instrumental_variables = None
error_msg = "No instrumental variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
if first_stage_model is not None:
self.first_stage_model = first_stage_model
else:
self.first_stage_model = self.__class__.DEFAULT_FIRST_STAGE_MODEL
self.logger.warning("First stage model not provided. Defaulting to sklearn.linear_model.LinearRegression.")
if second_stage_model is not None:
self.second_stage_model = second_stage_model
else:
self.second_stage_model = self.__class__.DEFAULT_SECOND_STAGE_MODEL
self.logger.warning("Second stage model not provided. Defaulting to backdoor.linear_regression.")
def _estimate_effect(self):
estimate_value = None
# First stage
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
modified_target_estimand.backdoor_variables = self._target_estimand.mediation_first_stage_confounders
if self._target_estimand.identifier_method == "frontdoor":
modified_target_estimand.outcome_variable = parse_state(self._frontdoor_variables_names)
elif self._target_estimand.identifier_method == "mediation":
modified_target_estimand.outcome_variable = parse_state(self._mediators_names)
first_stage_estimate = self.first_stage_model(
self._data,
modified_target_estimand,
self._treatment_name,
parse_state(modified_target_estimand.outcome_variable),
control_value=self._control_value,
treatment_value=self._treatment_value,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)._estimate_effect()
# Second Stage
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
modified_target_estimand.backdoor_variables = self._target_estimand.mediation_second_stage_confounders
if self._target_estimand.identifier_method == "frontdoor":
modified_target_estimand.treatment_variable = parse_state(self._frontdoor_variables_names)
elif self._target_estimand.identifier_method == "mediation":
modified_target_estimand.treatment_variable = parse_state(self._mediators_names)
second_stage_estimate = self.second_stage_model(
self._data,
modified_target_estimand,
parse_state(modified_target_estimand.treatment_variable),
parse_state(self._outcome_name), # to convert it to array before passing to causal estimator
control_value=self._control_value,
treatment_value=self._treatment_value,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)._estimate_effect()
# Combining the two estimates
natural_indirect_effect = first_stage_estimate.value * second_stage_estimate.value
# This same estimate is valid for frontdoor as well as mediation (NIE)
estimate_value = natural_indirect_effect
self.symbolic_estimator = self.construct_symbolic_estimator(
first_stage_estimate.realized_estimand_expr,
second_stage_estimate.realized_estimand_expr,
estimand_type=CausalIdentifier.NONPARAMETRIC_NIE,
)
if self._target_estimand.estimand_type == CausalIdentifier.NONPARAMETRIC_NDE:
# Total effect of treatment
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
total_effect_estimate = self.second_stage_model(
self._data,
modified_target_estimand,
self._treatment_name,
parse_state(self._outcome_name),
control_value=self._control_value,
treatment_value=self._treatment_value,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)._estimate_effect()
natural_direct_effect = total_effect_estimate.value - natural_indirect_effect
estimate_value = natural_direct_effect
self.symbolic_estimator = self.construct_symbolic_estimator(
first_stage_estimate.realized_estimand_expr,
second_stage_estimate.realized_estimand_expr,
total_effect_estimate.realized_estimand_expr,
estimand_type=self._target_estimand.estimand_type,
)
return CausalEstimate(
estimate=estimate_value,
control_value=self._control_value,
treatment_value=self._treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
)
def build_first_stage_features(self):
data_df = self._data
treatment_vals = data_df[self._treatment_name]
if len(self._observed_common_causes_names) > 0:
observed_common_causes_vals = data_df[self._observed_common_causes_names]
observed_common_causes_vals = pd.get_dummies(observed_common_causes_vals, drop_first=True)
if self._effect_modifier_names:
effect_modifiers_vals = data_df[self._effect_modifier_names]
effect_modifiers_vals = pd.get_dummies(effect_modifiers_vals, drop_first=True)
if type(treatment_vals) is not np.ndarray:
treatment_vals = treatment_vals.to_numpy()
if treatment_vals.shape[0] != data_df.shape[0]:
raise ValueError("Provided treatment values and dataframe should have the same length.")
# Building the feature matrix
n_samples = treatment_vals.shape[0]
self.logger.debug("Number of samples" + str(n_samples) + str(len(self._treatment_name)))
treatment_2d = treatment_vals.reshape((n_samples, len(self._treatment_name)))
if len(self._observed_common_causes_names) > 0:
features = np.concatenate((treatment_2d, observed_common_causes_vals), axis=1)
else:
features = treatment_2d
if self._effect_modifier_names:
for i in range(treatment_2d.shape[1]):
curr_treatment = treatment_2d[:, i]
new_features = curr_treatment[:, np.newaxis] * effect_modifiers_vals.to_numpy()
features = np.concatenate((features, new_features), axis=1)
features = features.astype(
float, copy=False
) # converting to float in case of binary treatment and no other variables
# features = sm.add_constant(features, has_constant='add') # to add an intercept term
return features
def construct_symbolic_estimator(
self, first_stage_symbolic, second_stage_symbolic, total_effect_symbolic=None, estimand_type=None
):
nie_symbolic = "(" + first_stage_symbolic + ")*(" + second_stage_symbolic + ")"
if estimand_type == CausalIdentifier.NONPARAMETRIC_NIE:
return nie_symbolic
elif estimand_type == CausalIdentifier.NONPARAMETRIC_NDE:
return "(" + total_effect_symbolic + ") - (" + nie_symbolic + ")"
| import copy
import itertools
import numpy as np
import pandas as pd
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_identifier.identify_effect import EstimandType
from dowhy.utils.api import parse_state
class TwoStageRegressionEstimator(CausalEstimator):
"""Compute treatment effect whenever the effect is fully mediated by
another variable (front-door) or when there is an instrument available.
Currently only supports a linear model for the effects.
For a list of standard args and kwargs, see documentation for
:class:`~dowhy.causal_estimator.CausalEstimator`.
Supports additional parameters as listed below.
"""
# First stage statistical model
DEFAULT_FIRST_STAGE_MODEL = LinearRegressionEstimator
# Second stage statistical model
DEFAULT_SECOND_STAGE_MODEL = LinearRegressionEstimator
def __init__(self, *args, first_stage_model=None, second_stage_model=None, **kwargs):
"""
:param first_stage_model: First stage estimator to be used. Default is
linear regression.
:param second_stage_model: Second stage estimator to be used. Default
is linear regression.
"""
# Required to ensure that self.method_params contains all the
# parameters needed to create an object of this class
args_dict = {k: v for k, v in locals().items() if k not in type(self)._STD_INIT_ARGS}
args_dict.update(kwargs)
super().__init__(*args, **args_dict)
self.logger.info("INFO: Using Two Stage Regression Estimator")
# Check if the treatment is one-dimensional
if len(self._treatment_name) > 1:
error_msg = str(self.__class__) + " cannot handle more than one treatment variable"
raise Exception(error_msg)
if self._target_estimand.identifier_method == "frontdoor":
self.logger.debug("Front-door variable used:" + ",".join(self._target_estimand.get_frontdoor_variables()))
self._frontdoor_variables_names = self._target_estimand.get_frontdoor_variables()
if self._frontdoor_variables_names:
self._frontdoor_variables = self._data[self._frontdoor_variables_names]
else:
self._frontdoor_variables = None
error_msg = "No front-door variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
elif self._target_estimand.identifier_method == "mediation":
self.logger.debug("Mediators used:" + ",".join(self._target_estimand.get_mediator_variables()))
self._mediators_names = self._target_estimand.get_mediator_variables()
if self._mediators_names:
self._mediators = self._data[self._mediators_names]
else:
self._mediators = None
error_msg = "No mediator variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
elif self._target_estimand.identifier_method == "iv":
self.logger.debug(
"Instrumental variables used:" + ",".join(self._target_estimand.get_instrumental_variables())
)
self._instrumental_variables_names = self._target_estimand.get_instrumental_variables()
if self._instrumental_variables_names:
self._instrumental_variables = self._data[self._instrumental_variables_names]
else:
self._instrumental_variables = None
error_msg = "No instrumental variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
if first_stage_model is not None:
self.first_stage_model = first_stage_model
else:
self.first_stage_model = self.__class__.DEFAULT_FIRST_STAGE_MODEL
self.logger.warning("First stage model not provided. Defaulting to sklearn.linear_model.LinearRegression.")
if second_stage_model is not None:
self.second_stage_model = second_stage_model
else:
self.second_stage_model = self.__class__.DEFAULT_SECOND_STAGE_MODEL
self.logger.warning("Second stage model not provided. Defaulting to backdoor.linear_regression.")
def _estimate_effect(self):
estimate_value = None
# First stage
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
modified_target_estimand.backdoor_variables = self._target_estimand.mediation_first_stage_confounders
if self._target_estimand.identifier_method == "frontdoor":
modified_target_estimand.outcome_variable = parse_state(self._frontdoor_variables_names)
elif self._target_estimand.identifier_method == "mediation":
modified_target_estimand.outcome_variable = parse_state(self._mediators_names)
first_stage_estimate = self.first_stage_model(
self._data,
modified_target_estimand,
self._treatment_name,
parse_state(modified_target_estimand.outcome_variable),
control_value=self._control_value,
treatment_value=self._treatment_value,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)._estimate_effect()
# Second Stage
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
modified_target_estimand.backdoor_variables = self._target_estimand.mediation_second_stage_confounders
if self._target_estimand.identifier_method == "frontdoor":
modified_target_estimand.treatment_variable = parse_state(self._frontdoor_variables_names)
elif self._target_estimand.identifier_method == "mediation":
modified_target_estimand.treatment_variable = parse_state(self._mediators_names)
second_stage_estimate = self.second_stage_model(
self._data,
modified_target_estimand,
parse_state(modified_target_estimand.treatment_variable),
parse_state(self._outcome_name), # to convert it to array before passing to causal estimator
control_value=self._control_value,
treatment_value=self._treatment_value,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)._estimate_effect()
# Combining the two estimates
natural_indirect_effect = first_stage_estimate.value * second_stage_estimate.value
# This same estimate is valid for frontdoor as well as mediation (NIE)
estimate_value = natural_indirect_effect
self.symbolic_estimator = self.construct_symbolic_estimator(
first_stage_estimate.realized_estimand_expr,
second_stage_estimate.realized_estimand_expr,
estimand_type=EstimandType.NONPARAMETRIC_NIE,
)
if self._target_estimand.estimand_type == EstimandType.NONPARAMETRIC_NDE:
# Total effect of treatment
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
total_effect_estimate = self.second_stage_model(
self._data,
modified_target_estimand,
self._treatment_name,
parse_state(self._outcome_name),
control_value=self._control_value,
treatment_value=self._treatment_value,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)._estimate_effect()
natural_direct_effect = total_effect_estimate.value - natural_indirect_effect
estimate_value = natural_direct_effect
self.symbolic_estimator = self.construct_symbolic_estimator(
first_stage_estimate.realized_estimand_expr,
second_stage_estimate.realized_estimand_expr,
total_effect_estimate.realized_estimand_expr,
estimand_type=self._target_estimand.estimand_type,
)
return CausalEstimate(
estimate=estimate_value,
control_value=self._control_value,
treatment_value=self._treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
)
def build_first_stage_features(self):
data_df = self._data
treatment_vals = data_df[self._treatment_name]
if len(self._observed_common_causes_names) > 0:
observed_common_causes_vals = data_df[self._observed_common_causes_names]
observed_common_causes_vals = pd.get_dummies(observed_common_causes_vals, drop_first=True)
if self._effect_modifier_names:
effect_modifiers_vals = data_df[self._effect_modifier_names]
effect_modifiers_vals = pd.get_dummies(effect_modifiers_vals, drop_first=True)
if type(treatment_vals) is not np.ndarray:
treatment_vals = treatment_vals.to_numpy()
if treatment_vals.shape[0] != data_df.shape[0]:
raise ValueError("Provided treatment values and dataframe should have the same length.")
# Building the feature matrix
n_samples = treatment_vals.shape[0]
self.logger.debug("Number of samples" + str(n_samples) + str(len(self._treatment_name)))
treatment_2d = treatment_vals.reshape((n_samples, len(self._treatment_name)))
if len(self._observed_common_causes_names) > 0:
features = np.concatenate((treatment_2d, observed_common_causes_vals), axis=1)
else:
features = treatment_2d
if self._effect_modifier_names:
for i in range(treatment_2d.shape[1]):
curr_treatment = treatment_2d[:, i]
new_features = curr_treatment[:, np.newaxis] * effect_modifiers_vals.to_numpy()
features = np.concatenate((features, new_features), axis=1)
features = features.astype(
float, copy=False
) # converting to float in case of binary treatment and no other variables
# features = sm.add_constant(features, has_constant='add') # to add an intercept term
return features
def construct_symbolic_estimator(
self, first_stage_symbolic, second_stage_symbolic, total_effect_symbolic=None, estimand_type=None
):
nie_symbolic = "(" + first_stage_symbolic + ")*(" + second_stage_symbolic + ")"
if estimand_type == EstimandType.NONPARAMETRIC_NIE:
return nie_symbolic
elif estimand_type == EstimandType.NONPARAMETRIC_NDE:
return "(" + total_effect_symbolic + ") - (" + nie_symbolic + ")"
| andresmor-ms | 4d4df7a4637a5f5e9cd9c5362499b1fb38debeab | db953a63aae205951f028c1050f141e3cdc0c97a | Let's keep it where it is right now. The current EstimandType is specific to the effect inference pipeline and the interface between identification and estimation. | emrekiciman | 320 |
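Taken together, the PR description and the review comments above describe a split in which identification produces an estimand (tagged with an EstimandType) that estimators such as TwoStageRegressionEstimator then consume. The snippet below is a rough, hypothetical usage sketch of that interface: only the BackdoorIdentifier class, the identify_effect method, and the fact that it now takes the graph as a parameter come from the PR text, while the import path, constructor arguments, and keyword names are assumptions that may differ from the actual implementation.

import networkx as nx
# Hypothetical import path; the PR text only states that a BackdoorIdentifier class was created.
from dowhy.causal_identifier import BackdoorIdentifier

graph = nx.DiGraph([("Z", "X"), ("Z", "Y"), ("X", "Y")])
identifier = BackdoorIdentifier()
# Per the PR description, identify_effect now takes the graph as a parameter;
# the treatment/outcome keyword names used here are illustrative only.
estimand = identifier.identify_effect(graph, treatment_name="X", outcome_name="Y")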
py-why/dowhy | 625 | Overhaul GCM introduction | * Remove bibtex from GCM introduction. This is covered in docs/source/cite.rst.
* Add 'Typical usage' section in GCM introduction | null | 2022-09-05 14:12:41+00:00 | 2022-11-02 10:01:42+00:00 | docs/source/user_guide/gcm_based_inference/introduction.rst | Introduction
============
Graphical causal model-based inference, or GCM-based inference for short, is an experimental addition to DoWhy that
currently works separately from DoWhy's main API. Its experimental status also means that its API may
undergo breaking changes in the future. It will eventually form part of a joint
`new API <https://github.com/py-why/dowhy/wiki/API-proposal-for-v1>`_. We welcome your comments.
The ``dowhy.gcm`` package provides a variety of ways to answer causal questions, and we'll go through them in detail in
section :doc:`answering_causal_questions/index`. However, before diving into them, let's understand
the basic building blocks and usage patterns the package is built upon.
The basic building blocks
^^^^^^^^^^^^^^^^^^^^^^^^^
All main features of the GCM-based inference in DoWhy are built around the concept of **graphical causal models**. A
graphical causal model consists of a causal directed acyclic graph (DAG) of variables and a **causal mechanism** for
each of the variables. A causal mechanism defines the conditional distribution of a variable given its parents in the
graph, or, in case of root node variables, simply its distribution.
The most general case of a GCM is a **probabilistic causal model** (PCM), where causal mechanisms are defined by
**conditional stochastic models** and **stochastic models**. In the ``dowhy.gcm`` package, these are represented by
:class:`~ProbabilisticCausalModel`, :class:`~ConditionalStochasticModel`, and :class:`~StochasticModel`.
.. image:: pcm.png
:width: 80%
:align: center
|
In practical terms however, we often use **structural causal models** (SCMs) to represent our GCMs,
and the causal mechanisms are defined by **functional causal models** (FCMs) for non-root nodes and **stochastic
models** for root nodes. An SCM implements the same traits as a PCM, but on top of that, its FCMs allow us to
reason *further* about its data generation process based on parents and noise, and hence, allow us e.g. to compute
counterfactuals.
.. image:: scm.png
:width: 80%
:align: center
|
To keep this introduction simple, we will stick with SCMs for now.
As mentioned above, a causal mechanism describes how the values of a node are influenced by the values of its parent
nodes. We will dive much deeper into the details of causal mechanisms and their meaning in section
:doc:`customizing_model_assignment`. But for this introduction, we will treat them as an opaque thing that is needed
to answer causal questions. With that in mind, the typical steps involved in answering a causal question are:
1. **Modeling cause-effect relationships as a GCM (causal graph + causal mechanisms):**
::
causal_model = StructuralCausalModel(nx.DiGraph([('X', 'Y'), ('Y', 'Z')])) # X -> Y -> Z
auto.assign_causal_mechanisms(causal_model, data)
Or manually assign causal mechanisms:
::
causal_model.set_causal_mechanism('X', EmpiricalDistribution())
causal_model.set_causal_mechanism('Y', AdditiveNoiseModel(create_linear_regressor()))
causal_model.set_causal_mechanism('Z', AdditiveNoiseModel(create_linear_regressor()))
2. **Fitting the GCM to the data:**
::
fit(causal_model, data)
3. **Answering a causal query based on the GCM:**
::
results = <causal_query>(causal_model, ...)
Where ``<causal_query>`` can be one of multiple functions explained in
:doc:`answering_causal_questions/index`.
Let's look at each of these steps in more detail.
Step 1: Modeling cause-effect relationships as a structural causal model (SCM)
------------------------------------------------------------------------------
The first step is to model the cause-effect relationships between variables relevant
to our use case. We do that in form of a causal graph. A causal graph is a directed acyclic
graph (DAG) where an edge X→Y implies that X causes Y. Statistically, a causal graph encodes the
conditional independence relations between variables. Using the `networkx <https://networkx
.github.io/>`__ library, we can create causal graphs. In the snippet below, we create a chain
X→Y→Z:
>>> import networkx as nx
>>> causal_graph = nx.DiGraph([('X', 'Y'), ('Y', 'Z')])
To answer causal questions using causal graphs, we also have to know the nature of underlying
data-generating process of variables. A causal graph by itself, being a diagram, does not have
any information about the data-generating process. To introduce this data-generating process, we use an SCM that's
built on top of our causal graph:
>>> from dowhy import gcm
>>> causal_model = gcm.StructuralCausalModel(causal_graph)
At this point we would normally load our dataset. For this introduction, we generate
some synthetic data instead. The API takes data in form of Pandas DataFrames:
>>> import numpy as np, pandas as pd
>>> X = np.random.normal(loc=0, scale=1, size=1000)
>>> Y = 2 * X + np.random.normal(loc=0, scale=1, size=1000)
>>> Z = 3 * Y + np.random.normal(loc=0, scale=1, size=1000)
>>> data = pd.DataFrame(data=dict(X=X, Y=Y, Z=Z))
>>> data.head()
X Y Z
0 -2.253500 -3.638579 -10.370047
1 -1.078337 -2.114581 -6.028030
2 -0.962719 -2.157896 -5.750563
3 -0.300316 -0.440721 -2.619954
4 0.127419 0.158185 1.555927
Note how the columns X, Y, Z correspond to our nodes X, Y, Z in the graph constructed above. We can also see how the
values of X influence the values of Y and how the values of Y influence the values of Z in that data set.
The causal model created above allows us now to assign causal mechanisms to each node in the form of functional causal
models. Here, these mechanisms can either be assigned manually if, for instance, prior knowledge about certain causal
relationships is known, or they can be assigned automatically using the :mod:`~dowhy.gcm.auto` module. For the latter,
we simply call:
>>> gcm.auto.assign_causal_mechanisms(causal_model, data)
In case we want to have more control over the assigned mechanisms, we can do this manually as well. For instance, we
can assign an empirical distribution to the root node X and linear additive noise models to nodes Y and Z:
>>> causal_model.set_causal_mechanism('X', gcm.EmpiricalDistribution())
>>> causal_model.set_causal_mechanism('Y', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
>>> causal_model.set_causal_mechanism('Z', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
Section :doc:`customizing_model_assignment` will go into more detail on how one can even define a completely customized
model or add their own implementation.
In the real world, the data comes as an opaque stream of values, where we typically don't know how one
variable influences another. The graphical causal models can help us to deconstruct these causal
relationships again, even though we didn't know them before.
Step 2: Fitting the SCM to the data
-----------------------------------
With the data at hand and the graph constructed earlier, we can now train the SCM using ``fit``:
>>> gcm.fit(causal_model, data)
Fitting means that we learn the generative models of the variables in the SCM according to the data.
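As an optional sanity check, we could also draw new samples from the fitted model and compare them with the
observed data. A minimal sketch, assuming the model fitted above:
>>> generated_data = gcm.draw_samples(causal_model, num_samples=100)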
Step 3: Answering a causal query based on the SCM
-------------------------------------------------
The last step, answering a causal question, is our actual goal. E.g. we could ask the question:
What will happen to the variable Z if I intervene on Y?
This can be done via the ``interventional_samples`` function. Here's how:
>>> samples = gcm.interventional_samples(causal_model,
>>> {'Y': lambda y: 2.34 },
>>> num_samples_to_draw=1000)
>>> samples.head()
X Y Z
0 1.186229 6.918607 20.682375
1 -0.758809 -0.749365 -2.530045
2 -1.177379 -5.678514 -17.110836
3 -1.211356 -2.152073 -6.212703
4 -0.100224 -0.285047 0.256471
This intervention says: "I'll ignore any causal effects of X on Y, and set every value of Y
to 2.34." So the distribution of X will remain unchanged, whereas values of Y will be at a fixed
value and Z will respond according to its causal model.
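Interventions are not restricted to constants. Since an intervention is specified as a function of the variable's
current value, we could, for example, also apply a hypothetical shift intervention that adds 0.5 to Y (a sketch,
assuming the same model as above):
>>> shifted_samples = gcm.interventional_samples(causal_model,
>>> {'Y': lambda y: y + 0.5},
>>> num_samples_to_draw=1000)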
With this knowledge, we can now dive deep into the meaning and usages of causal queries in section
:doc:`answering_causal_questions/index`.
| Introduction
============
Graphical causal model-based inference, or GCM-based inference for short, is an experimental addition to DoWhy, that
currently works separately from DoWhy's main API. Its experimental status also means that its API may
undergo breaking changes in the future. It will be forming a part of a joint,
`new API <https://github.com/py-why/dowhy/wiki/API-proposal-for-v1>`_. We welcome your comments.
The ``dowhy.gcm`` package provides a variety of ways to answer causal questions and we'll go through them in detail in
section :doc:`answering_causal_questions/index`. However, before diving into them, let's understand
the basic building blocks and usage patterns it is built upon.
The basic building blocks
^^^^^^^^^^^^^^^^^^^^^^^^^
All main features of the GCM-based inference in DoWhy are built around the concept of **graphical causal models**. A
graphical causal model consists of a causal directed acyclic graph (DAG) of variables and a **causal mechanism** for
each of the variables. A causal mechanism defines the conditional distribution of a variable given its parents in the
graph, or, in case of root node variables, simply its distribution.
The most general case of a GCM is a **probabilistic causal model** (PCM), where causal mechanisms are defined by
**conditional stochastic models** and **stochastic models**. In the ``dowhy.gcm`` package, these are represented by
:class:`~ProbabilisticCausalModel`, :class:`~ConditionalStochasticModel`, and :class:`~StochasticModel`.
.. image:: pcm.png
:width: 80%
:align: center
|
In practical terms however, we often use **structural causal models** (SCMs) to represent our GCMs,
and the causal mechanisms are defined by **functional causal models** (FCMs) for non-root nodes and **stochastic
models** for root nodes. An SCM implements the same traits as a PCM, but on top of that, its FCMs allow us to
reason *further* about its data generation process based on parents and noise, and hence, allow us e.g. to compute
counterfactuals.
.. image:: scm.png
:width: 80%
:align: center
|
To keep this introduction simple, we will stick with SCMs for now.
As mentioned above, a causal mechanism describes how the values of a node are influenced by the values of its parent
nodes. We will dive much deeper into the details of causal mechanisms and their meaning in section
:doc:`customizing_model_assignment`. But for this introduction, we will treat them as an opaque thing that is needed
to answer causal questions. With that in mind, the typical steps involved in answering a causal question are:
1. **Modeling cause-effect relationships as a GCM (causal graph + causal mechanisms):**
::
causal_model = StructuralCausalModel(nx.DiGraph([('X', 'Y'), ('Y', 'Z')])) # X -> Y -> Z
auto.assign_causal_mechanisms(causal_model, data)
Or manually assign causal mechanisms:
::
causal_model.set_causal_mechanism('X', EmpiricalDistribution())
causal_model.set_causal_mechanism('Y', AdditiveNoiseModel(create_linear_regressor()))
causal_model.set_causal_mechanism('Z', AdditiveNoiseModel(create_linear_regressor()))
2. **Fitting the GCM to the data:**
::
fit(causal_model, data)
3. **Answering a causal query based on the GCM:**
::
results = <causal_query>(causal_model, ...)
Where ``<causal_query>`` can be one of multiple functions explained in
:doc:`answering_causal_questions/index`.
Let's look at each of these steps in more detail.
Step 1: Modeling cause-effect relationships as a structural causal model (SCM)
------------------------------------------------------------------------------
The first step is to model the cause-effect relationships between variables relevant
to our use case. We do that in form of a causal graph. A causal graph is a directed acyclic
graph (DAG) where an edge X→Y implies that X causes Y. Statistically, a causal graph encodes the
conditional independence relations between variables. Using the `networkx <https://networkx
.github.io/>`__ library, we can create causal graphs. In the snippet below, we create a chain
X→Y→Z:
>>> import networkx as nx
>>> causal_graph = nx.DiGraph([('X', 'Y'), ('Y', 'Z')])
To answer causal questions using causal graphs, we also have to know the nature of underlying
data-generating process of variables. A causal graph by itself, being a diagram, does not have
any information about the data-generating process. To introduce this data-generating process, we use an SCM that's
built on top of our causal graph:
>>> from dowhy import gcm
>>> causal_model = gcm.StructuralCausalModel(causal_graph)
At this point we would normally load our dataset. For this introduction, we generate
some synthetic data instead. The API takes data in form of Pandas DataFrames:
>>> import numpy as np, pandas as pd
>>> X = np.random.normal(loc=0, scale=1, size=1000)
>>> Y = 2 * X + np.random.normal(loc=0, scale=1, size=1000)
>>> Z = 3 * Y + np.random.normal(loc=0, scale=1, size=1000)
>>> data = pd.DataFrame(data=dict(X=X, Y=Y, Z=Z))
>>> data.head()
X Y Z
0 -2.253500 -3.638579 -10.370047
1 -1.078337 -2.114581 -6.028030
2 -0.962719 -2.157896 -5.750563
3 -0.300316 -0.440721 -2.619954
4 0.127419 0.158185 1.555927
Note how the columns X, Y, Z correspond to our nodes X, Y, Z in the graph constructed above. We can also see how the
values of X influence the values of Y and how the values of Y influence the values of Z in that data set.
The causal model created above allows us now to assign causal mechanisms to each node in the form of functional causal
models. Here, these mechanisms can either be assigned manually if, for instance, prior knowledge about certain causal
relationships is known, or they can be assigned automatically using the :mod:`~dowhy.gcm.auto` module. For the latter,
we simply call:
>>> gcm.auto.assign_causal_mechanisms(causal_model, data)
In case we want to have more control over the assigned mechanisms, we can do this manually as well. For instance, we
can assign an empirical distribution to the root node X and linear additive noise models to nodes Y and Z:
>>> causal_model.set_causal_mechanism('X', gcm.EmpiricalDistribution())
>>> causal_model.set_causal_mechanism('Y', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
>>> causal_model.set_causal_mechanism('Z', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
Section :doc:`customizing_model_assignment` will go into more detail on how one can even define a completely customized
model or add their own implementation.
In the real world, the data comes as an opaque stream of values, where we typically don't know how one
variable influences another. The graphical causal models can help us to deconstruct these causal
relationships again, even though we didn't know them before.
Step 2: Fitting the SCM to the data
-----------------------------------
With the data at hand and the graph constructed earlier, we can now train the SCM using ``fit``:
>>> gcm.fit(causal_model, data)
Fitting means that we learn the generative models of the variables in the SCM according to the data.
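As an optional sanity check, we could also draw new samples from the fitted model and compare them with the
observed data. A minimal sketch, assuming the model fitted above:
>>> generated_data = gcm.draw_samples(causal_model, num_samples=100)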
Step 3: Answering a causal query based on the SCM
-------------------------------------------------
The last step, answering a causal question, is our actual goal. E.g. we could ask the question:
What will happen to the variable Z if I intervene on Y?
This can be done via the ``interventional_samples`` function. Here's how:
>>> samples = gcm.interventional_samples(causal_model,
>>> {'Y': lambda y: 2.34 },
>>> num_samples_to_draw=1000)
>>> samples.head()
X Y Z
0 1.186229 6.918607 20.682375
1 -0.758809 -0.749365 -2.530045
2 -1.177379 -5.678514 -17.110836
3 -1.211356 -2.152073 -6.212703
4 -0.100224 -0.285047 0.256471
This intervention says: "I'll ignore any causal effects of X on Y, and set every value of Y
to 2.34." So the distribution of X will remain unchanged, whereas values of Y will be at a fixed
value and Z will respond according to its causal model.
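Interventions are not restricted to constants. Since an intervention is specified as a function of the variable's
current value, we could, for example, also apply a hypothetical shift intervention that adds 0.5 to Y (a sketch,
assuming the same model as above):
>>> shifted_samples = gcm.interventional_samples(causal_model,
>>> {'Y': lambda y: y + 0.5},
>>> num_samples_to_draw=1000)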
These are the basic steps that need to happen. While we can run these steps explicitly, often they get
executed as part of other steps, e.g. when fitting and re-fitting as part of computing confidence
intervals. The next section therefore dives into a more typical usage pattern of the ``dowhy.gcm`` package.
Typical usage of the ``dowhy.gcm`` package
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In practice, we may not execute the steps we've learned above explicitly and in this order, but they
get called by other abstractions. E.g. we usually want to use confidence intervals, when answering
causal questions, to quantify our confidence in the results. In this case, ``fit`` will be called on
our behalf, but we won't call it directly.
Modeling an SCM
---------------
The graph modeling is the same as in `Step 1: Modeling cause-effect relationships as a structural
causal model (SCM)`_. First we construct the causal graph:
>>> causal_model = gcm.StructuralCausalModel(nx.DiGraph([('X', 'Y'), ('Y', 'Z')])) # X → Y → Z
Answering a causal query with confidence intervals
--------------------------------------------------
When we answer a causal query without computing its confidence intervals, what we effectively get
are point estimates. These are not very useful when trying to assess the confidence in our results.
Instead of calling ``fit`` explicitly, we can achieve its execution by going through the API for
confidence intervals. Let's say we wanted to understand the direct arrow strengths between nodes and
quantify our confidence in those results. This is how we would do it:
>>> strength_median, strength_intervals = gcm.confidence_intervals(
>>> gcm.bootstrap_training_and_sampling(gcm.direct_arrow_strength,
>>> causal_model,
>>> bootstrap_training_data=data,
>>> target_node='Y'))
>>> strength_median, strength_intervals
({('X', 'Y'): 45.90886398636573, ('Z', 'Y'): 15.47129383737619},
{('X', 'Y'): array([42.88319632, 50.43890079]), ('Z', 'Y'): array([13.44202416, 17.74266107])})
In this case, ``fit`` will be called within ``bootstrap_training_and_sampling``, so there is no need
to do this ourselves.
The calling sequence of ``confidence_intervals`` and ``bootstrap_training_and_sampling`` is not
trivial, but exploits the fact that our APIs are composable. If not everything makes sense
to you yet, we recommend simply treating this calling sequence as a ready-to-use construct. Read:
"Get confidence intervals via bootstrapping training and sampling of direct arrow strength". For a
deeper understanding of this construct, see section :doc:`estimating_confidence_intervals`.
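As a rough, non-runnable sketch, the general pattern is the following, where ``<causal_query>`` and
``<query_arguments>`` are placeholders for the query function and its arguments:
>>> medians, intervals = gcm.confidence_intervals(
>>> gcm.bootstrap_training_and_sampling(<causal_query>,
>>> causal_model,
>>> bootstrap_training_data=data,
>>> <query_arguments>))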
With this knowledge, we can now dive deep into the meaning and usages of causal queries in section
:doc:`answering_causal_questions/index`.
| petergtz | b43ce50d7cf58420b88605531b34b5c86f905112 | 560b3460aaf1106f7a053f99ede24cfed094db6f | I am a bit concerned that most structures will be `REJECTED` due to the statistical issues with the independence tests, even if the structures are correct. | bloebp | 321 |
py-why/dowhy | 625 | Overhaul GCM introduction | * Remove bibtex from GCM introduction. This is covered in docs/source/cite.rst.
* Add 'Typical usage' section in GCM introduction | null | 2022-09-05 14:12:41+00:00 | 2022-11-02 10:01:42+00:00 | docs/source/user_guide/gcm_based_inference/introduction.rst | Introduction
============
Graphical causal model-based inference, or GCM-based inference for short, is an experimental addition to DoWhy, that
currently works separately from DoWhy's main API. Its experimental status also means that its API may
undergo breaking changes in the future. It will be forming a part of a joint,
`new API <https://github.com/py-why/dowhy/wiki/API-proposal-for-v1>`_. We welcome your comments.
The ``dowhy.gcm`` package provides a variety of ways to answer causal questions and we'll go through them in detail in
section :doc:`answering_causal_questions/index`. However, before diving into them, let's understand
the basic building blocks and usage patterns it is built upon.
The basic building blocks
^^^^^^^^^^^^^^^^^^^^^^^^^
All main features of the GCM-based inference in DoWhy are built around the concept of **graphical causal models**. A
graphical causal model consists of a causal directed acyclic graph (DAG) of variables and a **causal mechanism** for
each of the variables. A causal mechanism defines the conditional distribution of a variable given its parents in the
graph, or, in case of root node variables, simply its distribution.
The most general case of a GCM is a **probabilistic causal model** (PCM), where causal mechanisms are defined by
**conditional stochastic models** and **stochastic models**. In the ``dowhy.gcm`` package, these are represented by
:class:`~ProbabilisticCausalModel`, :class:`~ConditionalStochasticModel`, and :class:`~StochasticModel`.
.. image:: pcm.png
:width: 80%
:align: center
|
In practical terms however, we often use **structural causal models** (SCMs) to represent our GCMs,
and the causal mechanisms are defined by **functional causal models** (FCMs) for non-root nodes and **stochastic
models** for root nodes. An SCM implements the same traits as a PCM, but on top of that, its FCMs allow us to
reason *further* about its data generation process based on parents and noise, and hence, allow us e.g. to compute
counterfactuals.
.. image:: scm.png
:width: 80%
:align: center
|
To keep this introduction simple, we will stick with SCMs for now.
As mentioned above, a causal mechanism describes how the values of a node are influenced by the values of its parent
nodes. We will dive much deeper into the details of causal mechanisms and their meaning in section
:doc:`customizing_model_assignment`. But for this introduction, we will treat them as an opaque thing that is needed
to answer causal questions. With that in mind, the typical steps involved in answering a causal question are:
1. **Modeling cause-effect relationships as a GCM (causal graph + causal mechanisms):**
::
causal_model = StructuralCausalModel(nx.DiGraph([('X', 'Y'), ('Y', 'Z')])) # X -> Y -> Z
auto.assign_causal_mechanisms(causal_model, data)
Or manually assign causal mechanisms:
::
causal_model.set_causal_mechanism('X', EmpiricalDistribution())
causal_model.set_causal_mechanism('Y', AdditiveNoiseModel(create_linear_regressor()))
causal_model.set_causal_mechanism('Z', AdditiveNoiseModel(create_linear_regressor()))
2. **Fitting the GCM to the data:**
::
fit(causal_model, data)
3. **Answering a causal query based on the GCM:**
::
results = <causal_query>(causal_model, ...)
Where ``<causal_query>`` can be one of multiple functions explained in
:doc:`answering_causal_questions/index`.
Let's look at each of these steps in more detail.
Step 1: Modeling cause-effect relationships as a structural causal model (SCM)
------------------------------------------------------------------------------
The first step is to model the cause-effect relationships between variables relevant
to our use case. We do that in form of a causal graph. A causal graph is a directed acyclic
graph (DAG) where an edge X→Y implies that X causes Y. Statistically, a causal graph encodes the
conditional independence relations between variables. Using the `networkx <https://networkx
.github.io/>`__ library, we can create causal graphs. In the snippet below, we create a chain
X→Y→Z:
>>> import networkx as nx
>>> causal_graph = nx.DiGraph([('X', 'Y'), ('Y', 'Z')])
To answer causal questions using causal graphs, we also have to know the nature of underlying
data-generating process of variables. A causal graph by itself, being a diagram, does not have
any information about the data-generating process. To introduce this data-generating process, we use an SCM that's
built on top of our causal graph:
>>> from dowhy import gcm
>>> causal_model = gcm.StructuralCausalModel(causal_graph)
At this point we would normally load our dataset. For this introduction, we generate
some synthetic data instead. The API takes data in form of Pandas DataFrames:
>>> import numpy as np, pandas as pd
>>> X = np.random.normal(loc=0, scale=1, size=1000)
>>> Y = 2 * X + np.random.normal(loc=0, scale=1, size=1000)
>>> Z = 3 * Y + np.random.normal(loc=0, scale=1, size=1000)
>>> data = pd.DataFrame(data=dict(X=X, Y=Y, Z=Z))
>>> data.head()
X Y Z
0 -2.253500 -3.638579 -10.370047
1 -1.078337 -2.114581 -6.028030
2 -0.962719 -2.157896 -5.750563
3 -0.300316 -0.440721 -2.619954
4 0.127419 0.158185 1.555927
Note how the columns X, Y, Z correspond to our nodes X, Y, Z in the graph constructed above. We can also see how the
values of X influence the values of Y and how the values of Y influence the values of Z in that data set.
The causal model created above allows us now to assign causal mechanisms to each node in the form of functional causal
models. Here, these mechanisms can either be assigned manually if, for instance, prior knowledge about certain causal
relationships is known, or they can be assigned automatically using the :mod:`~dowhy.gcm.auto` module. For the latter,
we simply call:
>>> gcm.auto.assign_causal_mechanisms(causal_model, data)
In case we want to have more control over the assigned mechanisms, we can do this manually as well. For instance, we
can assign an empirical distribution to the root node X and linear additive noise models to nodes Y and Z:
>>> causal_model.set_causal_mechanism('X', gcm.EmpiricalDistribution())
>>> causal_model.set_causal_mechanism('Y', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
>>> causal_model.set_causal_mechanism('Z', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
Section :doc:`customizing_model_assignment` will go into more detail on how one can even define a completely customized
model or add their own implementation.
In the real world, the data comes as an opaque stream of values, where we typically don't know how one
variable influences another. The graphical causal models can help us to deconstruct these causal
relationships again, even though we didn't know them before.
Step 2: Fitting the SCM to the data
-----------------------------------
With the data at hand and the graph constructed earlier, we can now train the SCM using ``fit``:
>>> gcm.fit(causal_model, data)
Fitting means that we learn the generative models of the variables in the SCM according to the data.
Step 3: Answering a causal query based on the SCM
-------------------------------------------------
The last step, answering a causal question, is our actual goal. E.g. we could ask the question:
What will happen to the variable Z if I intervene on Y?
This can be done via the ``interventional_samples`` function. Here's how:
>>> samples = gcm.interventional_samples(causal_model,
>>> {'Y': lambda y: 2.34 },
>>> num_samples_to_draw=1000)
>>> samples.head()
X Y Z
0 1.186229 6.918607 20.682375
1 -0.758809 -0.749365 -2.530045
2 -1.177379 -5.678514 -17.110836
3 -1.211356 -2.152073 -6.212703
4 -0.100224 -0.285047 0.256471
This intervention says: "I'll ignore any causal effects of X on Y, and set every value of Y
to 2.34." So the distribution of X will remain unchanged, whereas values of Y will be at a fixed
value and Z will respond according to its causal model.
With this knowledge, we can now dive deep into the meaning and usages of causal queries in section
:doc:`answering_causal_questions/index`.
| Introduction
============
Graphical causal model-based inference, or GCM-based inference for short, is an experimental addition to DoWhy, that
currently works separately from DoWhy's main API. Its experimental status also means that its API may
undergo breaking changes in the future. It will be forming a part of a joint,
`new API <https://github.com/py-why/dowhy/wiki/API-proposal-for-v1>`_. We welcome your comments.
The ``dowhy.gcm`` package provides a variety of ways to answer causal questions and we'll go through them in detail in
section :doc:`answering_causal_questions/index`. However, before diving into them, let's understand
the basic building blocks and usage patterns it is built upon.
The basic building blocks
^^^^^^^^^^^^^^^^^^^^^^^^^
All main features of the GCM-based inference in DoWhy are built around the concept of **graphical causal models**. A
graphical causal model consists of a causal directed acyclic graph (DAG) of variables and a **causal mechanism** for
each of the variables. A causal mechanism defines the conditional distribution of a variable given its parents in the
graph, or, in case of root node variables, simply its distribution.
The most general case of a GCM is a **probabilistic causal model** (PCM), where causal mechanisms are defined by
**conditional stochastic models** and **stochastic models**. In the ``dowhy.gcm`` package, these are represented by
:class:`~ProbabilisticCausalModel`, :class:`~ConditionalStochasticModel`, and :class:`~StochasticModel`.
.. image:: pcm.png
:width: 80%
:align: center
|
In practical terms however, we often use **structural causal models** (SCMs) to represent our GCMs,
and the causal mechanisms are defined by **functional causal models** (FCMs) for non-root nodes and **stochastic
models** for root nodes. An SCM implements the same traits as a PCM, but on top of that, its FCMs allow us to
reason *further* about its data generation process based on parents and noise, and hence, allow us e.g. to compute
counterfactuals.
.. image:: scm.png
:width: 80%
:align: center
|
To keep this introduction simple, we will stick with SCMs for now.
As mentioned above, a causal mechanism describes how the values of a node are influenced by the values of its parent
nodes. We will dive much deeper into the details of causal mechanisms and their meaning in section
:doc:`customizing_model_assignment`. But for this introduction, we will treat them as an opaque thing that is needed
to answer causal questions. With that in mind, the typical steps involved in answering a causal question are:
1. **Modeling cause-effect relationships as a GCM (causal graph + causal mechanisms):**
::
causal_model = StructuralCausalModel(nx.DiGraph([('X', 'Y'), ('Y', 'Z')])) # X -> Y -> Z
auto.assign_causal_mechanisms(causal_model, data)
Or manually assign causal mechanisms:
::
causal_model.set_causal_mechanism('X', EmpiricalDistribution())
causal_model.set_causal_mechanism('Y', AdditiveNoiseModel(create_linear_regressor()))
causal_model.set_causal_mechanism('Z', AdditiveNoiseModel(create_linear_regressor()))
2. **Fitting the GCM to the data:**
::
fit(causal_model, data)
3. **Answering a causal query based on the GCM:**
::
results = <causal_query>(causal_model, ...)
Where ``<causal_query>`` can be one of multiple functions explained in
:doc:`answering_causal_questions/index`.
Let's look at each of these steps in more detail.
Step 1: Modeling cause-effect relationships as a structural causal model (SCM)
------------------------------------------------------------------------------
The first step is to model the cause-effect relationships between variables relevant
to our use case. We do that in form of a causal graph. A causal graph is a directed acyclic
graph (DAG) where an edge X→Y implies that X causes Y. Statistically, a causal graph encodes the
conditional independence relations between variables. Using the `networkx <https://networkx
.github.io/>`__ library, we can create causal graphs. In the snippet below, we create a chain
X→Y→Z:
>>> import networkx as nx
>>> causal_graph = nx.DiGraph([('X', 'Y'), ('Y', 'Z')])
To answer causal questions using causal graphs, we also have to know the nature of underlying
data-generating process of variables. A causal graph by itself, being a diagram, does not have
any information about the data-generating process. To introduce this data-generating process, we use an SCM that's
built on top of our causal graph:
>>> from dowhy import gcm
>>> causal_model = gcm.StructuralCausalModel(causal_graph)
At this point we would normally load our dataset. For this introduction, we generate
some synthetic data instead. The API takes data in form of Pandas DataFrames:
>>> import numpy as np, pandas as pd
>>> X = np.random.normal(loc=0, scale=1, size=1000)
>>> Y = 2 * X + np.random.normal(loc=0, scale=1, size=1000)
>>> Z = 3 * Y + np.random.normal(loc=0, scale=1, size=1000)
>>> data = pd.DataFrame(data=dict(X=X, Y=Y, Z=Z))
>>> data.head()
X Y Z
0 -2.253500 -3.638579 -10.370047
1 -1.078337 -2.114581 -6.028030
2 -0.962719 -2.157896 -5.750563
3 -0.300316 -0.440721 -2.619954
4 0.127419 0.158185 1.555927
Note how the columns X, Y, Z correspond to our nodes X, Y, Z in the graph constructed above. We can also see how the
values of X influence the values of Y and how the values of Y influence the values of Z in that data set.
The causal model created above allows us now to assign causal mechanisms to each node in the form of functional causal
models. Here, these mechanisms can either be assigned manually if, for instance, prior knowledge about certain causal
relationships is known, or they can be assigned automatically using the :mod:`~dowhy.gcm.auto` module. For the latter,
we simply call:
>>> gcm.auto.assign_causal_mechanisms(causal_model, data)
In case we want to have more control over the assigned mechanisms, we can do this manually as well. For instance, we
can assign an empirical distribution to the root node X and linear additive noise models to nodes Y and Z:
>>> causal_model.set_causal_mechanism('X', gcm.EmpiricalDistribution())
>>> causal_model.set_causal_mechanism('Y', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
>>> causal_model.set_causal_mechanism('Z', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
Section :doc:`customizing_model_assignment` will go into more detail on how one can even define a completely customized
model or add their own implementation.
In the real world, the data comes as an opaque stream of values, where we typically don't know how one
variable influences another. The graphical causal models can help us to deconstruct these causal
relationships again, even though we didn't know them before.
Step 2: Fitting the SCM to the data
-----------------------------------
With the data at hand and the graph constructed earlier, we can now train the SCM using ``fit``:
>>> gcm.fit(causal_model, data)
Fitting means that we learn the generative models of the variables in the SCM according to the data.
Step 3: Answering a causal query based on the SCM
-------------------------------------------------
The last step, answering a causal question, is our actual goal. E.g. we could ask the question:
What will happen to the variable Z if I intervene on Y?
This can be done via the ``interventional_samples`` function. Here's how:
>>> samples = gcm.interventional_samples(causal_model,
>>> {'Y': lambda y: 2.34 },
>>> num_samples_to_draw=1000)
>>> samples.head()
X Y Z
0 1.186229 6.918607 20.682375
1 -0.758809 -0.749365 -2.530045
2 -1.177379 -5.678514 -17.110836
3 -1.211356 -2.152073 -6.212703
4 -0.100224 -0.285047 0.256471
This intervention says: "I'll ignore any causal effects of X on Y, and set every value of Y
to 2.34." So the distribution of X will remain unchanged, whereas values of Y will be at a fixed
value and Z will respond according to its causal model.
These are the basic steps that need to happen. While we can run these steps explicitly, often they get
executed as part of other steps, e.g. when fitting and re-fitting as part of computing confidence
intervals. The next section therefore dives into a more typical usage pattern of the ``dowhy.gcm`` package.
Typical usage of the ``dowhy.gcm`` package
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In practice, we may not execute the steps we've learned above explicitly and in this order, but they
get called by other abstractions. E.g. we usually want to use confidence intervals, when answering
causal questions, to quantify our confidence in the results. In this case, ``fit`` will be called on
our behalf, but we won't call it directly.
Modeling an SCM
---------------
The graph modeling is the same as in `Step 1: Modeling cause-effect relationships as a structural
causal model (SCM)`_. First we construct the causal graph:
>>> causal_model = gcm.StructuralCausalModel(nx.DiGraph([('X', 'Y'), ('Y', 'Z')])) # X → Y → Z
Answering a causal query with confidence intervals
--------------------------------------------------
When we answer a causal query without computing its confidence intervals, what we effectively get
are point estimates. These are not very useful when trying to assess the confidence in our results.
Instead of calling ``fit`` explicitly, we can achieve its execution by going through the API for
confidence intervals. Let's say we wanted to understand the direct arrow strengths between nodes and
quantify our confidence in those results. This is how we would do it:
>>> strength_median, strength_intervals = gcm.confidence_intervals(
>>> gcm.bootstrap_training_and_sampling(gcm.direct_arrow_strength,
>>> causal_model,
>>> bootstrap_training_data=data,
>>> target_node='Y'))
>>> strength_median, strength_intervals
({('X', 'Y'): 45.90886398636573, ('Z', 'Y'): 15.47129383737619},
{('X', 'Y'): array([42.88319632, 50.43890079]), ('Z', 'Y'): array([13.44202416, 17.74266107])})
In this case, ``fit`` will be called within ``bootstrap_training_and_sampling``, so there is no need
to do this ourselves.
The calling sequence of ``confidence_intervals`` and ``bootstrap_training_and_sampling`` is not
trivial, but exploits the fact that our APIs are composable. If not everything makes sense
to you yet, we recommend simply treating this calling sequence as a ready-to-use construct. Read:
"Get confidence intervals via bootstrapping training and sampling of direct arrow strength". For a
deeper understanding of this construct, see section :doc:`estimating_confidence_intervals`.
With this knowledge, we can now dive deep into the meaning and usages of causal queries in section
:doc:`answering_causal_questions/index`.
| petergtz | b43ce50d7cf58420b88605531b34b5c86f905112 | 560b3460aaf1106f7a053f99ede24cfed094db6f | `quality` is by default `GOOD`. Or do you want to emphasize here that this is a parameter? | bloebp | 322 |
py-why/dowhy | 625 | Overhaul GCM introduction | * Remove bibtex from GCM introduction. This is covered in docs/source/cite.rst.
* Add 'Typical usage' section in GCM introduction | null | 2022-09-05 14:12:41+00:00 | 2022-11-02 10:01:42+00:00 | docs/source/user_guide/gcm_based_inference/introduction.rst | Introduction
============
Graphical causal model-based inference, or GCM-based inference for short, is an experimental addition to DoWhy, that
currently works separately from DoWhy's main API. Its experimental status also means that its API may
undergo breaking changes in the future. It will be forming a part of a joint,
`new API <https://github.com/py-why/dowhy/wiki/API-proposal-for-v1>`_. We welcome your comments.
The ``dowhy.gcm`` package provides a variety of ways to answer causal questions and we'll go through them in detail in
section :doc:`answering_causal_questions/index`. However, before diving into them, let's understand
the basic building blocks and usage patterns it is built upon.
The basic building blocks
^^^^^^^^^^^^^^^^^^^^^^^^^
All main features of the GCM-based inference in DoWhy are built around the concept of **graphical causal models**. A
graphical causal model consists of a causal directed acyclic graph (DAG) of variables and a **causal mechanism** for
each of the variables. A causal mechanism defines the conditional distribution of a variable given its parents in the
graph, or, in case of root node variables, simply its distribution.
The most general case of a GCM is a **probabilistic causal model** (PCM), where causal mechanisms are defined by
**conditional stochastic models** and **stochastic models**. In the ``dowhy.gcm`` package, these are represented by
:class:`~ProbabilisticCausalModel`, :class:`~ConditionalStochasticModel`, and :class:`~StochasticModel`.
.. image:: pcm.png
:width: 80%
:align: center
|
In practical terms however, we often use **structural causal models** (SCMs) to represent our GCMs,
and the causal mechanisms are defined by **functional causal models** (FCMs) for non-root nodes and **stochastic
models** for root nodes. An SCM implements the same traits as a PCM, but on top of that, its FCMs allow us to
reason *further* about its data generation process based on parents and noise, and hence, allow us e.g. to compute
counterfactuals.
.. image:: scm.png
:width: 80%
:align: center
|
To keep this introduction simple, we will stick with SCMs for now.
As mentioned above, a causal mechanism describes how the values of a node are influenced by the values of its parent
nodes. We will dive much deeper into the details of causal mechanisms and their meaning in section
:doc:`customizing_model_assignment`. But for this introduction, we will treat them as an opaque thing that is needed
to answer causal questions. With that in mind, the typical steps involved in answering a causal question are:
1. **Modeling cause-effect relationships as a GCM (causal graph + causal mechanisms):**
::
causal_model = StructuralCausalModel(nx.DiGraph([('X', 'Y'), ('Y', 'Z')])) # X -> Y -> Z
auto.assign_causal_mechanisms(causal_model, data)
Or manually assign causal mechanisms:
::
causal_model.set_causal_mechanism('X', EmpiricalDistribution())
causal_model.set_causal_mechanism('Y', AdditiveNoiseModel(create_linear_regressor()))
causal_model.set_causal_mechanism('Z', AdditiveNoiseModel(create_linear_regressor()))
2. **Fitting the GCM to the data:**
::
fit(causal_model, data)
3. **Answering a causal query based on the GCM:**
::
results = <causal_query>(causal_model, ...)
Where ``<causal_query>`` can be one of multiple functions explained in
:doc:`answering_causal_questions/index`.
Let's look at each of these steps in more detail.
Step 1: Modeling cause-effect relationships as a structural causal model (SCM)
------------------------------------------------------------------------------
The first step is to model the cause-effect relationships between variables relevant
to our use case. We do that in form of a causal graph. A causal graph is a directed acyclic
graph (DAG) where an edge X→Y implies that X causes Y. Statistically, a causal graph encodes the
conditional independence relations between variables. Using the `networkx <https://networkx
.github.io/>`__ library, we can create causal graphs. In the snippet below, we create a chain
X→Y→Z:
>>> import networkx as nx
>>> causal_graph = nx.DiGraph([('X', 'Y'), ('Y', 'Z')])
To answer causal questions using causal graphs, we also have to know the nature of underlying
data-generating process of variables. A causal graph by itself, being a diagram, does not have
any information about the data-generating process. To introduce this data-generating process, we use an SCM that's
built on top of our causal graph:
>>> from dowhy import gcm
>>> causal_model = gcm.StructuralCausalModel(causal_graph)
At this point we would normally load our dataset. For this introduction, we generate
some synthetic data instead. The API takes data in form of Pandas DataFrames:
>>> import numpy as np, pandas as pd
>>> X = np.random.normal(loc=0, scale=1, size=1000)
>>> Y = 2 * X + np.random.normal(loc=0, scale=1, size=1000)
>>> Z = 3 * Y + np.random.normal(loc=0, scale=1, size=1000)
>>> data = pd.DataFrame(data=dict(X=X, Y=Y, Z=Z))
>>> data.head()
X Y Z
0 -2.253500 -3.638579 -10.370047
1 -1.078337 -2.114581 -6.028030
2 -0.962719 -2.157896 -5.750563
3 -0.300316 -0.440721 -2.619954
4 0.127419 0.158185 1.555927
Note how the columns X, Y, Z correspond to our nodes X, Y, Z in the graph constructed above. We can also see how the
values of X influence the values of Y and how the values of Y influence the values of Z in that data set.
The causal model created above allows us now to assign causal mechanisms to each node in the form of functional causal
models. Here, these mechanisms can either be assigned manually if, for instance, prior knowledge about certain causal
relationships is known, or they can be assigned automatically using the :mod:`~dowhy.gcm.auto` module. For the latter,
we simply call:
>>> gcm.auto.assign_causal_mechanisms(causal_model, data)
In case we want to have more control over the assigned mechanisms, we can do this manually as well. For instance, we
can assign an empirical distribution to the root node X and linear additive noise models to nodes Y and Z:
>>> causal_model.set_causal_mechanism('X', gcm.EmpiricalDistribution())
>>> causal_model.set_causal_mechanism('Y', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
>>> causal_model.set_causal_mechanism('Z', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
Section :doc:`customizing_model_assignment` will go into more detail on how one can even define a completely customized
model or add their own implementation.
In the real world, the data comes as an opaque stream of values, where we typically don't know how one
variable influences another. The graphical causal models can help us to deconstruct these causal
relationships again, even though we didn't know them before.
Step 2: Fitting the SCM to the data
-----------------------------------
With the data at hand and the graph constructed earlier, we can now train the SCM using ``fit``:
>>> gcm.fit(causal_model, data)
Fitting means that we learn the generative models of the variables in the SCM according to the data.
Step 3: Answering a causal query based on the SCM
-------------------------------------------------
The last step, answering a causal question, is our actual goal. E.g. we could ask the question:
What will happen to the variable Z if I intervene on Y?
This can be done via the ``interventional_samples`` function. Here's how:
>>> samples = gcm.interventional_samples(causal_model,
>>> {'Y': lambda y: 2.34 },
>>> num_samples_to_draw=1000)
>>> samples.head()
X Y Z
0 1.186229 6.918607 20.682375
1 -0.758809 -0.749365 -2.530045
2 -1.177379 -5.678514 -17.110836
3 -1.211356 -2.152073 -6.212703
4 -0.100224 -0.285047 0.256471
This intervention says: "I'll ignore any causal effects of X on Y, and set every value of Y
to 2.34." So the distribution of X will remain unchanged, whereas values of Y will be at a fixed
value and Z will respond according to its causal model.
With this knowledge, we can now dive deep into the meaning and usages of causal queries in section
:doc:`answering_causal_questions/index`.
| Introduction
============
Graphical causal model-based inference, or GCM-based inference for short, is an experimental addition to DoWhy, that
currently works separately from DoWhy's main API. Its experimental status also means that its API may
undergo breaking changes in the future. It will be forming a part of a joint,
`new API <https://github.com/py-why/dowhy/wiki/API-proposal-for-v1>`_. We welcome your comments.
The ``dowhy.gcm`` package provides a variety of ways to answer causal questions and we'll go through them in detail in
section :doc:`answering_causal_questions/index`. However, before diving into them, let's understand
the basic building blocks and usage patterns it is built upon.
The basic building blocks
^^^^^^^^^^^^^^^^^^^^^^^^^
All main features of the GCM-based inference in DoWhy are built around the concept of **graphical causal models**. A
graphical causal model consists of a causal directed acyclic graph (DAG) of variables and a **causal mechanism** for
each of the variables. A causal mechanism defines the conditional distribution of a variable given its parents in the
graph, or, in case of root node variables, simply its distribution.
The most general case of a GCM is a **probabilistic causal model** (PCM), where causal mechanisms are defined by
**conditional stochastic models** and **stochastic models**. In the ``dowhy.gcm`` package, these are represented by
:class:`~ProbabilisticCausalModel`, :class:`~ConditionalStochasticModel`, and :class:`~StochasticModel`.
.. image:: pcm.png
:width: 80%
:align: center
|
In practical terms however, we often use **structural causal models** (SCMs) to represent our GCMs,
and the causal mechanisms are defined by **functional causal models** (FCMs) for non-root nodes and **stochastic
models** for root nodes. An SCM implements the same traits as a PCM, but on top of that, its FCMs allow us to
reason *further* about its data generation process based on parents and noise, and hence, allow us e.g. to compute
counterfactuals.
.. image:: scm.png
:width: 80%
:align: center
|
To keep this introduction simple, we will stick with SCMs for now.
As mentioned above, a causal mechanism describes how the values of a node are influenced by the values of its parent
nodes. We will dive much deeper into the details of causal mechanisms and their meaning in section
:doc:`customizing_model_assignment`. But for this introduction, we will treat them as an opaque thing that is needed
to answer causal questions. With that in mind, the typical steps involved in answering a causal question are:
1. **Modeling cause-effect relationships as a GCM (causal graph + causal mechanisms):**
::
causal_model = StructuralCausalModel(nx.DiGraph([('X', 'Y'), ('Y', 'Z')])) # X -> Y -> Z
auto.assign_causal_mechanisms(causal_model, data)
Or manually assign causal mechanisms:
::
causal_model.set_causal_mechanism('X', EmpiricalDistribution())
causal_model.set_causal_mechanism('Y', AdditiveNoiseModel(create_linear_regressor()))
causal_model.set_causal_mechanism('Z', AdditiveNoiseModel(create_linear_regressor()))
2. **Fitting the GCM to the data:**
::
fit(causal_model, data)
3. **Answering a causal query based on the GCM:**
::
results = <causal_query>(causal_model, ...)
Where ``<causal_query>`` can be one of multiple functions explained in
:doc:`answering_causal_questions/index`.
Let's look at each of these steps in more detail.
Step 1: Modeling cause-effect relationships as a structural causal model (SCM)
------------------------------------------------------------------------------
The first step is to model the cause-effect relationships between variables relevant
to our use case. We do that in form of a causal graph. A causal graph is a directed acyclic
graph (DAG) where an edge X→Y implies that X causes Y. Statistically, a causal graph encodes the
conditional independence relations between variables. Using the `networkx <https://networkx
.github.io/>`__ library, we can create causal graphs. In the snippet below, we create a chain
X→Y→Z:
>>> import networkx as nx
>>> causal_graph = nx.DiGraph([('X', 'Y'), ('Y', 'Z')])
To answer causal questions using causal graphs, we also have to know the nature of underlying
data-generating process of variables. A causal graph by itself, being a diagram, does not have
any information about the data-generating process. To introduce this data-generating process, we use an SCM that's
built on top of our causal graph:
>>> from dowhy import gcm
>>> causal_model = gcm.StructuralCausalModel(causal_graph)
At this point we would normally load our dataset. For this introduction, we generate
some synthetic data instead. The API takes data in form of Pandas DataFrames:
>>> import numpy as np, pandas as pd
>>> X = np.random.normal(loc=0, scale=1, size=1000)
>>> Y = 2 * X + np.random.normal(loc=0, scale=1, size=1000)
>>> Z = 3 * Y + np.random.normal(loc=0, scale=1, size=1000)
>>> data = pd.DataFrame(data=dict(X=X, Y=Y, Z=Z))
>>> data.head()
X Y Z
0 -2.253500 -3.638579 -10.370047
1 -1.078337 -2.114581 -6.028030
2 -0.962719 -2.157896 -5.750563
3 -0.300316 -0.440721 -2.619954
4 0.127419 0.158185 1.555927
Note how the columns X, Y, Z correspond to our nodes X, Y, Z in the graph constructed above. We can also see how the
values of X influence the values of Y and how the values of Y influence the values of Z in that data set.
The causal model created above allows us now to assign causal mechanisms to each node in the form of functional causal
models. Here, these mechanisms can either be assigned manually if, for instance, prior knowledge about certain causal
relationships is known, or they can be assigned automatically using the :mod:`~dowhy.gcm.auto` module. For the latter,
we simply call:
>>> gcm.auto.assign_causal_mechanisms(causal_model, data)
In case we want to have more control over the assigned mechanisms, we can do this manually as well. For instance, we
can assign an empirical distribution to the root node X and linear additive noise models to nodes Y and Z:
>>> causal_model.set_causal_mechanism('X', gcm.EmpiricalDistribution())
>>> causal_model.set_causal_mechanism('Y', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
>>> causal_model.set_causal_mechanism('Z', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
Section :doc:`customizing_model_assignment` will go into more detail on how one can even define a completely customized
model or add their own implementation.
In the real world, the data comes as an opaque stream of values, where we typically don't know how one
variable influences another. The graphical causal models can help us to deconstruct these causal
relationships again, even though we didn't know them before.
Step 2: Fitting the SCM to the data
-----------------------------------
With the data at hand and the graph constructed earlier, we can now train the SCM using ``fit``:
>>> gcm.fit(causal_model, data)
Fitting means that we learn the generative models of the variables in the SCM according to the data.
Step 3: Answering a causal query based on the SCM
-------------------------------------------------
The last step, answering a causal question, is our actual goal. E.g. we could ask the question:
What will happen to the variable Z if I intervene on Y?
This can be done via the ``interventional_samples`` function. Here's how:
>>> samples = gcm.interventional_samples(causal_model,
>>> {'Y': lambda y: 2.34 },
>>> num_samples_to_draw=1000)
>>> samples.head()
X Y Z
0 1.186229 6.918607 20.682375
1 -0.758809 -0.749365 -2.530045
2 -1.177379 -5.678514 -17.110836
3 -1.211356 -2.152073 -6.212703
4 -0.100224 -0.285047 0.256471
This intervention says: "I'll ignore any causal effects of X on Y, and set every value of Y
to 2.34." So the distribution of X will remain unchanged, whereas values of Y will be at a fixed
value and Z will respond according to its causal model.
These are the basic steps that need to happen. While we can run these steps explicitly, often they get
executed as part of other steps, e.g. when fitting and re-fitting as part of computing confidence
intervals. The next section therefore dives into a more typical usage pattern of the ``dowhy.gcm`` package.
Typical usage of the ``dowhy.gcm`` package
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In practice, we may not execute the steps we've learned above explicitly and in this order, but they
get called by other abstractions. E.g. we usually want to use confidence intervals, when answering
causal questions, to quantify our confidence in the results. In this case, ``fit`` will be called on
our behalf, but we won't call it directly.
Modeling an SCM
---------------
The graph modeling is the same as in `Step 1: Modeling cause-effect relationships as a structural
causal model (SCM)`_. First we construct the causal graph:
>>> causal_model = gcm.StructuralCausalModel(nx.DiGraph([('X', 'Y'), ('Y', 'Z')])) # X → Y → Z
Answering a causal query with confidence intervals
--------------------------------------------------
When we answer a causal query without computing its confidence intervals, what we effectively get
are point estimates. These are not very useful when trying to assess the confidence in our results.
Instead of calling ``fit`` explicitly, we can achieve its execution by going through the API for
confidence intervals. Let's say we wanted to understand the direct arrow strengths between nodes and
quantify our confidence in those results. This is how we would do it:
>>> strength_median, strength_intervals = gcm.confidence_intervals(
>>> gcm.bootstrap_training_and_sampling(gcm.direct_arrow_strength,
>>> causal_model,
>>> bootstrap_training_data=data,
>>> target_node='Y'))
>>> strength_median, strength_intervals
({('X', 'Y'): 45.90886398636573, ('Z', 'Y'): 15.47129383737619},
{('X', 'Y'): array([42.88319632, 50.43890079]), ('Z', 'Y'): array([13.44202416, 17.74266107])})
In this case, ``fit`` will be called within ``bootstrap_training_and_sampling``, so there is no need
to do this ourselves.
The calling sequence of ``confidence_intervals`` and ``bootstrap_training_and_sampling`` is not
trivial, but exploits the fact that our APIs are composable. If not everything makes sense
to you yet, we recommend simply treating this calling sequence as a ready-to-use construct. Read:
"Get confidence intervals via bootstrapping training and sampling of direct arrow strength". For a
deeper understanding of this construct, see section :doc:`estimating_confidence_intervals`.
With this knowledge, we can now dive deep into the meaning and usages of causal queries in section
:doc:`answering_causal_questions/index`.
| petergtz | b43ce50d7cf58420b88605531b34b5c86f905112 | 560b3460aaf1106f7a053f99ede24cfed094db6f | (Not for this PR): What about renaming `bootstrap_training_and_sampling` to `fit_and_evaluate_subset` or something along the line? For me, this makes it clearer what it actually does. The multiple samples based on different data subsets are actually due to `confidence_intervals` calling it multiple times. | bloebp | 323 |
py-why/dowhy | 625 | Overhaul GCM introduction | * Remove bibtex from GCM introduction. This is covered in docs/source/cite.rst.
* Add 'Typical usage' section in GCM introduction | null | 2022-09-05 14:12:41+00:00 | 2022-11-02 10:01:42+00:00 | docs/source/user_guide/gcm_based_inference/introduction.rst | Introduction
============
Graphical causal model-based inference, or GCM-based inference for short, is an experimental addition to DoWhy, that
currently works separately from DoWhy's main API. Its experimental status also means that its API may
undergo breaking changes in the future. It will be forming a part of a joint,
`new API <https://github.com/py-why/dowhy/wiki/API-proposal-for-v1>`_. We welcome your comments.
The ``dowhy.gcm`` package provides a variety of ways to answer causal questions and we'll go through them in detail in
section :doc:`answering_causal_questions/index`. However, before diving into them, let's understand
the basic building blocks and usage patterns it is built upon.
The basic building blocks
^^^^^^^^^^^^^^^^^^^^^^^^^
All main features of the GCM-based inference in DoWhy are built around the concept of **graphical causal models**. A
graphical causal model consists of a causal directed acyclic graph (DAG) of variables and a **causal mechanism** for
each of the variables. A causal mechanism defines the conditional distribution of a variable given its parents in the
graph, or, in case of root node variables, simply its distribution.
The most general case of a GCM is a **probabilistic causal model** (PCM), where causal mechanisms are defined by
**conditional stochastic models** and **stochastic models**. In the ``dowhy.gcm`` package, these are represented by
:class:`~ProbabilisticCausalModel`, :class:`~ConditionalStochasticModel`, and :class:`~StochasticModel`.
.. image:: pcm.png
:width: 80%
:align: center
|
In practical terms however, we often use **structural causal models** (SCMs) to represent our GCMs,
and the causal mechanisms are defined by **functional causal models** (FCMs) for non-root nodes and **stochastic
models** for root nodes. An SCM implements the same traits as a PCM, but on top of that, its FCMs allow us to
reason *further* about its data generation process based on parents and noise, and hence, allow us e.g. to compute
counterfactuals.
.. image:: scm.png
:width: 80%
:align: center
|
To keep this introduction simple, we will stick with SCMs for now.
As mentioned above, a causal mechanism describes how the values of a node are influenced by the values of its parent
nodes. We will dive much deeper into the details of causal mechanisms and their meaning in section
:doc:`customizing_model_assignment`. But for this introduction, we will treat them as an opaque thing that is needed
to answer causal questions. With that in mind, the typical steps involved in answering a causal question, are:
1. **Modeling cause-effect relationships as a GCM (causal graph + causal mechanisms):**
::
causal_model = StructuralCausalModel(nx.DiGraph([('X', 'Y'), ('Y', 'Z')])) # X -> Y -> Z
auto.assign_causal_mechanisms(causal_model, data)
Or manually assign causal mechanisms:
::
causal_model.set_causal_mechanism('X', EmpiricalDistribution())
causal_model.set_causal_mechanism('Y', AdditiveNoiseModel(create_linear_regressor()))
causal_model.set_causal_mechanism('Z', AdditiveNoiseModel(create_linear_regressor()))
2. **Fitting the GCM to the data:**
::
fit(causal_model, data)
3. **Answering a causal query based on the GCM:**
::
results = <causal_query>(causal_model, ...)
Where ``<causal_query>`` can be one of multiple functions explained in
:doc:`answering_causal_questions/index`.
Let's look at each of these steps in more detail.
Step 1: Modeling cause-effect relationships as a structural causal model (SCM)
------------------------------------------------------------------------------
The first step is to model the cause-effect relationships between variables relevant
to our use case. We do that in form of a causal graph. A causal graph is a directed acyclic
graph (DAG) where an edge X→Y implies that X causes Y. Statistically, a causal graph encodes the
conditional independence relations between variables. Using the `networkx <https://networkx
.github.io/>`__ library, we can create causal graphs. In the snippet below, we create a chain
X→Y→Z:
>>> import networkx as nx
>>> causal_graph = nx.DiGraph([('X', 'Y'), ('Y', 'Z')])
To answer causal questions using causal graphs, we also have to know the nature of underlying
data-generating process of variables. A causal graph by itself, being a diagram, does not have
any information about the data-generating process. To introduce this data-generating process, we use an SCM that's
built on top of our causal graph:
>>> from dowhy import gcm
>>> causal_model = gcm.StructuralCausalModel(causal_graph)
At this point we would normally load our dataset. For this introduction, we generate
some synthetic data instead. The API takes data in form of Pandas DataFrames:
>>> import numpy as np, pandas as pd
>>> X = np.random.normal(loc=0, scale=1, size=1000)
>>> Y = 2 * X + np.random.normal(loc=0, scale=1, size=1000)
>>> Z = 3 * Y + np.random.normal(loc=0, scale=1, size=1000)
>>> data = pd.DataFrame(data=dict(X=X, Y=Y, Z=Z))
>>> data.head()
X Y Z
0 -2.253500 -3.638579 -10.370047
1 -1.078337 -2.114581 -6.028030
2 -0.962719 -2.157896 -5.750563
3 -0.300316 -0.440721 -2.619954
4 0.127419 0.158185 1.555927
Note how the columns X, Y, Z correspond to our nodes X, Y, Z in the graph constructed above. We can also see how the
values of X influence the values of Y and how the values of Y influence the values of Z in that data set.
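As a quick sanity check, which is purely illustrative and not required by the workflow, we can estimate the slopes
with plain numpy and should recover values close to the coefficients 2 and 3 used to generate the data:
>>> np.polyfit(X, Y, deg=1)[0] # roughly 2
>>> np.polyfit(Y, Z, deg=1)[0] # roughly 3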
The causal model created above allows us now to assign causal mechanisms to each node in the form of functional causal
models. Here, these mechanisms can either be assigned manually if, for instance, prior knowledge about certain causal
relationships is known, or they can be assigned automatically using the :mod:`~dowhy.gcm.auto` module. For the latter,
we simply call:
>>> gcm.auto.assign_causal_mechanisms(causal_model, data)
In case we want to have more control over the assigned mechanisms, we can do this manually as well. For instance, we
can assign an empirical distribution to the root node X and linear additive noise models to nodes Y and Z:
>>> causal_model.set_causal_mechanism('X', gcm.EmpiricalDistribution())
>>> causal_model.set_causal_mechanism('Y', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
>>> causal_model.set_causal_mechanism('Z', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
Section :doc:`customizing_model_assignment` will go into more detail on how one can even define a completely customized
model or add their own implementation.
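For instance, if we preferred a more flexible regressor for Y (a choice made here purely for illustration, assuming
this helper is available in the installed version), we could swap in a gradient boosted model from the
:mod:`~dowhy.gcm.ml` module:
>>> causal_model.set_causal_mechanism('Y', gcm.AdditiveNoiseModel(gcm.ml.create_hist_gradient_boost_regressor()))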
In the real world, the data comes as an opaque stream of values, where we typically don't know how one
variable influences another. The graphical causal models can help us to deconstruct these causal
relationships again, even though we didn't know them before.
Step 2: Fitting the SCM to the data
-----------------------------------
With the data at hand and the graph constructed earlier, we can now train the SCM using ``fit``:
>>> gcm.fit(causal_model, data)
Fitting means we learn the generative models of the variables in the SCM according to the data.
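If we are curious about what has been learned, we can inspect the mechanism of a node. As a small sketch, where the
exact object returned depends on what was assigned above:
>>> causal_model.causal_mechanism('Y') # e.g. an AdditiveNoiseModel wrapping the fitted regressor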
Step 3: Answering a causal query based on the SCM
-------------------------------------------------
The last step, answering a causal question, is our actual goal. E.g. we could ask the question:
What will happen to the variable Z if I intervene on Y?
This can be done via the ``interventional_samples`` function. Here's how:
>>> samples = gcm.interventional_samples(causal_model,
>>> {'Y': lambda y: 2.34 },
>>> num_samples_to_draw=1000)
>>> samples.head()
X Y Z
0 1.186229 6.918607 20.682375
1 -0.758809 -0.749365 -2.530045
2 -1.177379 -5.678514 -17.110836
3 -1.211356 -2.152073 -6.212703
4 -0.100224 -0.285047 0.256471
This intervention says: "I'll ignore any causal effects of X on Y, and set every value of Y
to 2.34." So the distribution of X will remain unchanged, whereas values of Y will be at a fixed
value and Z will respond according to its causal model.
With this knowledge, we can now dive deep into the meaning and usages of causal queries in section
:doc:`answering_causal_questions/index`.
| Introduction
============
Graphical causal model-based inference, or GCM-based inference for short, is an experimental addition to DoWhy, that
currently works separately from DoWhy's main API. Its experimental status also means that its API may
undergo breaking changes in the future. It will be forming a part of a joint,
`new API <https://github.com/py-why/dowhy/wiki/API-proposal-for-v1>`_. We welcome your comments.
The ``dowhy.gcm`` package provides a variety of ways to answer causal questions and we'll go through them in detail in
section :doc:`answering_causal_questions/index`. However, before diving into them, let's understand
the basic building blocks and usage patterns it is built upon.
The basic building blocks
^^^^^^^^^^^^^^^^^^^^^^^^^
All main features of the GCM-based inference in DoWhy are built around the concept of **graphical causal models**. A
graphical causal model consists of a causal directed acyclic graph (DAG) of variables and a **causal mechanism** for
each of the variables. A causal mechanism defines the conditional distribution of a variable given its parents in the
graph, or, in case of root node variables, simply its distribution.
The most general case of a GCM is a **probabilistic causal model** (PCM), where causal mechanisms are defined by
**conditional stochastic models** and **stochastic models**. In the ``dowhy.gcm`` package, these are represented by
:class:`~ProbabilisticCausalModel`, :class:`~ConditionalStochasticModel`, and :class:`~StochasticModel`.
.. image:: pcm.png
:width: 80%
:align: center
|
In practical terms however, we often use **structural causal models** (SCMs) to represent our GCMs,
and the causal mechanisms are defined by **functional causal models** (FCMs) for non-root nodes and **stochastic
models** for root nodes. An SCM implements the same traits as a PCM, but on top of that, its FCMs allow us to
reason *further* about its data generation process based on parents and noise, and hence, allow us e.g. to compute
counterfactuals.
.. image:: scm.png
:width: 80%
:align: center
|
To keep this introduction simple, we will stick with SCMs for now.
As mentioned above, a causal mechanism describes how the values of a node are influenced by the values of its parent
nodes. We will dive much deeper into the details of causal mechanisms and their meaning in section
:doc:`customizing_model_assignment`. But for this introduction, we will treat them as an opaque thing that is needed
to answer causal questions. With that in mind, the typical steps involved in answering a causal question, are:
1. **Modeling cause-effect relationships as a GCM (causal graph + causal mechanisms):**
::
causal_model = StructuralCausalModel(nx.DiGraph([('X', 'Y'), ('Y', 'Z')])) # X -> Y -> Z
auto.assign_causal_mechanisms(causal_model, data)
Or manually assign causal mechanisms:
::
causal_model.set_causal_mechanism('X', EmpiricalDistribution())
causal_model.set_causal_mechanism('Y', AdditiveNoiseModel(create_linear_regressor()))
causal_model.set_causal_mechanism('Z', AdditiveNoiseModel(create_linear_regressor()))
2. **Fitting the GCM to the data:**
::
fit(causal_model, data)
3. **Answering a causal query based on the GCM:**
::
results = <causal_query>(causal_model, ...)
Where ``<causal_query>`` can be one of multiple functions explained in
:doc:`answering_causal_questions/index`.
Let's look at each of these steps in more detail.
Step 1: Modeling cause-effect relationships as a structural causal model (SCM)
------------------------------------------------------------------------------
The first step is to model the cause-effect relationships between variables relevant
to our use case. We do that in form of a causal graph. A causal graph is a directed acyclic
graph (DAG) where an edge X→Y implies that X causes Y. Statistically, a causal graph encodes the
conditional independence relations between variables. Using the `networkx <https://networkx
.github.io/>`__ library, we can create causal graphs. In the snippet below, we create a chain
X→Y→Z:
>>> import networkx as nx
>>> causal_graph = nx.DiGraph([('X', 'Y'), ('Y', 'Z')])
To answer causal questions using causal graphs, we also have to know the nature of underlying
data-generating process of variables. A causal graph by itself, being a diagram, does not have
any information about the data-generating process. To introduce this data-generating process, we use an SCM that's
built on top of our causal graph:
>>> from dowhy import gcm
>>> causal_model = gcm.StructuralCausalModel(causal_graph)
At this point we would normally load our dataset. For this introduction, we generate
some synthetic data instead. The API takes data in form of Pandas DataFrames:
>>> import numpy as np, pandas as pd
>>> X = np.random.normal(loc=0, scale=1, size=1000)
>>> Y = 2 * X + np.random.normal(loc=0, scale=1, size=1000)
>>> Z = 3 * Y + np.random.normal(loc=0, scale=1, size=1000)
>>> data = pd.DataFrame(data=dict(X=X, Y=Y, Z=Z))
>>> data.head()
X Y Z
0 -2.253500 -3.638579 -10.370047
1 -1.078337 -2.114581 -6.028030
2 -0.962719 -2.157896 -5.750563
3 -0.300316 -0.440721 -2.619954
4 0.127419 0.158185 1.555927
Note how the columns X, Y, Z correspond to our nodes X, Y, Z in the graph constructed above. We can also see how the
values of X influence the values of Y and how the values of Y influence the values of Z in that data set.
The causal model created above allows us now to assign causal mechanisms to each node in the form of functional causal
models. Here, these mechanisms can either be assigned manually if, for instance, prior knowledge about certain causal
relationships is known, or they can be assigned automatically using the :mod:`~dowhy.gcm.auto` module. For the latter,
we simply call:
>>> gcm.auto.assign_causal_mechanisms(causal_model, data)
In case we want to have more control over the assigned mechanisms, we can do this manually as well. For instance, we
can assign an empirical distribution to the root node X and linear additive noise models to nodes Y and Z:
>>> causal_model.set_causal_mechanism('X', gcm.EmpiricalDistribution())
>>> causal_model.set_causal_mechanism('Y', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
>>> causal_model.set_causal_mechanism('Z', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
Section :doc:`customizing_model_assignment` will go into more detail on how one can even define a completely customized
model or add their own implementation.
In the real world, the data comes as an opaque stream of values, where we typically don't know how one
variable influences another. The graphical causal models can help us to deconstruct these causal
relationships again, even though we didn't know them before.
Step 2: Fitting the SCM to the data
-----------------------------------
With the data at hand and the graph constructed earlier, we can now train the SCM using ``fit``:
>>> gcm.fit(causal_model, data)
Fitting means we learn the generative models of the variables in the SCM according to the data.
Step 3: Answering a causal query based on the SCM
-------------------------------------------------
The last step, answering a causal question, is our actual goal. E.g. we could ask the question:
What will happen to the variable Z if I intervene on Y?
This can be done via the ``interventional_samples`` function. Here's how:
>>> samples = gcm.interventional_samples(causal_model,
>>> {'Y': lambda y: 2.34 },
>>> num_samples_to_draw=1000)
>>> samples.head()
X Y Z
0 1.186229 6.918607 20.682375
1 -0.758809 -0.749365 -2.530045
2 -1.177379 -5.678514 -17.110836
3 -1.211356 -2.152073 -6.212703
4 -0.100224 -0.285047 0.256471
This intervention says: "I'll ignore any causal effects of X on Y, and set every value of Y
to 2.34." So the distribution of X will remain unchanged, whereas values of Y will be at a fixed
value and Z will respond according to its causal model.
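For comparison, and assuming the fitted model from above, purely observational samples without any intervention can
be drawn as well:
>>> gcm.draw_samples(causal_model, num_samples=1000).head()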
These are the basic steps that need to happen. While we can run these steps explicitly, often they get
executed as part of other steps, e.g. when fitting and re-fitting as part of computing confidence
intervals. The next section therefore dives into a more typical usage pattern of the ``dowhy.gcm`` package.
Typical usage of the ``dowhy.gcm`` package
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In practice, we may not execute the steps we've learned above explicitly and in this order; instead, they
are called by other abstractions. For example, when answering causal questions, we usually want to use
confidence intervals to quantify our confidence in the results. In this case, ``fit`` will be called on
our behalf, but we won't call it directly.
Modeling an SCM
---------------
The graph modeling is the same as in `Step 1: Modeling cause-effect relationships as a structural
causal model (SCM)`_. First we construct the causal graph:
>>> causal_model = gcm.StructuralCausalModel(nx.DiGraph([('X', 'Y'), ('Y', 'Z')])) # X → Y → Z
Answering a causal query with confidence intervals
--------------------------------------------------
When we answer a causal query without computing its confidence intervals, what we effectively get
are point estimates. These are not very useful when trying to assess the confidence in our results.
Instead of calling ``fit`` explicitly, we can achieve its execution by going through the API for
confidence intervals. Let's say we wanted to understand the direct arrow strengths between nodes and
quantify our confidence in those results. This is how we would do it:
>>> strength_median, strength_intervals = gcm.confidence_intervals(
>>> gcm.bootstrap_training_and_sampling(gcm.direct_arrow_strength,
>>> causal_model,
>>> bootstrap_training_data=data,
>>> target_node='Y'))
>>> strength_median, strength_intervals
({('X', 'Y'): 45.90886398636573, ('Z', 'Y'): 15.47129383737619},
{('X', 'Y'): array([42.88319632, 50.43890079]), ('Z', 'Y'): array([13.44202416, 17.74266107])})
In this case, ``fit`` will be called within ``bootstrap_training_and_sampling``, so there is no need
to do this ourselves.
The calling sequence of ``confidence_intervals`` and ``bootstrap_training_and_sampling`` is not
trivial, but exploits the fact that our APIs are composable. If not everything makes sense
to you yet, we recommend simply treating this calling sequence as a ready-to-use construct. Read:
"Get confidence intervals via bootstrapping training and sampling of direct arrow strength". For a
deeper understanding of this construct, see section :doc:`estimating_confidence_intervals`.
With this knowledge, we can now dive deep into the meaning and usages of causal queries in section
:doc:`answering_causal_questions/index`.
| petergtz | b43ce50d7cf58420b88605531b34b5c86f905112 | 560b3460aaf1106f7a053f99ede24cfed094db6f | Yea, I'm open to renaming that. We could even make it short and call it `fit_and_compute`. Then it would also nice read "fit and compute direct arrow strength" or similar. | petergtz | 324 |
py-why/dowhy | 625 | Overhaul GCM introduction | * Remove bibtex from GCM introduction. This is covered in docs/source/cite.rst.
* Add 'Typical usage' section in GCM introduction | null | 2022-09-05 14:12:41+00:00 | 2022-11-02 10:01:42+00:00 | docs/source/user_guide/gcm_based_inference/introduction.rst | Introduction
============
Graphical causal model-based inference, or GCM-based inference for short, is an experimental addition to DoWhy, that
currently works separately from DoWhy's main API. Its experimental status also means that its API may
undergo breaking changes in the future. It will be forming a part of a joint,
`new API <https://github.com/py-why/dowhy/wiki/API-proposal-for-v1>`_. We welcome your comments.
The ``dowhy.gcm`` package provides a variety of ways to answer causal questions and we'll go through them in detail in
section :doc:`answering_causal_questions/index`. However, before diving into them, let's understand
the basic building blocks and usage patterns it is built upon.
The basic building blocks
^^^^^^^^^^^^^^^^^^^^^^^^^
All main features of the GCM-based inference in DoWhy are built around the concept of **graphical causal models**. A
graphical causal model consists of a causal directed acyclic graph (DAG) of variables and a **causal mechanism** for
each of the variables. A causal mechanism defines the conditional distribution of a variable given its parents in the
graph, or, in case of root node variables, simply its distribution.
The most general case of a GCM is a **probabilistic causal model** (PCM), where causal mechanisms are defined by
**conditional stochastic models** and **stochastic models**. In the ``dowhy.gcm`` package, these are represented by
:class:`~ProbabilisticCausalModel`, :class:`~ConditionalStochasticModel`, and :class:`~StochasticModel`.
.. image:: pcm.png
:width: 80%
:align: center
|
In practical terms however, we often use **structural causal models** (SCMs) to represent our GCMs,
and the causal mechanisms are defined by **functional causal models** (FCMs) for non-root nodes and **stochastic
models** for root nodes. An SCM implements the same traits as a PCM, but on top of that, its FCMs allow us to
reason *further* about its data generation process based on parents and noise, and hence, allow us e.g. to compute
counterfactuals.
.. image:: scm.png
:width: 80%
:align: center
|
To keep this introduction simple, we will stick with SCMs for now.
As mentioned above, a causal mechanism describes how the values of a node are influenced by the values of its parent
nodes. We will dive much deeper into the details of causal mechanisms and their meaning in section
:doc:`customizing_model_assignment`. But for this introduction, we will treat them as an opaque thing that is needed
to answer causal questions. With that in mind, the typical steps involved in answering a causal question, are:
1. **Modeling cause-effect relationships as a GCM (causal graph + causal mechanisms):**
::
causal_model = StructuralCausalModel(nx.DiGraph([('X', 'Y'), ('Y', 'Z')])) # X -> Y -> Z
auto.assign_causal_mechanisms(causal_model, data)
Or manually assign causal mechanisms:
::
causal_model.set_causal_mechanism('X', EmpiricalDistribution())
causal_model.set_causal_mechanism('Y', AdditiveNoiseModel(create_linear_regressor()))
causal_model.set_causal_mechanism('Z', AdditiveNoiseModel(create_linear_regressor()))
2. **Fitting the GCM to the data:**
::
fit(causal_model, data)
3. **Answering a causal query based on the GCM:**
::
results = <causal_query>(causal_model, ...)
Where ``<causal_query>`` can be one of multiple functions explained in
:doc:`answering_causal_questions/index`.
Let's look at each of these steps in more detail.
Step 1: Modeling cause-effect relationships as a structural causal model (SCM)
------------------------------------------------------------------------------
The first step is to model the cause-effect relationships between variables relevant
to our use case. We do that in form of a causal graph. A causal graph is a directed acyclic
graph (DAG) where an edge X→Y implies that X causes Y. Statistically, a causal graph encodes the
conditional independence relations between variables. Using the `networkx <https://networkx
.github.io/>`__ library, we can create causal graphs. In the snippet below, we create a chain
X→Y→Z:
>>> import networkx as nx
>>> causal_graph = nx.DiGraph([('X', 'Y'), ('Y', 'Z')])
To answer causal questions using causal graphs, we also have to know the nature of underlying
data-generating process of variables. A causal graph by itself, being a diagram, does not have
any information about the data-generating process. To introduce this data-generating process, we use an SCM that's
built on top of our causal graph:
>>> from dowhy import gcm
>>> causal_model = gcm.StructuralCausalModel(causal_graph)
At this point we would normally load our dataset. For this introduction, we generate
some synthetic data instead. The API takes data in form of Pandas DataFrames:
>>> import numpy as np, pandas as pd
>>> X = np.random.normal(loc=0, scale=1, size=1000)
>>> Y = 2 * X + np.random.normal(loc=0, scale=1, size=1000)
>>> Z = 3 * Y + np.random.normal(loc=0, scale=1, size=1000)
>>> data = pd.DataFrame(data=dict(X=X, Y=Y, Z=Z))
>>> data.head()
X Y Z
0 -2.253500 -3.638579 -10.370047
1 -1.078337 -2.114581 -6.028030
2 -0.962719 -2.157896 -5.750563
3 -0.300316 -0.440721 -2.619954
4 0.127419 0.158185 1.555927
Note how the columns X, Y, Z correspond to our nodes X, Y, Z in the graph constructed above. We can also see how the
values of X influence the values of Y and how the values of Y influence the values of Z in that data set.
The causal model created above allows us now to assign causal mechanisms to each node in the form of functional causal
models. Here, these mechanisms can either be assigned manually if, for instance, prior knowledge about certain causal
relationships is known, or they can be assigned automatically using the :mod:`~dowhy.gcm.auto` module. For the latter,
we simply call:
>>> gcm.auto.assign_causal_mechanisms(causal_model, data)
In case we want to have more control over the assigned mechanisms, we can do this manually as well. For instance, we
can assign an empirical distribution to the root node X and linear additive noise models to nodes Y and Z:
>>> causal_model.set_causal_mechanism('X', gcm.EmpiricalDistribution())
>>> causal_model.set_causal_mechanism('Y', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
>>> causal_model.set_causal_mechanism('Z', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
Section :doc:`customizing_model_assignment` will go into more detail on how one can even define a completely customized
model or add their own implementation.
In the real world, the data comes as an opaque stream of values, where we typically don't know how one
variable influences another. The graphical causal models can help us to deconstruct these causal
relationships again, even though we didn't know them before.
Step 2: Fitting the SCM to the data
-----------------------------------
With the data at hand and the graph constructed earlier, we can now train the SCM using ``fit``:
>>> gcm.fit(causal_model, data)
Fitting means we learn the generative models of the variables in the SCM according to the data.
Step 3: Answering a causal query based on the SCM
-------------------------------------------------
The last step, answering a causal question, is our actual goal. E.g. we could ask the question:
What will happen to the variable Z if I intervene on Y?
This can be done via the ``interventional_samples`` function. Here's how:
>>> samples = gcm.interventional_samples(causal_model,
>>> {'Y': lambda y: 2.34 },
>>> num_samples_to_draw=1000)
>>> samples.head()
X Y Z
0 1.186229 6.918607 20.682375
1 -0.758809 -0.749365 -2.530045
2 -1.177379 -5.678514 -17.110836
3 -1.211356 -2.152073 -6.212703
4 -0.100224 -0.285047 0.256471
This intervention says: "I'll ignore any causal effects of X on Y, and set every value of Y
to 2.34." So the distribution of X will remain unchanged, whereas values of Y will be at a fixed
value and Z will respond according to its causal model.
With this knowledge, we can now dive deep into the meaning and usages of causal queries in section
:doc:`answering_causal_questions/index`.
| Introduction
============
Graphical causal model-based inference, or GCM-based inference for short, is an experimental addition to DoWhy, that
currently works separately from DoWhy's main API. Its experimental status also means that its API may
undergo breaking changes in the future. It will be forming a part of a joint,
`new API <https://github.com/py-why/dowhy/wiki/API-proposal-for-v1>`_. We welcome your comments.
The ``dowhy.gcm`` package provides a variety of ways to answer causal questions and we'll go through them in detail in
section :doc:`answering_causal_questions/index`. However, before diving into them, let's understand
the basic building blocks and usage patterns it is built upon.
The basic building blocks
^^^^^^^^^^^^^^^^^^^^^^^^^
All main features of the GCM-based inference in DoWhy are built around the concept of **graphical causal models**. A
graphical causal model consists of a causal directed acyclic graph (DAG) of variables and a **causal mechanism** for
each of the variables. A causal mechanism defines the conditional distribution of a variable given its parents in the
graph, or, in case of root node variables, simply its distribution.
The most general case of a GCM is a **probabilistic causal model** (PCM), where causal mechanisms are defined by
**conditional stochastic models** and **stochastic models**. In the ``dowhy.gcm`` package, these are represented by
:class:`~ProbabilisticCausalModel`, :class:`~ConditionalStochasticModel`, and :class:`~StochasticModel`.
.. image:: pcm.png
:width: 80%
:align: center
|
In practical terms however, we often use **structural causal models** (SCMs) to represent our GCMs,
and the causal mechanisms are defined by **functional causal models** (FCMs) for non-root nodes and **stochastic
models** for root nodes. An SCM implements the same traits as a PCM, but on top of that, its FCMs allow us to
reason *further* about its data generation process based on parents and noise, and hence, allow us e.g. to compute
counterfactuals.
.. image:: scm.png
:width: 80%
:align: center
|
To keep this introduction simple, we will stick with SCMs for now.
As mentioned above, a causal mechanism describes how the values of a node are influenced by the values of its parent
nodes. We will dive much deeper into the details of causal mechanisms and their meaning in section
:doc:`customizing_model_assignment`. But for this introduction, we will treat them as an opaque thing that is needed
to answer causal questions. With that in mind, the typical steps involved in answering a causal question, are:
1. **Modeling cause-effect relationships as a GCM (causal graph + causal mechanisms):**
::
causal_model = StructuralCausalModel(nx.DiGraph([('X', 'Y'), ('Y', 'Z')])) # X -> Y -> Z
auto.assign_causal_mechanisms(causal_model, data)
Or manually assign causal mechanisms:
::
causal_model.set_causal_mechanism('X', EmpiricalDistribution())
causal_model.set_causal_mechanism('Y', AdditiveNoiseModel(create_linear_regressor()))
causal_model.set_causal_mechanism('Z', AdditiveNoiseModel(create_linear_regressor()))
2. **Fitting the GCM to the data:**
::
fit(causal_model, data)
3. **Answering a causal query based on the GCM:**
::
results = <causal_query>(causal_model, ...)
Where ``<causal_query>`` can be one of multiple functions explained in
:doc:`answering_causal_questions/index`.
Let's look at each of these steps in more detail.
Step 1: Modeling cause-effect relationships as a structural causal model (SCM)
------------------------------------------------------------------------------
The first step is to model the cause-effect relationships between variables relevant
to our use case. We do that in form of a causal graph. A causal graph is a directed acyclic
graph (DAG) where an edge X→Y implies that X causes Y. Statistically, a causal graph encodes the
conditional independence relations between variables. Using the `networkx <https://networkx
.github.io/>`__ library, we can create causal graphs. In the snippet below, we create a chain
X→Y→Z:
>>> import networkx as nx
>>> causal_graph = nx.DiGraph([('X', 'Y'), ('Y', 'Z')])
To answer causal questions using causal graphs, we also have to know the nature of underlying
data-generating process of variables. A causal graph by itself, being a diagram, does not have
any information about the data-generating process. To introduce this data-generating process, we use an SCM that's
built on top of our causal graph:
>>> from dowhy import gcm
>>> causal_model = gcm.StructuralCausalModel(causal_graph)
At this point we would normally load our dataset. For this introduction, we generate
some synthetic data instead. The API takes data in form of Pandas DataFrames:
>>> import numpy as np, pandas as pd
>>> X = np.random.normal(loc=0, scale=1, size=1000)
>>> Y = 2 * X + np.random.normal(loc=0, scale=1, size=1000)
>>> Z = 3 * Y + np.random.normal(loc=0, scale=1, size=1000)
>>> data = pd.DataFrame(data=dict(X=X, Y=Y, Z=Z))
>>> data.head()
X Y Z
0 -2.253500 -3.638579 -10.370047
1 -1.078337 -2.114581 -6.028030
2 -0.962719 -2.157896 -5.750563
3 -0.300316 -0.440721 -2.619954
4 0.127419 0.158185 1.555927
Note how the columns X, Y, Z correspond to our nodes X, Y, Z in the graph constructed above. We can also see how the
values of X influence the values of Y and how the values of Y influence the values of Z in that data set.
The causal model created above allows us now to assign causal mechanisms to each node in the form of functional causal
models. Here, these mechanisms can either be assigned manually if, for instance, prior knowledge about certain causal
relationships is known, or they can be assigned automatically using the :mod:`~dowhy.gcm.auto` module. For the latter,
we simply call:
>>> gcm.auto.assign_causal_mechanisms(causal_model, data)
In case we want to have more control over the assigned mechanisms, we can do this manually as well. For instance, we
can assign an empirical distribution to the root node X and linear additive noise models to nodes Y and Z:
>>> causal_model.set_causal_mechanism('X', gcm.EmpiricalDistribution())
>>> causal_model.set_causal_mechanism('Y', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
>>> causal_model.set_causal_mechanism('Z', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
Section :doc:`customizing_model_assignment` will go into more detail on how one can even define a completely customized
model or add their own implementation.
In the real world, the data comes as an opaque stream of values, where we typically don't know how one
variable influences another. The graphical causal models can help us to deconstruct these causal
relationships again, even though we didn't know them before.
Step 2: Fitting the SCM to the data
-----------------------------------
With the data at hand and the graph constructed earlier, we can now train the SCM using ``fit``:
>>> gcm.fit(causal_model, data)
Fitting means we learn the generative models of the variables in the SCM according to the data.
Step 3: Answering a causal query based on the SCM
-------------------------------------------------
The last step, answering a causal question, is our actual goal. E.g. we could ask the question:
What will happen to the variable Z if I intervene on Y?
This can be done via the ``interventional_samples`` function. Here's how:
>>> samples = gcm.interventional_samples(causal_model,
>>> {'Y': lambda y: 2.34 },
>>> num_samples_to_draw=1000)
>>> samples.head()
X Y Z
0 1.186229 6.918607 20.682375
1 -0.758809 -0.749365 -2.530045
2 -1.177379 -5.678514 -17.110836
3 -1.211356 -2.152073 -6.212703
4 -0.100224 -0.285047 0.256471
This intervention says: "I'll ignore any causal effects of X on Y, and set every value of Y
to 2.34." So the distribution of X will remain unchanged, whereas values of Y will be at a fixed
value and Z will respond according to its causal model.
These are the basic steps that need to happen. While we can run these steps explicitly, often they get
executed as part of other steps, e.g. when fitting and re-fitting as part of computing confidence
intervals. The next section therefore dives into a more typical usage pattern of the ``dowhy.gcm`` package.
Typical usage of the ``dowhy.gcm`` package
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In practice, we may not execute the steps we've learned above explicitly and in this order; instead, they
are called by other abstractions. For example, when answering causal questions, we usually want to use
confidence intervals to quantify our confidence in the results. In this case, ``fit`` will be called on
our behalf, but we won't call it directly.
Modeling an SCM
---------------
The graph modeling is the same as in `Step 1: Modeling cause-effect relationships as a structural
causal model (SCM)`_. First we construct the causal graph:
>>> causal_model = gcm.StructuralCausalModel(nx.DiGraph([('X', 'Y'), ('Y', 'Z')])) # X → Y → Z
Answering a causal query with confidence intervals
--------------------------------------------------
When we answer a causal query without computing its confidence intervals, what we effectively get
are point estimates. These are not very useful when trying to assess the confidence in our results.
Instead of calling ``fit`` explicitly, we can achieve its execution by going through the API for
confidence intervals. Let's say we wanted to understand the direct arrow strengths between nodes and
quantify our confidence in those results. This is how we would do it:
>>> strength_median, strength_intervals = gcm.confidence_intervals(
>>> gcm.bootstrap_training_and_sampling(gcm.direct_arrow_strength,
>>> causal_model,
>>> bootstrap_training_data=data,
>>> target_node='Y'))
>>> strength_median, strength_intervals
({('X', 'Y'): 45.90886398636573, ('Z', 'Y'): 15.47129383737619},
{('X', 'Y'): array([42.88319632, 50.43890079]), ('Z', 'Y'): array([13.44202416, 17.74266107])})
In this case, ``fit`` will be called within ``bootstrap_training_and_sampling``, so there is no need
to do this ourselves.
The calling sequence of ``confidence_intervals`` and ``bootstrap_training_and_sampling`` is not
trivial, but exploits the fact that our APIs are composable. If not everything makes sense
to you yet, we recommend simply treating this calling sequence as a ready-to-use construct. Read:
"Get confidence intervals via bootstrapping training and sampling of direct arrow strength". For a
deeper understanding of this construct, see section :doc:`estimating_confidence_intervals`.
With this knowledge, we can now dive deep into the meaning and usages of causal queries in section
:doc:`answering_causal_questions/index`.
| petergtz | b43ce50d7cf58420b88605531b34b5c86f905112 | 560b3460aaf1106f7a053f99ede24cfed094db6f | No, I guess this is a copy-and-paste issue. I'll remove it. | petergtz | 325 |
py-why/dowhy | 625 | Overhaul GCM introduction | * Remove bibtex from GCM introduction. This is covered in docs/source/cite.rst.
* Add 'Typical usage' section in GCM introduction | null | 2022-09-05 14:12:41+00:00 | 2022-11-02 10:01:42+00:00 | docs/source/user_guide/gcm_based_inference/introduction.rst | Introduction
============
Graphical causal model-based inference, or GCM-based inference for short, is an experimental addition to DoWhy, that
currently works separately from DoWhy's main API. Its experimental status also means that its API may
undergo breaking changes in the future. It will be forming a part of a joint,
`new API <https://github.com/py-why/dowhy/wiki/API-proposal-for-v1>`_. We welcome your comments.
The ``dowhy.gcm`` package provides a variety of ways to answer causal questions and we'll go through them in detail in
section :doc:`answering_causal_questions/index`. However, before diving into them, let's understand
the basic building blocks and usage patterns it is built upon.
The basic building blocks
^^^^^^^^^^^^^^^^^^^^^^^^^
All main features of the GCM-based inference in DoWhy are built around the concept of **graphical causal models**. A
graphical causal model consists of a causal directed acyclic graph (DAG) of variables and a **causal mechanism** for
each of the variables. A causal mechanism defines the conditional distribution of a variable given its parents in the
graph, or, in case of root node variables, simply its distribution.
The most general case of a GCM is a **probabilistic causal model** (PCM), where causal mechanisms are defined by
**conditional stochastic models** and **stochastic models**. In the ``dowhy.gcm`` package, these are represented by
:class:`~ProbabilisticCausalModel`, :class:`~ConditionalStochasticModel`, and :class:`~StochasticModel`.
.. image:: pcm.png
:width: 80%
:align: center
|
In practical terms however, we often use **structural causal models** (SCMs) to represent our GCMs,
and the causal mechanisms are defined by **functional causal models** (FCMs) for non-root nodes and **stochastic
models** for root nodes. An SCM implements the same traits as a PCM, but on top of that, its FCMs allow us to
reason *further* about its data generation process based on parents and noise, and hence, allow us e.g. to compute
counterfactuals.
.. image:: scm.png
:width: 80%
:align: center
|
To keep this introduction simple, we will stick with SCMs for now.
As mentioned above, a causal mechanism describes how the values of a node are influenced by the values of its parent
nodes. We will dive much deeper into the details of causal mechanisms and their meaning in section
:doc:`customizing_model_assignment`. But for this introduction, we will treat them as an opaque thing that is needed
to answer causal questions. With that in mind, the typical steps involved in answering a causal question, are:
1. **Modeling cause-effect relationships as a GCM (causal graph + causal mechanisms):**
::
causal_model = StructuralCausalModel(nx.DiGraph([('X', 'Y'), ('Y', 'Z')])) # X -> Y -> Z
auto.assign_causal_mechanisms(causal_model, data)
Or manually assign causal mechanisms:
::
causal_model.set_causal_mechanism('X', EmpiricalDistribution())
causal_model.set_causal_mechanism('Y', AdditiveNoiseModel(create_linear_regressor()))
causal_model.set_causal_mechanism('Z', AdditiveNoiseModel(create_linear_regressor()))
2. **Fitting the GCM to the data:**
::
fit(causal_model, data)
3. **Answering a causal query based on the GCM:**
::
results = <causal_query>(causal_model, ...)
Where ``<causal_query>`` can be one of multiple functions explained in
:doc:`answering_causal_questions/index`.
Let's look at each of these steps in more detail.
Step 1: Modeling cause-effect relationships as a structural causal model (SCM)
------------------------------------------------------------------------------
The first step is to model the cause-effect relationships between variables relevant
to our use case. We do that in form of a causal graph. A causal graph is a directed acyclic
graph (DAG) where an edge X→Y implies that X causes Y. Statistically, a causal graph encodes the
conditional independence relations between variables. Using the `networkx <https://networkx
.github.io/>`__ library, we can create causal graphs. In the snippet below, we create a chain
X→Y→Z:
>>> import networkx as nx
>>> causal_graph = nx.DiGraph([('X', 'Y'), ('Y', 'Z')])
To answer causal questions using causal graphs, we also have to know the nature of underlying
data-generating process of variables. A causal graph by itself, being a diagram, does not have
any information about the data-generating process. To introduce this data-generating process, we use an SCM that's
built on top of our causal graph:
>>> from dowhy import gcm
>>> causal_model = gcm.StructuralCausalModel(causal_graph)
At this point we would normally load our dataset. For this introduction, we generate
some synthetic data instead. The API takes data in form of Pandas DataFrames:
>>> import numpy as np, pandas as pd
>>> X = np.random.normal(loc=0, scale=1, size=1000)
>>> Y = 2 * X + np.random.normal(loc=0, scale=1, size=1000)
>>> Z = 3 * Y + np.random.normal(loc=0, scale=1, size=1000)
>>> data = pd.DataFrame(data=dict(X=X, Y=Y, Z=Z))
>>> data.head()
X Y Z
0 -2.253500 -3.638579 -10.370047
1 -1.078337 -2.114581 -6.028030
2 -0.962719 -2.157896 -5.750563
3 -0.300316 -0.440721 -2.619954
4 0.127419 0.158185 1.555927
Note how the columns X, Y, Z correspond to our nodes X, Y, Z in the graph constructed above. We can also see how the
values of X influence the values of Y and how the values of Y influence the values of Z in that data set.
The causal model created above allows us now to assign causal mechanisms to each node in the form of functional causal
models. Here, these mechanisms can either be assigned manually if, for instance, prior knowledge about certain causal
relationships is known, or they can be assigned automatically using the :mod:`~dowhy.gcm.auto` module. For the latter,
we simply call:
>>> gcm.auto.assign_causal_mechanisms(causal_model, data)
In case we want to have more control over the assigned mechanisms, we can do this manually as well. For instance, we
can assign an empirical distribution to the root node X and linear additive noise models to nodes Y and Z:
>>> causal_model.set_causal_mechanism('X', gcm.EmpiricalDistribution())
>>> causal_model.set_causal_mechanism('Y', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
>>> causal_model.set_causal_mechanism('Z', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
Section :doc:`customizing_model_assignment` will go into more detail on how one can even define a completely customized
model or add their own implementation.
In the real world, the data comes as an opaque stream of values, where we typically don't know how one
variable influences another. The graphical causal models can help us to deconstruct these causal
relationships again, even though we didn't know them before.
Step 2: Fitting the SCM to the data
-----------------------------------
With the data at hand and the graph constructed earlier, we can now train the SCM using ``fit``:
>>> gcm.fit(causal_model, data)
Fitting means we learn the generative models of the variables in the SCM according to the data.
Step 3: Answering a causal query based on the SCM
-------------------------------------------------
The last step, answering a causal question, is our actual goal. E.g. we could ask the question:
What will happen to the variable Z if I intervene on Y?
This can be done via the ``interventional_samples`` function. Here's how:
>>> samples = gcm.interventional_samples(causal_model,
>>> {'Y': lambda y: 2.34 },
>>> num_samples_to_draw=1000)
>>> samples.head()
X Y Z
0 1.186229 6.918607 20.682375
1 -0.758809 -0.749365 -2.530045
2 -1.177379 -5.678514 -17.110836
3 -1.211356 -2.152073 -6.212703
4 -0.100224 -0.285047 0.256471
This intervention says: "I'll ignore any causal effects of X on Y, and set every value of Y
to 2.34." So the distribution of X will remain unchanged, whereas values of Y will be at a fixed
value and Z will respond according to its causal model.
With this knowledge, we can now dive deep into the meaning and usages of causal queries in section
:doc:`answering_causal_questions/index`.
| Introduction
============
Graphical causal model-based inference, or GCM-based inference for short, is an experimental addition to DoWhy, that
currently works separately from DoWhy's main API. Its experimental status also means that its API may
undergo breaking changes in the future. It will be forming a part of a joint,
`new API <https://github.com/py-why/dowhy/wiki/API-proposal-for-v1>`_. We welcome your comments.
The ``dowhy.gcm`` package provides a variety of ways to answer causal questions and we'll go through them in detail in
section :doc:`answering_causal_questions/index`. However, before diving into them, let's understand
the basic building blocks and usage patterns it is built upon.
The basic building blocks
^^^^^^^^^^^^^^^^^^^^^^^^^
All main features of the GCM-based inference in DoWhy are built around the concept of **graphical causal models**. A
graphical causal model consists of a causal directed acyclic graph (DAG) of variables and a **causal mechanism** for
each of the variables. A causal mechanism defines the conditional distribution of a variable given its parents in the
graph, or, in case of root node variables, simply its distribution.
The most general case of a GCM is a **probabilistic causal model** (PCM), where causal mechanisms are defined by
**conditional stochastic models** and **stochastic models**. In the ``dowhy.gcm`` package, these are represented by
:class:`~ProbabilisticCausalModel`, :class:`~ConditionalStochasticModel`, and :class:`~StochasticModel`.
.. image:: pcm.png
:width: 80%
:align: center
|
In practical terms however, we often use **structural causal models** (SCMs) to represent our GCMs,
and the causal mechanisms are defined by **functional causal models** (FCMs) for non-root nodes and **stochastic
models** for root nodes. An SCM implements the same traits as a PCM, but on top of that, its FCMs allow us to
reason *further* about its data generation process based on parents and noise, and hence, allow us e.g. to compute
counterfactuals.
.. image:: scm.png
:width: 80%
:align: center
|
To keep this introduction simple, we will stick with SCMs for now.
As mentioned above, a causal mechanism describes how the values of a node are influenced by the values of its parent
nodes. We will dive much deeper into the details of causal mechanisms and their meaning in section
:doc:`customizing_model_assignment`. But for this introduction, we will treat them as an opaque thing that is needed
to answer causal questions. With that in mind, the typical steps involved in answering a causal question, are:
1. **Modeling cause-effect relationships as a GCM (causal graph + causal mechanisms):**
::
causal_model = StructuralCausalModel(nx.DiGraph([('X', 'Y'), ('Y', 'Z')])) # X -> Y -> Z
auto.assign_causal_mechanisms(causal_model, data)
Or manually assign causal mechanisms:
::
causal_model.set_causal_mechanism('X', EmpiricalDistribution())
causal_model.set_causal_mechanism('Y', AdditiveNoiseModel(create_linear_regressor()))
causal_model.set_causal_mechanism('Z', AdditiveNoiseModel(create_linear_regressor()))
2. **Fitting the GCM to the data:**
::
fit(causal_model, data)
3. **Answering a causal query based on the GCM:**
::
results = <causal_query>(causal_model, ...)
Where ``<causal_query>`` can be one of multiple functions explained in
:doc:`answering_causal_questions/index`.
Let's look at each of these steps in more detail.
Step 1: Modeling cause-effect relationships as a structural causal model (SCM)
------------------------------------------------------------------------------
The first step is to model the cause-effect relationships between variables relevant
to our use case. We do that in form of a causal graph. A causal graph is a directed acyclic
graph (DAG) where an edge X→Y implies that X causes Y. Statistically, a causal graph encodes the
conditional independence relations between variables. Using the `networkx <https://networkx
.github.io/>`__ library, we can create causal graphs. In the snippet below, we create a chain
X→Y→Z:
>>> import networkx as nx
>>> causal_graph = nx.DiGraph([('X', 'Y'), ('Y', 'Z')])
To answer causal questions using causal graphs, we also have to know the nature of underlying
data-generating process of variables. A causal graph by itself, being a diagram, does not have
any information about the data-generating process. To introduce this data-generating process, we use an SCM that's
built on top of our causal graph:
>>> from dowhy import gcm
>>> causal_model = gcm.StructuralCausalModel(causal_graph)
At this point we would normally load our dataset. For this introduction, we generate
some synthetic data instead. The API takes data in form of Pandas DataFrames:
>>> import numpy as np, pandas as pd
>>> X = np.random.normal(loc=0, scale=1, size=1000)
>>> Y = 2 * X + np.random.normal(loc=0, scale=1, size=1000)
>>> Z = 3 * Y + np.random.normal(loc=0, scale=1, size=1000)
>>> data = pd.DataFrame(data=dict(X=X, Y=Y, Z=Z))
>>> data.head()
X Y Z
0 -2.253500 -3.638579 -10.370047
1 -1.078337 -2.114581 -6.028030
2 -0.962719 -2.157896 -5.750563
3 -0.300316 -0.440721 -2.619954
4 0.127419 0.158185 1.555927
Note how the columns X, Y, Z correspond to our nodes X, Y, Z in the graph constructed above. We can also see how the
values of X influence the values of Y and how the values of Y influence the values of Z in that data set.
The causal model created above allows us now to assign causal mechanisms to each node in the form of functional causal
models. Here, these mechanisms can either be assigned manually if, for instance, prior knowledge about certain causal
relationships is known, or they can be assigned automatically using the :mod:`~dowhy.gcm.auto` module. For the latter,
we simply call:
>>> gcm.auto.assign_causal_mechanisms(causal_model, data)
In case we want to have more control over the assigned mechanisms, we can do this manually as well. For instance, we
can assign an empirical distribution to the root node X and linear additive noise models to nodes Y and Z:
>>> causal_model.set_causal_mechanism('X', gcm.EmpiricalDistribution())
>>> causal_model.set_causal_mechanism('Y', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
>>> causal_model.set_causal_mechanism('Z', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
Section :doc:`customizing_model_assignment` will go into more detail on how one can even define a completely customized
model or add their own implementation.
In the real world, the data comes as an opaque stream of values, where we typically don't know how one
variable influences another. The graphical causal models can help us to deconstruct these causal
relationships again, even though we didn't know them before.
Step 2: Fitting the SCM to the data
-----------------------------------
With the data at hand and the graph constructed earlier, we can now train the SCM using ``fit``:
>>> gcm.fit(causal_model, data)
Fitting means we learn the generative models of the variables in the SCM according to the data.
Step 3: Answering a causal query based on the SCM
-------------------------------------------------
The last step, answering a causal question, is our actual goal. E.g. we could ask the question:
What will happen to the variable Z if I intervene on Y?
This can be done via the ``interventional_samples`` function. Here's how:
>>> samples = gcm.interventional_samples(causal_model,
>>> {'Y': lambda y: 2.34 },
>>> num_samples_to_draw=1000)
>>> samples.head()
X Y Z
0 1.186229 6.918607 20.682375
1 -0.758809 -0.749365 -2.530045
2 -1.177379 -5.678514 -17.110836
3 -1.211356 -2.152073 -6.212703
4 -0.100224 -0.285047 0.256471
This intervention says: "I'll ignore any causal effects of X on Y, and set every value of Y
to 2.34." So the distribution of X will remain unchanged, whereas values of Y will be at a fixed
value and Z will respond according to its causal model.
These are the basic steps that need to happen. While we can run these steps explicitly, often they get
executed as part of other steps, e.g. when fitting and re-fitting as part of computing confidence
intervals. The next section therefore dives into a more typical usage pattern of the ``dowhy.gcm`` package.
Typical usage of the ``dowhy.gcm`` package
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In practice, we may not execute the steps we've learned above explicitly and in this order; instead, they
are called by other abstractions. For example, when answering causal questions, we usually want to use
confidence intervals to quantify our confidence in the results. In this case, ``fit`` will be called on
our behalf, but we won't call it directly.
Modeling an SCM
---------------
The graph modeling is the same as in `Step 1: Modeling cause-effect relationships as a structural
causal model (SCM)`_. First we construct the causal graph:
>>> causal_model = gcm.StructuralCausalModel(nx.DiGraph([('X', 'Y'), ('Y', 'Z')])) # X → Y → Z
Answering a causal query with confidence intervals
--------------------------------------------------
When we answer a causal query without computing its confidence intervals, what we effectively get
are point estimates. These are not very useful when trying to assess the confidence in our results.
Instead of calling ``fit`` explicitly, we can achieve its execution by going through the API for
confidence intervals. Let's say we wanted to understand the direct arrow strengths between nodes and
quantify our confidence in those results. This is how we would do it:
>>> strength_median, strength_intervals = gcm.confidence_intervals(
>>> gcm.bootstrap_training_and_sampling(gcm.direct_arrow_strength,
>>> causal_model,
>>> bootstrap_training_data=data,
>>> target_node='Y'))
>>> strength_median, strength_intervals
({('X', 'Y'): 45.90886398636573, ('Z', 'Y'): 15.47129383737619},
{('X', 'Y'): array([42.88319632, 50.43890079]), ('Z', 'Y'): array([13.44202416, 17.74266107])})
In this case, ``fit`` will be called within ``bootstrap_training_and_sampling``, so there is no need
to do this ourselves.
The calling sequence of ``confidence_intervals`` and ``bootstrap_training_and_sampling`` is not
trivial, but exploits the fact that our APIs are composable. If not everything makes sense
to you yet, we recommend simply treating this calling sequence as a ready-to-use construct. Read:
"Get confidence intervals via bootstrapping training and sampling of direct arrow strength". For a
deeper understanding of this construct, see section :doc:`estimating_confidence_intervals`.
With this knowledge, we can now dive deep into the meaning and usages of causal queries in section
:doc:`answering_causal_questions/index`.
py-why/dowhy | 625 | Overhaul GCM introduction | * Remove bibtex from GCM introduction. This is covered in docs/source/cite.rst.
* Add 'Typical usage' section in GCM introduction | null | 2022-09-05 14:12:41+00:00 | 2022-11-02 10:01:42+00:00 | docs/source/user_guide/gcm_based_inference/introduction.rst | Introduction
============
Graphical causal model-based inference, or GCM-based inference for short, is an experimental addition to DoWhy, that
currently works separately from DoWhy's main API. Its experimental status also means that its API may
undergo breaking changes in the future. It will be forming a part of a joint,
`new API <https://github.com/py-why/dowhy/wiki/API-proposal-for-v1>`_. We welcome your comments.
The ``dowhy.gcm`` package provides a variety of ways to answer causal questions and we'll go through them in detail in
section :doc:`answering_causal_questions/index`. However, before diving into them, let's understand
the basic building blocks and usage patterns it is built upon.
The basic building blocks
^^^^^^^^^^^^^^^^^^^^^^^^^
All main features of the GCM-based inference in DoWhy are built around the concept of **graphical causal models**. A
graphical causal model consists of a causal direct acyclic graph (DAG) of variables and a **causal mechanism** for
each of the variables. A causal mechanism defines the conditional distribution of a variable given its parents in the
graph, or, in case of root node variables, simply its distribution.
The most general case of a GCM is a **probabilistic causal model** (PCM), where causal mechanisms are defined by
**conditional stochastic models** and **stochastic models**. In the ``dowhy.gcm`` package, these are represented by
:class:`~ProbabilisticCausalModel`, :class:`~ConditionalStochasticModel`, and :class:`~StochasticModel`.
.. image:: pcm.png
:width: 80%
:align: center
|
In practical terms however, we often use **structural causal models** (SCMs) to represent our GCMs,
and the causal mechanisms are defined by **functional causal models** (FCMs) for non-root nodes and **stochastic
models** for root nodes. An SCM implements the same traits as a PCM, but on top of that, its FCMs allow us to
reason *further* about its data generation process based on parents and noise, and hence, allow us e.g. to compute
counterfactuals.
.. image:: scm.png
:width: 80%
:align: center
|
To keep this introduction simple, we will stick with SCMs for now.
As mentioned above, a causal mechanism describes how the values of a node are influenced by the values of its parent
nodes. We will dive much deeper into the details of causal mechanisms and their meaning in section
:doc:`customizing_model_assignment`. But for this introduction, we will treat them as an opaque thing that is needed
to answer causal questions. With that in mind, the typical steps involved in answering a causal question, are:
1. **Modeling cause-effect relationships as a GCM (causal graph + causal mechanisms):**
::
causal_model = StructuralCausalModel(nx.DiGraph([('X', 'Y'), ('Y', 'Z')])) # X -> Y -> Z
auto.assign_causal_mechanisms(causal_model, data)
Or manually assign causal mechanisms:
::
causal_model.set_causal_mechanism('X', EmpiricalDistribution())
causal_model.set_causal_mechanism('Y', AdditiveNoiseModel(create_linear_regressor()))
causal_model.set_causal_mechanism('Z', AdditiveNoiseModel(create_linear_regressor()))
2. **Fitting the GCM to the data:**
::
fit(causal_model, data)
3. **Answering a causal query based on the GCM:**
::
results = <causal_query>(causal_model, ...)
Where ``<causal_query>`` can be one of multiple functions explained in
:doc:`answering_causal_questions/index`.
Let's look at each of these steps in more detail.
Step 1: Modeling cause-effect relationships as a structural causal model (SCM)
------------------------------------------------------------------------------
The first step is to model the cause-effect relationships between variables relevant
to our use case. We do that in form of a causal graph. A causal graph is a directed acyclic
graph (DAG) where an edge X→Y implies that X causes Y. Statistically, a causal graph encodes the
conditional independence relations between variables. Using the `networkx <https://networkx
.github.io/>`__ library, we can create causal graphs. In the snippet below, we create a chain
X→Y→Z:
>>> import networkx as nx
>>> causal_graph = nx.DiGraph([('X', 'Y'), ('Y', 'Z')])
To answer causal questions using causal graphs, we also have to know the nature of underlying
data-generating process of variables. A causal graph by itself, being a diagram, does not have
any information about the data-generating process. To introduce this data-generating process, we use an SCM that's
built on top of our causal graph:
>>> from dowhy import gcm
>>> causal_model = gcm.StructuralCausalModel(causal_graph)
At this point we would normally load our dataset. For this introduction, we generate
some synthetic data instead. The API takes data in form of Pandas DataFrames:
>>> import numpy as np, pandas as pd
>>> X = np.random.normal(loc=0, scale=1, size=1000)
>>> Y = 2 * X + np.random.normal(loc=0, scale=1, size=1000)
>>> Z = 3 * Y + np.random.normal(loc=0, scale=1, size=1000)
>>> data = pd.DataFrame(data=dict(X=X, Y=Y, Z=Z))
>>> data.head()
X Y Z
0 -2.253500 -3.638579 -10.370047
1 -1.078337 -2.114581 -6.028030
2 -0.962719 -2.157896 -5.750563
3 -0.300316 -0.440721 -2.619954
4 0.127419 0.158185 1.555927
Note how the columns X, Y, Z correspond to our nodes X, Y, Z in the graph constructed above. We can also see how the
values of X influence the values of Y and how the values of Y influence the values of Z in that data set.
The causal model created above allows us now to assign causal mechanisms to each node in the form of functional causal
models. Here, these mechanism can either be assigned manually if, for instance, prior knowledge about certain causal
relationships are known or they can be assigned automatically using the :mod:`~dowhy.gcm.auto` module. For the latter,
we simply call:
>>> gcm.auto.assign_causal_mechanisms(causal_model, data)
In case we want to have more control over the assigned mechanisms, we can do this manually as well. For instance, we can
can assign an empirical distribution to the root node X and linear additive noise models to nodes Y and Z:
>>> causal_model.set_causal_mechanism('X', gcm.EmpiricalDistribution())
>>> causal_model.set_causal_mechanism('Y', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
>>> causal_model.set_causal_mechanism('Z', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
Section :doc:`customizing_model_assignment` will go into more detail on how one can even define a completely customized
model or add their own implementation.
In the real world, the data comes as an opaque stream of values, where we typically don't know how one
variable influences another. The graphical causal models can help us to deconstruct these causal
relationships again, even though we didn't know them before.
Step 2: Fitting the SCM to the data
-----------------------------------
With the data at hand and the graph constructed earlier, we can now train the SCM using ``fit``:
>>> gcm.fit(causal_model, data)
Fitting means, we learn the generative models of the variables in the SCM according to the data.
Step 3: Answering a causal query based on the SCM
-------------------------------------------------
The last step, answering a causal question, is our actual goal. E.g. we could ask the question:
What will happen to the variable Z if I intervene on Y?
This can be done via the ``interventional_samples`` function. Here's how:
>>> samples = gcm.interventional_samples(causal_model,
>>> {'Y': lambda y: 2.34 },
>>> num_samples_to_draw=1000)
>>> samples.head()
X Y Z
0 1.186229 6.918607 20.682375
1 -0.758809 -0.749365 -2.530045
2 -1.177379 -5.678514 -17.110836
3 -1.211356 -2.152073 -6.212703
4 -0.100224 -0.285047 0.256471
This intervention says: "I'll ignore any causal effects of X on Y, and set every value of Y
to 2.34." So the distribution of X will remain unchanged, whereas values of Y will be at a fixed
value and Z will respond according to its causal model.
With this knowledge, we can now dive deep into the meaning and usages of causal queries in section
:doc:`answering_causal_questions/index`.
| Introduction
============
Graphical causal model-based inference, or GCM-based inference for short, is an experimental addition to DoWhy, that
currently works separately from DoWhy's main API. Its experimental status also means that its API may
undergo breaking changes in the future. It will be forming a part of a joint,
`new API <https://github.com/py-why/dowhy/wiki/API-proposal-for-v1>`_. We welcome your comments.
The ``dowhy.gcm`` package provides a variety of ways to answer causal questions and we'll go through them in detail in
section :doc:`answering_causal_questions/index`. However, before diving into them, let's understand
the basic building blocks and usage patterns it is built upon.
The basic building blocks
^^^^^^^^^^^^^^^^^^^^^^^^^
All main features of the GCM-based inference in DoWhy are built around the concept of **graphical causal models**. A
graphical causal model consists of a causal direct acyclic graph (DAG) of variables and a **causal mechanism** for
each of the variables. A causal mechanism defines the conditional distribution of a variable given its parents in the
graph, or, in case of root node variables, simply its distribution.
The most general case of a GCM is a **probabilistic causal model** (PCM), where causal mechanisms are defined by
**conditional stochastic models** and **stochastic models**. In the ``dowhy.gcm`` package, these are represented by
:class:`~ProbabilisticCausalModel`, :class:`~ConditionalStochasticModel`, and :class:`~StochasticModel`.
.. image:: pcm.png
:width: 80%
:align: center
|
In practical terms however, we often use **structural causal models** (SCMs) to represent our GCMs,
and the causal mechanisms are defined by **functional causal models** (FCMs) for non-root nodes and **stochastic
models** for root nodes. An SCM implements the same traits as a PCM, but on top of that, its FCMs allow us to
reason *further* about its data generation process based on parents and noise, and hence, allow us e.g. to compute
counterfactuals.
.. image:: scm.png
:width: 80%
:align: center
|
To keep this introduction simple, we will stick with SCMs for now.
As mentioned above, a causal mechanism describes how the values of a node are influenced by the values of its parent
nodes. We will dive much deeper into the details of causal mechanisms and their meaning in section
:doc:`customizing_model_assignment`. But for this introduction, we will treat them as an opaque thing that is needed
to answer causal questions. With that in mind, the typical steps involved in answering a causal question, are:
1. **Modeling cause-effect relationships as a GCM (causal graph + causal mechanisms):**
::
causal_model = StructuralCausalModel(nx.DiGraph([('X', 'Y'), ('Y', 'Z')])) # X -> Y -> Z
auto.assign_causal_mechanisms(causal_model, data)
Or manually assign causal mechanisms:
::
causal_model.set_causal_mechanism('X', EmpiricalDistribution())
causal_model.set_causal_mechanism('Y', AdditiveNoiseModel(create_linear_regressor()))
causal_model.set_causal_mechanism('Z', AdditiveNoiseModel(create_linear_regressor()))
2. **Fitting the GCM to the data:**
::
fit(causal_model, data)
3. **Answering a causal query based on the GCM:**
::
results = <causal_query>(causal_model, ...)
Where ``<causal_query>`` can be one of multiple functions explained in
:doc:`answering_causal_questions/index`.
Let's look at each of these steps in more detail.
Step 1: Modeling cause-effect relationships as a structural causal model (SCM)
------------------------------------------------------------------------------
The first step is to model the cause-effect relationships between variables relevant
to our use case. We do that in form of a causal graph. A causal graph is a directed acyclic
graph (DAG) where an edge X→Y implies that X causes Y. Statistically, a causal graph encodes the
conditional independence relations between variables. Using the `networkx <https://networkx
.github.io/>`__ library, we can create causal graphs. In the snippet below, we create a chain
X→Y→Z:
>>> import networkx as nx
>>> causal_graph = nx.DiGraph([('X', 'Y'), ('Y', 'Z')])
To answer causal questions using causal graphs, we also have to know the nature of underlying
data-generating process of variables. A causal graph by itself, being a diagram, does not have
any information about the data-generating process. To introduce this data-generating process, we use an SCM that's
built on top of our causal graph:
>>> from dowhy import gcm
>>> causal_model = gcm.StructuralCausalModel(causal_graph)
At this point we would normally load our dataset. For this introduction, we generate
some synthetic data instead. The API takes data in form of Pandas DataFrames:
>>> import numpy as np, pandas as pd
>>> X = np.random.normal(loc=0, scale=1, size=1000)
>>> Y = 2 * X + np.random.normal(loc=0, scale=1, size=1000)
>>> Z = 3 * Y + np.random.normal(loc=0, scale=1, size=1000)
>>> data = pd.DataFrame(data=dict(X=X, Y=Y, Z=Z))
>>> data.head()
X Y Z
0 -2.253500 -3.638579 -10.370047
1 -1.078337 -2.114581 -6.028030
2 -0.962719 -2.157896 -5.750563
3 -0.300316 -0.440721 -2.619954
4 0.127419 0.158185 1.555927
Note how the columns X, Y, Z correspond to our nodes X, Y, Z in the graph constructed above. We can also see how the
values of X influence the values of Y and how the values of Y influence the values of Z in that data set.
The causal model created above allows us now to assign causal mechanisms to each node in the form of functional causal
models. Here, these mechanism can either be assigned manually if, for instance, prior knowledge about certain causal
relationships are known or they can be assigned automatically using the :mod:`~dowhy.gcm.auto` module. For the latter,
we simply call:
>>> gcm.auto.assign_causal_mechanisms(causal_model, data)
In case we want to have more control over the assigned mechanisms, we can do this manually as well. For instance, we can
can assign an empirical distribution to the root node X and linear additive noise models to nodes Y and Z:
>>> causal_model.set_causal_mechanism('X', gcm.EmpiricalDistribution())
>>> causal_model.set_causal_mechanism('Y', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
>>> causal_model.set_causal_mechanism('Z', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
Section :doc:`customizing_model_assignment` will go into more detail on how one can even define a completely customized
model or add their own implementation.
In the real world, the data comes as an opaque stream of values, where we typically don't know how one
variable influences another. The graphical causal models can help us to deconstruct these causal
relationships again, even though we didn't know them before.
Step 2: Fitting the SCM to the data
-----------------------------------
With the data at hand and the graph constructed earlier, we can now train the SCM using ``fit``:
>>> gcm.fit(causal_model, data)
Fitting means, we learn the generative models of the variables in the SCM according to the data.
Step 3: Answering a causal query based on the SCM
-------------------------------------------------
The last step, answering a causal question, is our actual goal. E.g. we could ask the question:
What will happen to the variable Z if I intervene on Y?
This can be done via the ``interventional_samples`` function. Here's how:
>>> samples = gcm.interventional_samples(causal_model,
>>> {'Y': lambda y: 2.34 },
>>> num_samples_to_draw=1000)
>>> samples.head()
X Y Z
0 1.186229 6.918607 20.682375
1 -0.758809 -0.749365 -2.530045
2 -1.177379 -5.678514 -17.110836
3 -1.211356 -2.152073 -6.212703
4 -0.100224 -0.285047 0.256471
This intervention says: "I'll ignore any causal effects of X on Y, and set every value of Y
to 2.34." So the distribution of X will remain unchanged, whereas values of Y will be at a fixed
value and Z will respond according to its causal model.
These are the basic steps that need to happen. While we can run these steps explicitly, often they get
executed as part of other steps, e.g. when fitting and re-fitting as part of computing confidence
intervals. The next section therefore dives into a more typical usage pattern of the ``dowhy.gcm`` package.
Typical usage of the ``dowhy.gcm`` package
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In practice, we may not execute the steps we've learned above explicitly and in this order, but they
get called by other abstractions. E.g. we usually want to use confidence intervals, when answering
causal questions, to quantify our confidence in the results. In this case, ``fit`` will be called on
behalf of us, but we won't call it directly.
Modeling an SCM
---------------
The graph modeling is the same as in `Step 1: Modeling cause-effect relationships as a structural
causal model (SCM)`_. First we construct the causal graph:
>>> causal_model = gcm.StructuralCausalModel(nx.DiGraph([('X', 'Y'), ('Y', 'Z')])) # X → Y → Z
Answering a causal query with confidence intervals
--------------------------------------------------
When we answer a causal query without computing its confidence intervals, what we effectively get,
are point estimates. These are not very useful when trying to assess the confidence in our results.
Instead of calling ``fit`` explicitly, we can achieve its execution by going through the API for
confidence intervals. Let's say we wanted to understand the direct arrow strengths between nodes and
quantify our confidence in those results. This is how we would do it:
>>> strength_median, strength_intervals = gcm.confidence_intervals(
>>> gcm.bootstrap_training_and_sampling(gcm.direct_arrow_strength,
>>> causal_model,
>>> bootstrap_training_data=data,
>>> target_node='Y'))
>>> strength_median, strength_intervals
({('X', 'Y'): 45.90886398636573, ('Z', 'Y'): 15.47129383737619},
{('X', 'Y'): array([42.88319632, 50.43890079]), ('Z', 'Y'): array([13.44202416, 17.74266107])})
In this case, ``fit`` will be called within ``bootstrap_training_and_sampling``, so there is no need
to do this ourselves.
The calling sequence of ``confidence_intervals`` and ``bootstrap_training_and_sampling`` is not
trivial, but exploits the fact our APIs are composable. If not everything makes sense
to you yet, we recommend to simply treat this calling sequence as a ready-to-use construct. Read:
"Get confidence intervals via bootstrapping training and sampling of direct arrow strength". For a
deeper understanding of this construct, see section :doc:`estimating_confidence_intervals`.
With this knowledge, we can now dive deep into the meaning and usages of causal queries in section
:doc:`answering_causal_questions/index`.
| petergtz | b43ce50d7cf58420b88605531b34b5c86f905112 | 560b3460aaf1106f7a053f99ede24cfed094db6f | The discussion we had was to give a clear answer whether the graph is "correct", while we can only clearly say "no" if we `REJECTED` it. However, even if the graph is completely correct, there is a (very high) chance that we still return `REJECTED` here. At least, even in our artificial notebook examples, its often rejected although the data perfectly follows the graph structure. | bloebp | 327 |
py-why/dowhy | 625 | Overhaul GCM introduction | * Remove bibtex from GCM introduction. This is covered in docs/source/cite.rst.
* Add 'Typical usage' section in GCM introduction | null | 2022-09-05 14:12:41+00:00 | 2022-11-02 10:01:42+00:00 | docs/source/user_guide/gcm_based_inference/introduction.rst | Introduction
============
Graphical causal model-based inference, or GCM-based inference for short, is an experimental addition to DoWhy, that
currently works separately from DoWhy's main API. Its experimental status also means that its API may
undergo breaking changes in the future. It will be forming a part of a joint,
`new API <https://github.com/py-why/dowhy/wiki/API-proposal-for-v1>`_. We welcome your comments.
The ``dowhy.gcm`` package provides a variety of ways to answer causal questions and we'll go through them in detail in
section :doc:`answering_causal_questions/index`. However, before diving into them, let's understand
the basic building blocks and usage patterns it is built upon.
The basic building blocks
^^^^^^^^^^^^^^^^^^^^^^^^^
All main features of the GCM-based inference in DoWhy are built around the concept of **graphical causal models**. A
graphical causal model consists of a causal direct acyclic graph (DAG) of variables and a **causal mechanism** for
each of the variables. A causal mechanism defines the conditional distribution of a variable given its parents in the
graph, or, in case of root node variables, simply its distribution.
The most general case of a GCM is a **probabilistic causal model** (PCM), where causal mechanisms are defined by
**conditional stochastic models** and **stochastic models**. In the ``dowhy.gcm`` package, these are represented by
:class:`~ProbabilisticCausalModel`, :class:`~ConditionalStochasticModel`, and :class:`~StochasticModel`.
.. image:: pcm.png
:width: 80%
:align: center
|
In practical terms however, we often use **structural causal models** (SCMs) to represent our GCMs,
and the causal mechanisms are defined by **functional causal models** (FCMs) for non-root nodes and **stochastic
models** for root nodes. An SCM implements the same traits as a PCM, but on top of that, its FCMs allow us to
reason *further* about its data generation process based on parents and noise, and hence, allow us e.g. to compute
counterfactuals.
.. image:: scm.png
:width: 80%
:align: center
|
To keep this introduction simple, we will stick with SCMs for now.
As mentioned above, a causal mechanism describes how the values of a node are influenced by the values of its parent
nodes. We will dive much deeper into the details of causal mechanisms and their meaning in section
:doc:`customizing_model_assignment`. But for this introduction, we will treat them as an opaque thing that is needed
to answer causal questions. With that in mind, the typical steps involved in answering a causal question, are:
1. **Modeling cause-effect relationships as a GCM (causal graph + causal mechanisms):**
::
causal_model = StructuralCausalModel(nx.DiGraph([('X', 'Y'), ('Y', 'Z')])) # X -> Y -> Z
auto.assign_causal_mechanisms(causal_model, data)
Or manually assign causal mechanisms:
::
causal_model.set_causal_mechanism('X', EmpiricalDistribution())
causal_model.set_causal_mechanism('Y', AdditiveNoiseModel(create_linear_regressor()))
causal_model.set_causal_mechanism('Z', AdditiveNoiseModel(create_linear_regressor()))
2. **Fitting the GCM to the data:**
::
fit(causal_model, data)
3. **Answering a causal query based on the GCM:**
::
results = <causal_query>(causal_model, ...)
Where ``<causal_query>`` can be one of multiple functions explained in
:doc:`answering_causal_questions/index`.
Let's look at each of these steps in more detail.
Step 1: Modeling cause-effect relationships as a structural causal model (SCM)
------------------------------------------------------------------------------
The first step is to model the cause-effect relationships between variables relevant
to our use case. We do that in form of a causal graph. A causal graph is a directed acyclic
graph (DAG) where an edge X→Y implies that X causes Y. Statistically, a causal graph encodes the
conditional independence relations between variables. Using the `networkx <https://networkx
.github.io/>`__ library, we can create causal graphs. In the snippet below, we create a chain
X→Y→Z:
>>> import networkx as nx
>>> causal_graph = nx.DiGraph([('X', 'Y'), ('Y', 'Z')])
To answer causal questions using causal graphs, we also have to know the nature of underlying
data-generating process of variables. A causal graph by itself, being a diagram, does not have
any information about the data-generating process. To introduce this data-generating process, we use an SCM that's
built on top of our causal graph:
>>> from dowhy import gcm
>>> causal_model = gcm.StructuralCausalModel(causal_graph)
At this point we would normally load our dataset. For this introduction, we generate
some synthetic data instead. The API takes data in form of Pandas DataFrames:
>>> import numpy as np, pandas as pd
>>> X = np.random.normal(loc=0, scale=1, size=1000)
>>> Y = 2 * X + np.random.normal(loc=0, scale=1, size=1000)
>>> Z = 3 * Y + np.random.normal(loc=0, scale=1, size=1000)
>>> data = pd.DataFrame(data=dict(X=X, Y=Y, Z=Z))
>>> data.head()
X Y Z
0 -2.253500 -3.638579 -10.370047
1 -1.078337 -2.114581 -6.028030
2 -0.962719 -2.157896 -5.750563
3 -0.300316 -0.440721 -2.619954
4 0.127419 0.158185 1.555927
Note how the columns X, Y, Z correspond to our nodes X, Y, Z in the graph constructed above. We can also see how the
values of X influence the values of Y and how the values of Y influence the values of Z in that data set.
The causal model created above allows us now to assign causal mechanisms to each node in the form of functional causal
models. Here, these mechanism can either be assigned manually if, for instance, prior knowledge about certain causal
relationships are known or they can be assigned automatically using the :mod:`~dowhy.gcm.auto` module. For the latter,
we simply call:
>>> gcm.auto.assign_causal_mechanisms(causal_model, data)
In case we want to have more control over the assigned mechanisms, we can do this manually as well. For instance, we can
can assign an empirical distribution to the root node X and linear additive noise models to nodes Y and Z:
>>> causal_model.set_causal_mechanism('X', gcm.EmpiricalDistribution())
>>> causal_model.set_causal_mechanism('Y', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
>>> causal_model.set_causal_mechanism('Z', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
Section :doc:`customizing_model_assignment` will go into more detail on how one can even define a completely customized
model or add their own implementation.
In the real world, the data comes as an opaque stream of values, where we typically don't know how one
variable influences another. The graphical causal models can help us to deconstruct these causal
relationships again, even though we didn't know them before.
Step 2: Fitting the SCM to the data
-----------------------------------
With the data at hand and the graph constructed earlier, we can now train the SCM using ``fit``:
>>> gcm.fit(causal_model, data)
Fitting means, we learn the generative models of the variables in the SCM according to the data.
Step 3: Answering a causal query based on the SCM
-------------------------------------------------
The last step, answering a causal question, is our actual goal. E.g. we could ask the question:
What will happen to the variable Z if I intervene on Y?
This can be done via the ``interventional_samples`` function. Here's how:
>>> samples = gcm.interventional_samples(causal_model,
>>> {'Y': lambda y: 2.34 },
>>> num_samples_to_draw=1000)
>>> samples.head()
X Y Z
0 1.186229 6.918607 20.682375
1 -0.758809 -0.749365 -2.530045
2 -1.177379 -5.678514 -17.110836
3 -1.211356 -2.152073 -6.212703
4 -0.100224 -0.285047 0.256471
This intervention says: "I'll ignore any causal effects of X on Y, and set every value of Y
to 2.34." So the distribution of X will remain unchanged, whereas values of Y will be at a fixed
value and Z will respond according to its causal model.
With this knowledge, we can now dive deep into the meaning and usages of causal queries in section
:doc:`answering_causal_questions/index`.
| Introduction
============
Graphical causal model-based inference, or GCM-based inference for short, is an experimental addition to DoWhy, that
currently works separately from DoWhy's main API. Its experimental status also means that its API may
undergo breaking changes in the future. It will be forming a part of a joint,
`new API <https://github.com/py-why/dowhy/wiki/API-proposal-for-v1>`_. We welcome your comments.
The ``dowhy.gcm`` package provides a variety of ways to answer causal questions and we'll go through them in detail in
section :doc:`answering_causal_questions/index`. However, before diving into them, let's understand
the basic building blocks and usage patterns it is built upon.
The basic building blocks
^^^^^^^^^^^^^^^^^^^^^^^^^
All main features of the GCM-based inference in DoWhy are built around the concept of **graphical causal models**. A
graphical causal model consists of a causal direct acyclic graph (DAG) of variables and a **causal mechanism** for
each of the variables. A causal mechanism defines the conditional distribution of a variable given its parents in the
graph, or, in case of root node variables, simply its distribution.
The most general case of a GCM is a **probabilistic causal model** (PCM), where causal mechanisms are defined by
**conditional stochastic models** and **stochastic models**. In the ``dowhy.gcm`` package, these are represented by
:class:`~ProbabilisticCausalModel`, :class:`~ConditionalStochasticModel`, and :class:`~StochasticModel`.
.. image:: pcm.png
:width: 80%
:align: center
|
In practical terms however, we often use **structural causal models** (SCMs) to represent our GCMs,
and the causal mechanisms are defined by **functional causal models** (FCMs) for non-root nodes and **stochastic
models** for root nodes. An SCM implements the same traits as a PCM, but on top of that, its FCMs allow us to
reason *further* about its data generation process based on parents and noise, and hence, allow us e.g. to compute
counterfactuals.
.. image:: scm.png
:width: 80%
:align: center
|
To keep this introduction simple, we will stick with SCMs for now.
As mentioned above, a causal mechanism describes how the values of a node are influenced by the values of its parent
nodes. We will dive much deeper into the details of causal mechanisms and their meaning in section
:doc:`customizing_model_assignment`. But for this introduction, we will treat them as an opaque thing that is needed
to answer causal questions. With that in mind, the typical steps involved in answering a causal question, are:
1. **Modeling cause-effect relationships as a GCM (causal graph + causal mechanisms):**
::
causal_model = StructuralCausalModel(nx.DiGraph([('X', 'Y'), ('Y', 'Z')])) # X -> Y -> Z
auto.assign_causal_mechanisms(causal_model, data)
Or manually assign causal mechanisms:
::
causal_model.set_causal_mechanism('X', EmpiricalDistribution())
causal_model.set_causal_mechanism('Y', AdditiveNoiseModel(create_linear_regressor()))
causal_model.set_causal_mechanism('Z', AdditiveNoiseModel(create_linear_regressor()))
2. **Fitting the GCM to the data:**
::
fit(causal_model, data)
3. **Answering a causal query based on the GCM:**
::
results = <causal_query>(causal_model, ...)
Where ``<causal_query>`` can be one of multiple functions explained in
:doc:`answering_causal_questions/index`.
Let's look at each of these steps in more detail.
Step 1: Modeling cause-effect relationships as a structural causal model (SCM)
------------------------------------------------------------------------------
The first step is to model the cause-effect relationships between variables relevant
to our use case. We do that in form of a causal graph. A causal graph is a directed acyclic
graph (DAG) where an edge X→Y implies that X causes Y. Statistically, a causal graph encodes the
conditional independence relations between variables. Using the `networkx <https://networkx
.github.io/>`__ library, we can create causal graphs. In the snippet below, we create a chain
X→Y→Z:
>>> import networkx as nx
>>> causal_graph = nx.DiGraph([('X', 'Y'), ('Y', 'Z')])
To answer causal questions using causal graphs, we also have to know the nature of underlying
data-generating process of variables. A causal graph by itself, being a diagram, does not have
any information about the data-generating process. To introduce this data-generating process, we use an SCM that's
built on top of our causal graph:
>>> from dowhy import gcm
>>> causal_model = gcm.StructuralCausalModel(causal_graph)
At this point we would normally load our dataset. For this introduction, we generate
some synthetic data instead. The API takes data in form of Pandas DataFrames:
>>> import numpy as np, pandas as pd
>>> X = np.random.normal(loc=0, scale=1, size=1000)
>>> Y = 2 * X + np.random.normal(loc=0, scale=1, size=1000)
>>> Z = 3 * Y + np.random.normal(loc=0, scale=1, size=1000)
>>> data = pd.DataFrame(data=dict(X=X, Y=Y, Z=Z))
>>> data.head()
X Y Z
0 -2.253500 -3.638579 -10.370047
1 -1.078337 -2.114581 -6.028030
2 -0.962719 -2.157896 -5.750563
3 -0.300316 -0.440721 -2.619954
4 0.127419 0.158185 1.555927
Note how the columns X, Y, Z correspond to our nodes X, Y, Z in the graph constructed above. We can also see how the
values of X influence the values of Y and how the values of Y influence the values of Z in that data set.
The causal model created above allows us now to assign causal mechanisms to each node in the form of functional causal
models. Here, these mechanism can either be assigned manually if, for instance, prior knowledge about certain causal
relationships are known or they can be assigned automatically using the :mod:`~dowhy.gcm.auto` module. For the latter,
we simply call:
>>> gcm.auto.assign_causal_mechanisms(causal_model, data)
In case we want to have more control over the assigned mechanisms, we can do this manually as well. For instance, we can
can assign an empirical distribution to the root node X and linear additive noise models to nodes Y and Z:
>>> causal_model.set_causal_mechanism('X', gcm.EmpiricalDistribution())
>>> causal_model.set_causal_mechanism('Y', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
>>> causal_model.set_causal_mechanism('Z', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
Section :doc:`customizing_model_assignment` will go into more detail on how one can even define a completely customized
model or add their own implementation.
In the real world, the data comes as an opaque stream of values, where we typically don't know how one
variable influences another. The graphical causal models can help us to deconstruct these causal
relationships again, even though we didn't know them before.
Step 2: Fitting the SCM to the data
-----------------------------------
With the data at hand and the graph constructed earlier, we can now train the SCM using ``fit``:
>>> gcm.fit(causal_model, data)
Fitting means, we learn the generative models of the variables in the SCM according to the data.
Step 3: Answering a causal query based on the SCM
-------------------------------------------------
The last step, answering a causal question, is our actual goal. E.g. we could ask the question:
What will happen to the variable Z if I intervene on Y?
This can be done via the ``interventional_samples`` function. Here's how:
>>> samples = gcm.interventional_samples(causal_model,
>>> {'Y': lambda y: 2.34 },
>>> num_samples_to_draw=1000)
>>> samples.head()
X Y Z
0 1.186229 6.918607 20.682375
1 -0.758809 -0.749365 -2.530045
2 -1.177379 -5.678514 -17.110836
3 -1.211356 -2.152073 -6.212703
4 -0.100224 -0.285047 0.256471
This intervention says: "I'll ignore any causal effects of X on Y, and set every value of Y
to 2.34." So the distribution of X will remain unchanged, whereas values of Y will be at a fixed
value and Z will respond according to its causal model.
These are the basic steps that need to happen. While we can run these steps explicitly, often they get
executed as part of other steps, e.g. when fitting and re-fitting as part of computing confidence
intervals. The next section therefore dives into a more typical usage pattern of the ``dowhy.gcm`` package.
Typical usage of the ``dowhy.gcm`` package
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In practice, we may not execute the steps we've learned above explicitly and in this order, but they
get called by other abstractions. E.g. we usually want to use confidence intervals, when answering
causal questions, to quantify our confidence in the results. In this case, ``fit`` will be called on
behalf of us, but we won't call it directly.
Modeling an SCM
---------------
The graph modeling is the same as in `Step 1: Modeling cause-effect relationships as a structural
causal model (SCM)`_. First we construct the causal graph:
>>> causal_model = gcm.StructuralCausalModel(nx.DiGraph([('X', 'Y'), ('Y', 'Z')])) # X → Y → Z
Answering a causal query with confidence intervals
--------------------------------------------------
When we answer a causal query without computing its confidence intervals, what we effectively get,
are point estimates. These are not very useful when trying to assess the confidence in our results.
Instead of calling ``fit`` explicitly, we can achieve its execution by going through the API for
confidence intervals. Let's say we wanted to understand the direct arrow strengths between nodes and
quantify our confidence in those results. This is how we would do it:
>>> strength_median, strength_intervals = gcm.confidence_intervals(
>>> gcm.bootstrap_training_and_sampling(gcm.direct_arrow_strength,
>>> causal_model,
>>> bootstrap_training_data=data,
>>> target_node='Y'))
>>> strength_median, strength_intervals
({('X', 'Y'): 45.90886398636573, ('Z', 'Y'): 15.47129383737619},
{('X', 'Y'): array([42.88319632, 50.43890079]), ('Z', 'Y'): array([13.44202416, 17.74266107])})
In this case, ``fit`` will be called within ``bootstrap_training_and_sampling``, so there is no need
to do this ourselves.
The calling sequence of ``confidence_intervals`` and ``bootstrap_training_and_sampling`` is not
trivial, but exploits the fact our APIs are composable. If not everything makes sense
to you yet, we recommend to simply treat this calling sequence as a ready-to-use construct. Read:
"Get confidence intervals via bootstrapping training and sampling of direct arrow strength". For a
deeper understanding of this construct, see section :doc:`estimating_confidence_intervals`.
With this knowledge, we can now dive deep into the meaning and usages of causal queries in section
:doc:`answering_causal_questions/index`.
| petergtz | b43ce50d7cf58420b88605531b34b5c86f905112 | 560b3460aaf1106f7a053f99ede24cfed094db6f | Created
- https://github.com/py-why/dowhy/issues/689 | petergtz | 328 |
py-why/dowhy | 625 | Overhaul GCM introduction | * Remove bibtex from GCM introduction. This is covered in docs/source/cite.rst.
* Add 'Typical usage' section in GCM introduction | null | 2022-09-05 14:12:41+00:00 | 2022-11-02 10:01:42+00:00 | docs/source/user_guide/gcm_based_inference/introduction.rst | Introduction
============
Graphical causal model-based inference, or GCM-based inference for short, is an experimental addition to DoWhy, that
currently works separately from DoWhy's main API. Its experimental status also means that its API may
undergo breaking changes in the future. It will be forming a part of a joint,
`new API <https://github.com/py-why/dowhy/wiki/API-proposal-for-v1>`_. We welcome your comments.
The ``dowhy.gcm`` package provides a variety of ways to answer causal questions and we'll go through them in detail in
section :doc:`answering_causal_questions/index`. However, before diving into them, let's understand
the basic building blocks and usage patterns it is built upon.
The basic building blocks
^^^^^^^^^^^^^^^^^^^^^^^^^
All main features of the GCM-based inference in DoWhy are built around the concept of **graphical causal models**. A
graphical causal model consists of a causal direct acyclic graph (DAG) of variables and a **causal mechanism** for
each of the variables. A causal mechanism defines the conditional distribution of a variable given its parents in the
graph, or, in case of root node variables, simply its distribution.
The most general case of a GCM is a **probabilistic causal model** (PCM), where causal mechanisms are defined by
**conditional stochastic models** and **stochastic models**. In the ``dowhy.gcm`` package, these are represented by
:class:`~ProbabilisticCausalModel`, :class:`~ConditionalStochasticModel`, and :class:`~StochasticModel`.
.. image:: pcm.png
:width: 80%
:align: center
|
In practical terms however, we often use **structural causal models** (SCMs) to represent our GCMs,
and the causal mechanisms are defined by **functional causal models** (FCMs) for non-root nodes and **stochastic
models** for root nodes. An SCM implements the same traits as a PCM, but on top of that, its FCMs allow us to
reason *further* about its data generation process based on parents and noise, and hence, allow us e.g. to compute
counterfactuals.
.. image:: scm.png
:width: 80%
:align: center
|
To keep this introduction simple, we will stick with SCMs for now.
As mentioned above, a causal mechanism describes how the values of a node are influenced by the values of its parent
nodes. We will dive much deeper into the details of causal mechanisms and their meaning in section
:doc:`customizing_model_assignment`. But for this introduction, we will treat them as an opaque thing that is needed
to answer causal questions. With that in mind, the typical steps involved in answering a causal question, are:
1. **Modeling cause-effect relationships as a GCM (causal graph + causal mechanisms):**
::
causal_model = StructuralCausalModel(nx.DiGraph([('X', 'Y'), ('Y', 'Z')])) # X -> Y -> Z
auto.assign_causal_mechanisms(causal_model, data)
Or manually assign causal mechanisms:
::
causal_model.set_causal_mechanism('X', EmpiricalDistribution())
causal_model.set_causal_mechanism('Y', AdditiveNoiseModel(create_linear_regressor()))
causal_model.set_causal_mechanism('Z', AdditiveNoiseModel(create_linear_regressor()))
2. **Fitting the GCM to the data:**
::
fit(causal_model, data)
3. **Answering a causal query based on the GCM:**
::
results = <causal_query>(causal_model, ...)
Where ``<causal_query>`` can be one of multiple functions explained in
:doc:`answering_causal_questions/index`.
Let's look at each of these steps in more detail.
Step 1: Modeling cause-effect relationships as a structural causal model (SCM)
------------------------------------------------------------------------------
The first step is to model the cause-effect relationships between variables relevant
to our use case. We do that in form of a causal graph. A causal graph is a directed acyclic
graph (DAG) where an edge X→Y implies that X causes Y. Statistically, a causal graph encodes the
conditional independence relations between variables. Using the `networkx <https://networkx
.github.io/>`__ library, we can create causal graphs. In the snippet below, we create a chain
X→Y→Z:
>>> import networkx as nx
>>> causal_graph = nx.DiGraph([('X', 'Y'), ('Y', 'Z')])
To answer causal questions using causal graphs, we also have to know the nature of underlying
data-generating process of variables. A causal graph by itself, being a diagram, does not have
any information about the data-generating process. To introduce this data-generating process, we use an SCM that's
built on top of our causal graph:
>>> from dowhy import gcm
>>> causal_model = gcm.StructuralCausalModel(causal_graph)
At this point we would normally load our dataset. For this introduction, we generate
some synthetic data instead. The API takes data in form of Pandas DataFrames:
>>> import numpy as np, pandas as pd
>>> X = np.random.normal(loc=0, scale=1, size=1000)
>>> Y = 2 * X + np.random.normal(loc=0, scale=1, size=1000)
>>> Z = 3 * Y + np.random.normal(loc=0, scale=1, size=1000)
>>> data = pd.DataFrame(data=dict(X=X, Y=Y, Z=Z))
>>> data.head()
X Y Z
0 -2.253500 -3.638579 -10.370047
1 -1.078337 -2.114581 -6.028030
2 -0.962719 -2.157896 -5.750563
3 -0.300316 -0.440721 -2.619954
4 0.127419 0.158185 1.555927
Note how the columns X, Y, Z correspond to our nodes X, Y, Z in the graph constructed above. We can also see how the
values of X influence the values of Y and how the values of Y influence the values of Z in that data set.
The causal model created above allows us now to assign causal mechanisms to each node in the form of functional causal
models. Here, these mechanism can either be assigned manually if, for instance, prior knowledge about certain causal
relationships are known or they can be assigned automatically using the :mod:`~dowhy.gcm.auto` module. For the latter,
we simply call:
>>> gcm.auto.assign_causal_mechanisms(causal_model, data)
In case we want to have more control over the assigned mechanisms, we can do this manually as well. For instance, we can
can assign an empirical distribution to the root node X and linear additive noise models to nodes Y and Z:
>>> causal_model.set_causal_mechanism('X', gcm.EmpiricalDistribution())
>>> causal_model.set_causal_mechanism('Y', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
>>> causal_model.set_causal_mechanism('Z', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
Section :doc:`customizing_model_assignment` will go into more detail on how one can even define a completely customized
model or add their own implementation.
In the real world, the data comes as an opaque stream of values, where we typically don't know how one
variable influences another. The graphical causal models can help us to deconstruct these causal
relationships again, even though we didn't know them before.
Step 2: Fitting the SCM to the data
-----------------------------------
With the data at hand and the graph constructed earlier, we can now train the SCM using ``fit``:
>>> gcm.fit(causal_model, data)
Fitting means, we learn the generative models of the variables in the SCM according to the data.
Step 3: Answering a causal query based on the SCM
-------------------------------------------------
The last step, answering a causal question, is our actual goal. E.g. we could ask the question:
What will happen to the variable Z if I intervene on Y?
This can be done via the ``interventional_samples`` function. Here's how:
>>> samples = gcm.interventional_samples(causal_model,
>>> {'Y': lambda y: 2.34 },
>>> num_samples_to_draw=1000)
>>> samples.head()
X Y Z
0 1.186229 6.918607 20.682375
1 -0.758809 -0.749365 -2.530045
2 -1.177379 -5.678514 -17.110836
3 -1.211356 -2.152073 -6.212703
4 -0.100224 -0.285047 0.256471
This intervention says: "I'll ignore any causal effects of X on Y, and set every value of Y
to 2.34." So the distribution of X will remain unchanged, whereas values of Y will be at a fixed
value and Z will respond according to its causal model.
With this knowledge, we can now dive deep into the meaning and usages of causal queries in section
:doc:`answering_causal_questions/index`.
| Introduction
============
Graphical causal model-based inference, or GCM-based inference for short, is an experimental addition to DoWhy, that
currently works separately from DoWhy's main API. Its experimental status also means that its API may
undergo breaking changes in the future. It will be forming a part of a joint,
`new API <https://github.com/py-why/dowhy/wiki/API-proposal-for-v1>`_. We welcome your comments.
The ``dowhy.gcm`` package provides a variety of ways to answer causal questions and we'll go through them in detail in
section :doc:`answering_causal_questions/index`. However, before diving into them, let's understand
the basic building blocks and usage patterns it is built upon.
The basic building blocks
^^^^^^^^^^^^^^^^^^^^^^^^^
All main features of the GCM-based inference in DoWhy are built around the concept of **graphical causal models**. A
graphical causal model consists of a causal direct acyclic graph (DAG) of variables and a **causal mechanism** for
each of the variables. A causal mechanism defines the conditional distribution of a variable given its parents in the
graph, or, in case of root node variables, simply its distribution.
The most general case of a GCM is a **probabilistic causal model** (PCM), where causal mechanisms are defined by
**conditional stochastic models** and **stochastic models**. In the ``dowhy.gcm`` package, these are represented by
:class:`~ProbabilisticCausalModel`, :class:`~ConditionalStochasticModel`, and :class:`~StochasticModel`.
.. image:: pcm.png
:width: 80%
:align: center
|
In practical terms however, we often use **structural causal models** (SCMs) to represent our GCMs,
and the causal mechanisms are defined by **functional causal models** (FCMs) for non-root nodes and **stochastic
models** for root nodes. An SCM implements the same traits as a PCM, but on top of that, its FCMs allow us to
reason *further* about its data generation process based on parents and noise, and hence, allow us e.g. to compute
counterfactuals.
.. image:: scm.png
:width: 80%
:align: center
|
To keep this introduction simple, we will stick with SCMs for now.
As mentioned above, a causal mechanism describes how the values of a node are influenced by the values of its parent
nodes. We will dive much deeper into the details of causal mechanisms and their meaning in section
:doc:`customizing_model_assignment`. But for this introduction, we will treat them as an opaque thing that is needed
to answer causal questions. With that in mind, the typical steps involved in answering a causal question, are:
1. **Modeling cause-effect relationships as a GCM (causal graph + causal mechanisms):**
::
causal_model = StructuralCausalModel(nx.DiGraph([('X', 'Y'), ('Y', 'Z')])) # X -> Y -> Z
auto.assign_causal_mechanisms(causal_model, data)
Or manually assign causal mechanisms:
::
causal_model.set_causal_mechanism('X', EmpiricalDistribution())
causal_model.set_causal_mechanism('Y', AdditiveNoiseModel(create_linear_regressor()))
causal_model.set_causal_mechanism('Z', AdditiveNoiseModel(create_linear_regressor()))
2. **Fitting the GCM to the data:**
::
fit(causal_model, data)
3. **Answering a causal query based on the GCM:**
::
results = <causal_query>(causal_model, ...)
Where ``<causal_query>`` can be one of multiple functions explained in
:doc:`answering_causal_questions/index`.
Let's look at each of these steps in more detail.
Step 1: Modeling cause-effect relationships as a structural causal model (SCM)
------------------------------------------------------------------------------
The first step is to model the cause-effect relationships between variables relevant
to our use case. We do that in form of a causal graph. A causal graph is a directed acyclic
graph (DAG) where an edge X→Y implies that X causes Y. Statistically, a causal graph encodes the
conditional independence relations between variables. Using the `networkx <https://networkx
.github.io/>`__ library, we can create causal graphs. In the snippet below, we create a chain
X→Y→Z:
>>> import networkx as nx
>>> causal_graph = nx.DiGraph([('X', 'Y'), ('Y', 'Z')])
To answer causal questions using causal graphs, we also have to know the nature of underlying
data-generating process of variables. A causal graph by itself, being a diagram, does not have
any information about the data-generating process. To introduce this data-generating process, we use an SCM that's
built on top of our causal graph:
>>> from dowhy import gcm
>>> causal_model = gcm.StructuralCausalModel(causal_graph)
At this point we would normally load our dataset. For this introduction, we generate
some synthetic data instead. The API takes data in form of Pandas DataFrames:
>>> import numpy as np, pandas as pd
>>> X = np.random.normal(loc=0, scale=1, size=1000)
>>> Y = 2 * X + np.random.normal(loc=0, scale=1, size=1000)
>>> Z = 3 * Y + np.random.normal(loc=0, scale=1, size=1000)
>>> data = pd.DataFrame(data=dict(X=X, Y=Y, Z=Z))
>>> data.head()
X Y Z
0 -2.253500 -3.638579 -10.370047
1 -1.078337 -2.114581 -6.028030
2 -0.962719 -2.157896 -5.750563
3 -0.300316 -0.440721 -2.619954
4 0.127419 0.158185 1.555927
Note how the columns X, Y, Z correspond to our nodes X, Y, Z in the graph constructed above. We can also see how the
values of X influence the values of Y and how the values of Y influence the values of Z in that data set.
The causal model created above allows us now to assign causal mechanisms to each node in the form of functional causal
models. Here, these mechanism can either be assigned manually if, for instance, prior knowledge about certain causal
relationships are known or they can be assigned automatically using the :mod:`~dowhy.gcm.auto` module. For the latter,
we simply call:
>>> gcm.auto.assign_causal_mechanisms(causal_model, data)
In case we want to have more control over the assigned mechanisms, we can do this manually as well. For instance, we can
can assign an empirical distribution to the root node X and linear additive noise models to nodes Y and Z:
>>> causal_model.set_causal_mechanism('X', gcm.EmpiricalDistribution())
>>> causal_model.set_causal_mechanism('Y', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
>>> causal_model.set_causal_mechanism('Z', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
Section :doc:`customizing_model_assignment` will go into more detail on how one can even define a completely customized
model or add their own implementation.
In the real world, the data comes as an opaque stream of values, where we typically don't know how one
variable influences another. The graphical causal models can help us to deconstruct these causal
relationships again, even though we didn't know them before.
Step 2: Fitting the SCM to the data
-----------------------------------
With the data at hand and the graph constructed earlier, we can now train the SCM using ``fit``:
>>> gcm.fit(causal_model, data)
Fitting means, we learn the generative models of the variables in the SCM according to the data.
Step 3: Answering a causal query based on the SCM
-------------------------------------------------
The last step, answering a causal question, is our actual goal. E.g. we could ask the question:
What will happen to the variable Z if I intervene on Y?
This can be done via the ``interventional_samples`` function. Here's how:
>>> samples = gcm.interventional_samples(causal_model,
>>> {'Y': lambda y: 2.34 },
>>> num_samples_to_draw=1000)
>>> samples.head()
X Y Z
0 1.186229 6.918607 20.682375
1 -0.758809 -0.749365 -2.530045
2 -1.177379 -5.678514 -17.110836
3 -1.211356 -2.152073 -6.212703
4 -0.100224 -0.285047 0.256471
This intervention says: "I'll ignore any causal effects of X on Y, and set every value of Y
to 2.34." So the distribution of X will remain unchanged, whereas values of Y will be at a fixed
value and Z will respond according to its causal model.
These are the basic steps that need to happen. While we can run these steps explicitly, often they get
executed as part of other steps, e.g. when fitting and re-fitting as part of computing confidence
intervals. The next section therefore dives into a more typical usage pattern of the ``dowhy.gcm`` package.
Typical usage of the ``dowhy.gcm`` package
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In practice, we may not execute the steps we've learned above explicitly and in this order, but they
get called by other abstractions. E.g. we usually want to use confidence intervals, when answering
causal questions, to quantify our confidence in the results. In this case, ``fit`` will be called on
behalf of us, but we won't call it directly.
Modeling an SCM
---------------
The graph modeling is the same as in `Step 1: Modeling cause-effect relationships as a structural
causal model (SCM)`_. First we construct the causal graph:
>>> causal_model = gcm.StructuralCausalModel(nx.DiGraph([('X', 'Y'), ('Y', 'Z')])) # X → Y → Z
Answering a causal query with confidence intervals
--------------------------------------------------
When we answer a causal query without computing its confidence intervals, what we effectively get,
are point estimates. These are not very useful when trying to assess the confidence in our results.
Instead of calling ``fit`` explicitly, we can achieve its execution by going through the API for
confidence intervals. Let's say we wanted to understand the direct arrow strengths between nodes and
quantify our confidence in those results. This is how we would do it:
>>> strength_median, strength_intervals = gcm.confidence_intervals(
>>> gcm.bootstrap_training_and_sampling(gcm.direct_arrow_strength,
>>> causal_model,
>>> bootstrap_training_data=data,
>>> target_node='Y'))
>>> strength_median, strength_intervals
({('X', 'Y'): 45.90886398636573, ('Z', 'Y'): 15.47129383737619},
{('X', 'Y'): array([42.88319632, 50.43890079]), ('Z', 'Y'): array([13.44202416, 17.74266107])})
In this case, ``fit`` will be called within ``bootstrap_training_and_sampling``, so there is no need
to do this ourselves.
The calling sequence of ``confidence_intervals`` and ``bootstrap_training_and_sampling`` is not
trivial, but exploits the fact that our APIs are composable. If not everything makes sense
to you yet, we recommend simply treating this calling sequence as a ready-to-use construct. Read:
"Get confidence intervals via bootstrapping training and sampling of direct arrow strength". For a
deeper understanding of this construct, see section :doc:`estimating_confidence_intervals`.
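To build intuition for what this composed call bundles, here is a rough, non-bootstrapped sketch of the same query: a single ``fit`` followed by a single point estimate. This is illustrative only, and assumes the ``direct_arrow_strength`` query used above also accepts the fitted model and ``target_node`` directly; the composed version simply repeats these two steps many times on resampled data:
>>> gcm.fit(causal_model, data)
>>> point_estimate = gcm.direct_arrow_strength(causal_model, target_node='Y')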
With this knowledge, we can now dive deep into the meaning and usages of causal queries in section
:doc:`answering_causal_questions/index`.
| petergtz | b43ce50d7cf58420b88605531b34b5c86f905112 | 560b3460aaf1106f7a053f99ede24cfed094db6f | We don't do the validation in the introduction anymore, right? | bloebp | 329 |
py-why/dowhy | 625 | Overhaul GCM introduction | * Remove bibtex from GCM introduction. This is covered in docs/source/cite.rst.
* Add 'Typical usage' section in GCM introduction | null | 2022-09-05 14:12:41+00:00 | 2022-11-02 10:01:42+00:00 | docs/source/user_guide/gcm_based_inference/introduction.rst | Introduction
============
Graphical causal model-based inference, or GCM-based inference for short, is an experimental addition to DoWhy that
currently works separately from DoWhy's main API. Its experimental status also means that its API may
undergo breaking changes in the future. It will form part of a joint
`new API <https://github.com/py-why/dowhy/wiki/API-proposal-for-v1>`_. We welcome your comments.
The ``dowhy.gcm`` package provides a variety of ways to answer causal questions and we'll go through them in detail in
section :doc:`answering_causal_questions/index`. However, before diving into them, let's understand
the basic building blocks and usage patterns it is built upon.
The basic building blocks
^^^^^^^^^^^^^^^^^^^^^^^^^
All main features of the GCM-based inference in DoWhy are built around the concept of **graphical causal models**. A
graphical causal model consists of a causal directed acyclic graph (DAG) of variables and a **causal mechanism** for
each of the variables. A causal mechanism defines the conditional distribution of a variable given its parents in the
graph, or, in case of root node variables, simply its distribution.
The most general case of a GCM is a **probabilistic causal model** (PCM), where causal mechanisms are defined by
**conditional stochastic models** and **stochastic models**. In the ``dowhy.gcm`` package, these are represented by
:class:`~ProbabilisticCausalModel`, :class:`~ConditionalStochasticModel`, and :class:`~StochasticModel`.
.. image:: pcm.png
:width: 80%
:align: center
|
In practical terms however, we often use **structural causal models** (SCMs) to represent our GCMs,
and the causal mechanisms are defined by **functional causal models** (FCMs) for non-root nodes and **stochastic
models** for root nodes. An SCM implements the same traits as a PCM, but on top of that, its FCMs allow us to
reason *further* about its data generation process based on parents and noise, and hence, allow us e.g. to compute
counterfactuals.
.. image:: scm.png
:width: 80%
:align: center
|
To keep this introduction simple, we will stick with SCMs for now.
As mentioned above, a causal mechanism describes how the values of a node are influenced by the values of its parent
nodes. We will dive much deeper into the details of causal mechanisms and their meaning in section
:doc:`customizing_model_assignment`. But for this introduction, we will treat them as an opaque thing that is needed
to answer causal questions. With that in mind, the typical steps involved in answering a causal question are:
1. **Modeling cause-effect relationships as a GCM (causal graph + causal mechanisms):**
::
causal_model = StructuralCausalModel(nx.DiGraph([('X', 'Y'), ('Y', 'Z')])) # X -> Y -> Z
auto.assign_causal_mechanisms(causal_model, data)
Or manually assign causal mechanisms:
::
causal_model.set_causal_mechanism('X', EmpiricalDistribution())
causal_model.set_causal_mechanism('Y', AdditiveNoiseModel(create_linear_regressor()))
causal_model.set_causal_mechanism('Z', AdditiveNoiseModel(create_linear_regressor()))
2. **Fitting the GCM to the data:**
::
fit(causal_model, data)
3. **Answering a causal query based on the GCM:**
::
results = <causal_query>(causal_model, ...)
Where ``<causal_query>`` can be one of multiple functions explained in
:doc:`answering_causal_questions/index`.
Let's look at each of these steps in more detail.
Step 1: Modeling cause-effect relationships as a structural causal model (SCM)
------------------------------------------------------------------------------
The first step is to model the cause-effect relationships between variables relevant
to our use case. We do that in form of a causal graph. A causal graph is a directed acyclic
graph (DAG) where an edge X→Y implies that X causes Y. Statistically, a causal graph encodes the
conditional independence relations between variables. Using the `networkx <https://networkx
.github.io/>`__ library, we can create causal graphs. In the snippet below, we create a chain
X→Y→Z:
>>> import networkx as nx
>>> causal_graph = nx.DiGraph([('X', 'Y'), ('Y', 'Z')])
To answer causal questions using causal graphs, we also have to know the nature of the underlying
data-generating process of the variables. A causal graph by itself, being a diagram, does not have
any information about the data-generating process. To introduce this data-generating process, we use an SCM that's
built on top of our causal graph:
>>> from dowhy import gcm
>>> causal_model = gcm.StructuralCausalModel(causal_graph)
At this point we would normally load our dataset. For this introduction, we generate
some synthetic data instead. The API takes data in form of Pandas DataFrames:
>>> import numpy as np, pandas as pd
>>> X = np.random.normal(loc=0, scale=1, size=1000)
>>> Y = 2 * X + np.random.normal(loc=0, scale=1, size=1000)
>>> Z = 3 * Y + np.random.normal(loc=0, scale=1, size=1000)
>>> data = pd.DataFrame(data=dict(X=X, Y=Y, Z=Z))
>>> data.head()
X Y Z
0 -2.253500 -3.638579 -10.370047
1 -1.078337 -2.114581 -6.028030
2 -0.962719 -2.157896 -5.750563
3 -0.300316 -0.440721 -2.619954
4 0.127419 0.158185 1.555927
Note how the columns X, Y, Z correspond to our nodes X, Y, Z in the graph constructed above. We can also see how the
values of X influence the values of Y and how the values of Y influence the values of Z in that data set.
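As a quick and purely illustrative sanity check (plain pandas, not part of the causal workflow), we can look at the pairwise correlations in this synthetic data set; they are strong here by construction, although correlation alone would of course not tell us the causal direction:
>>> data.corr()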
The causal model created above now allows us to assign causal mechanisms to each node in the form of functional causal
models. Here, these mechanisms can either be assigned manually if, for instance, prior knowledge about certain causal
relationships is known, or they can be assigned automatically using the :mod:`~dowhy.gcm.auto` module. For the latter,
we simply call:
>>> gcm.auto.assign_causal_mechanisms(causal_model, data)
In case we want to have more control over the assigned mechanisms, we can do this manually as well. For instance, we
can assign an empirical distribution to the root node X and linear additive noise models to nodes Y and Z:
>>> causal_model.set_causal_mechanism('X', gcm.EmpiricalDistribution())
>>> causal_model.set_causal_mechanism('Y', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
>>> causal_model.set_causal_mechanism('Z', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
Section :doc:`customizing_model_assignment` will go into more detail on how one can even define a completely customized
model or add their own implementation.
In the real world, the data comes as an opaque stream of values, where we typically don't know how one
variable influences another. The graphical causal models can help us to deconstruct these causal
relationships again, even though we didn't know them before.
Step 2: Fitting the SCM to the data
-----------------------------------
With the data at hand and the graph constructed earlier, we can now train the SCM using ``fit``:
>>> gcm.fit(causal_model, data)
Fitting means we learn the generative models of the variables in the SCM according to the data.
Step 3: Answering a causal query based on the SCM
-------------------------------------------------
The last step, answering a causal question, is our actual goal. E.g. we could ask the question:
What will happen to the variable Z if I intervene on Y?
This can be done via the ``interventional_samples`` function. Here's how:
>>> samples = gcm.interventional_samples(causal_model,
>>> {'Y': lambda y: 2.34 },
>>> num_samples_to_draw=1000)
>>> samples.head()
X Y Z
0 1.186229 6.918607 20.682375
1 -0.758809 -0.749365 -2.530045
2 -1.177379 -5.678514 -17.110836
3 -1.211356 -2.152073 -6.212703
4 -0.100224 -0.285047 0.256471
This intervention says: "I'll ignore any causal effects of X on Y, and set every value of Y
to 2.34." So the distribution of X will remain unchanged, whereas values of Y will be at a fixed
value and Z will respond according to its causal model.
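Interventions are not limited to fixing a node to a constant. As a purely illustrative sketch reusing the fitted model from above, the same function also accepts a lambda that shifts Y relative to its original value:
>>> shifted_samples = gcm.interventional_samples(causal_model,
>>> {'Y': lambda y: y + 1},
>>> num_samples_to_draw=1000)
>>> shifted_samples.head()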
With this knowledge, we can now dive deep into the meaning and usages of causal queries in section
:doc:`answering_causal_questions/index`.
| Introduction
============
Graphical causal model-based inference, or GCM-based inference for short, is an experimental addition to DoWhy that
currently works separately from DoWhy's main API. Its experimental status also means that its API may
undergo breaking changes in the future. It will form part of a joint
`new API <https://github.com/py-why/dowhy/wiki/API-proposal-for-v1>`_. We welcome your comments.
The ``dowhy.gcm`` package provides a variety of ways to answer causal questions and we'll go through them in detail in
section :doc:`answering_causal_questions/index`. However, before diving into them, let's understand
the basic building blocks and usage patterns it is built upon.
The basic building blocks
^^^^^^^^^^^^^^^^^^^^^^^^^
All main features of the GCM-based inference in DoWhy are built around the concept of **graphical causal models**. A
graphical causal model consists of a causal directed acyclic graph (DAG) of variables and a **causal mechanism** for
each of the variables. A causal mechanism defines the conditional distribution of a variable given its parents in the
graph, or, in case of root node variables, simply its distribution.
The most general case of a GCM is a **probabilistic causal model** (PCM), where causal mechanisms are defined by
**conditional stochastic models** and **stochastic models**. In the ``dowhy.gcm`` package, these are represented by
:class:`~ProbabilisticCausalModel`, :class:`~ConditionalStochasticModel`, and :class:`~StochasticModel`.
.. image:: pcm.png
:width: 80%
:align: center
|
In practical terms however, we often use **structural causal models** (SCMs) to represent our GCMs,
and the causal mechanisms are defined by **functional causal models** (FCMs) for non-root nodes and **stochastic
models** for root nodes. An SCM implements the same traits as a PCM, but on top of that, its FCMs allow us to
reason *further* about its data generation process based on parents and noise, and hence, allow us e.g. to compute
counterfactuals.
.. image:: scm.png
:width: 80%
:align: center
|
To keep this introduction simple, we will stick with SCMs for now.
As mentioned above, a causal mechanism describes how the values of a node are influenced by the values of its parent
nodes. We will dive much deeper into the details of causal mechanisms and their meaning in section
:doc:`customizing_model_assignment`. But for this introduction, we will treat them as an opaque thing that is needed
to answer causal questions. With that in mind, the typical steps involved in answering a causal question are:
1. **Modeling cause-effect relationships as a GCM (causal graph + causal mechanisms):**
::
causal_model = StructuralCausalModel(nx.DiGraph([('X', 'Y'), ('Y', 'Z')])) # X -> Y -> Z
auto.assign_causal_mechanisms(causal_model, data)
Or manually assign causal mechanisms:
::
causal_model.set_causal_mechanism('X', EmpiricalDistribution())
causal_model.set_causal_mechanism('Y', AdditiveNoiseModel(create_linear_regressor()))
causal_model.set_causal_mechanism('Z', AdditiveNoiseModel(create_linear_regressor()))
2. **Fitting the GCM to the data:**
::
fit(causal_model, data)
3. **Answering a causal query based on the GCM:**
::
results = <causal_query>(causal_model, ...)
Where ``<causal_query>`` can be one of multiple functions explained in
:doc:`answering_causal_questions/index`.
Let's look at each of these steps in more detail.
Step 1: Modeling cause-effect relationships as a structural causal model (SCM)
------------------------------------------------------------------------------
The first step is to model the cause-effect relationships between variables relevant
to our use case. We do that in form of a causal graph. A causal graph is a directed acyclic
graph (DAG) where an edge X→Y implies that X causes Y. Statistically, a causal graph encodes the
conditional independence relations between variables. Using the `networkx <https://networkx
.github.io/>`__ library, we can create causal graphs. In the snippet below, we create a chain
X→Y→Z:
>>> import networkx as nx
>>> causal_graph = nx.DiGraph([('X', 'Y'), ('Y', 'Z')])
To answer causal questions using causal graphs, we also have to know the nature of the underlying
data-generating process of the variables. A causal graph by itself, being a diagram, does not have
any information about the data-generating process. To introduce this data-generating process, we use an SCM that's
built on top of our causal graph:
>>> from dowhy import gcm
>>> causal_model = gcm.StructuralCausalModel(causal_graph)
At this point we would normally load our dataset. For this introduction, we generate
some synthetic data instead. The API takes data in form of Pandas DataFrames:
>>> import numpy as np, pandas as pd
>>> X = np.random.normal(loc=0, scale=1, size=1000)
>>> Y = 2 * X + np.random.normal(loc=0, scale=1, size=1000)
>>> Z = 3 * Y + np.random.normal(loc=0, scale=1, size=1000)
>>> data = pd.DataFrame(data=dict(X=X, Y=Y, Z=Z))
>>> data.head()
X Y Z
0 -2.253500 -3.638579 -10.370047
1 -1.078337 -2.114581 -6.028030
2 -0.962719 -2.157896 -5.750563
3 -0.300316 -0.440721 -2.619954
4 0.127419 0.158185 1.555927
Note how the columns X, Y, Z correspond to our nodes X, Y, Z in the graph constructed above. We can also see how the
values of X influence the values of Y and how the values of Y influence the values of Z in that data set.
The causal model created above now allows us to assign causal mechanisms to each node in the form of functional causal
models. Here, these mechanisms can either be assigned manually if, for instance, prior knowledge about certain causal
relationships is known, or they can be assigned automatically using the :mod:`~dowhy.gcm.auto` module. For the latter,
we simply call:
>>> gcm.auto.assign_causal_mechanisms(causal_model, data)
In case we want to have more control over the assigned mechanisms, we can do this manually as well. For instance, we
can assign an empirical distribution to the root node X and linear additive noise models to nodes Y and Z:
>>> causal_model.set_causal_mechanism('X', gcm.EmpiricalDistribution())
>>> causal_model.set_causal_mechanism('Y', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
>>> causal_model.set_causal_mechanism('Z', gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
Section :doc:`customizing_model_assignment` will go into more detail on how one can even define a completely customized
model or add their own implementation.
In the real world, the data comes as an opaque stream of values, where we typically don't know how one
variable influences another. The graphical causal models can help us to deconstruct these causal
relationships again, even though we didn't know them before.
Step 2: Fitting the SCM to the data
-----------------------------------
With the data at hand and the graph constructed earlier, we can now train the SCM using ``fit``:
>>> gcm.fit(causal_model, data)
Fitting means we learn the generative models of the variables in the SCM according to the data.
Step 3: Answering a causal query based on the SCM
-------------------------------------------------
The last step, answering a causal question, is our actual goal. E.g. we could ask the question:
What will happen to the variable Z if I intervene on Y?
This can be done via the ``interventional_samples`` function. Here's how:
>>> samples = gcm.interventional_samples(causal_model,
>>> {'Y': lambda y: 2.34 },
>>> num_samples_to_draw=1000)
>>> samples.head()
X Y Z
0 1.186229 6.918607 20.682375
1 -0.758809 -0.749365 -2.530045
2 -1.177379 -5.678514 -17.110836
3 -1.211356 -2.152073 -6.212703
4 -0.100224 -0.285047 0.256471
This intervention says: "I'll ignore any causal effects of X on Y, and set every value of Y
to 2.34." So the distribution of X will remain unchanged, whereas values of Y will be at a fixed
value and Z will respond according to its causal model.
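As a small illustrative contrast (assuming the ``gcm.draw_samples`` helper and the fitted model from above), drawing samples without any intervention would leave Y governed by its learned mechanism rather than fixed at 2.34:
>>> observational_samples = gcm.draw_samples(causal_model, num_samples=1000)
>>> observational_samples.head()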
These are the basic steps that need to happen. While we can run these steps explicitly, often they get
executed as part of other steps, e.g. when fitting and re-fitting as part of computing confidence
intervals. The next section therefore dives into a more typical usage pattern of the ``dowhy.gcm`` package.
Typical usage of the ``dowhy.gcm`` package
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In practice, we may not execute the steps we've learned above explicitly and in this order, but they
get called by other abstractions. E.g., we usually want to use confidence intervals when answering
causal questions, to quantify our confidence in the results. In this case, ``fit`` will be called on
our behalf, but we won't call it directly.
Modeling an SCM
---------------
The graph modeling is the same as in `Step 1: Modeling cause-effect relationships as a structural
causal model (SCM)`_. First we construct the causal graph:
>>> causal_model = gcm.StructuralCausalModel(nx.DiGraph([('X', 'Y'), ('Y', 'Z')])) # X → Y → Z
Answering a causal query with confidence intervals
--------------------------------------------------
When we answer a causal query without computing its confidence intervals, what we effectively get
are point estimates. These are not very useful when trying to assess our confidence in the results.
Instead of calling ``fit`` explicitly, we can achieve its execution by going through the API for
confidence intervals. Let's say we wanted to understand the direct arrow strengths between nodes and
quantify our confidence in those results. This is how we would do it:
>>> strength_median, strength_intervals = gcm.confidence_intervals(
>>> gcm.bootstrap_training_and_sampling(gcm.direct_arrow_strength,
>>> causal_model,
>>> bootstrap_training_data=data,
>>> target_node='Y'))
>>> strength_median, strength_intervals
({('X', 'Y'): 45.90886398636573, ('Z', 'Y'): 15.47129383737619},
{('X', 'Y'): array([42.88319632, 50.43890079]), ('Z', 'Y'): array([13.44202416, 17.74266107])})
In this case, ``fit`` will be called within ``bootstrap_training_and_sampling``, so there is no need
to do this ourselves.
The calling sequence of ``confidence_intervals`` and ``bootstrap_training_and_sampling`` is not
trivial, but exploits the fact that our APIs are composable. If not everything makes sense
to you yet, we recommend simply treating this calling sequence as a ready-to-use construct. Read:
"Get confidence intervals via bootstrapping training and sampling of direct arrow strength". For a
deeper understanding of this construct, see section :doc:`estimating_confidence_intervals`.
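For intuition, a rough non-bootstrapped sketch of the same query is a single ``fit`` followed by a single point estimate (illustrative only, and assuming the ``direct_arrow_strength`` query above also accepts the fitted model and ``target_node`` directly); the composed call simply repeats these two steps on many resampled data sets:
>>> gcm.fit(causal_model, data)
>>> point_estimate = gcm.direct_arrow_strength(causal_model, target_node='Y')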
With this knowledge, we can now dive deep into the meaning and usages of causal queries in section
:doc:`answering_causal_questions/index`.
| petergtz | b43ce50d7cf58420b88605531b34b5c86f905112 | 560b3460aaf1106f7a053f99ede24cfed094db6f | Removed that paragraph. | petergtz | 330 |
py-why/dowhy | 621 | Add MVG projects docs | Hi @amit-sharma, @emrekiciman, @bloebp,
I'm doing some basic chores here to comply with our own MVG (doing the same for the other repos). Normally, you would "sign" the maintainers file yourself by adding your name, but I figured you approving this PR is equivalent.
Also, is there anyone else who should be in the maintainers table? Feel free to nominate and I'll add them, or push directly onto this branch. | null | 2022-09-05 08:01:55+00:00 | 2022-09-16 07:13:16+00:00 | CONTRIBUTING.md |
<!-- ALL-CONTRIBUTORS-BADGE:START - Do not remove or modify this section -->
[![All Contributors](https://img.shields.io/badge/all_contributors-21-orange.svg?style=flat-square)](#contributors-)
<!-- ALL-CONTRIBUTORS-BADGE:END -->
## Contributors ✨
Thanks goes to these wonderful people ([emoji key](https://allcontributors.org/docs/en/emoji-key)):
<!-- ALL-CONTRIBUTORS-LIST:START - Do not remove or modify this section -->
<!-- prettier-ignore-start -->
<!-- markdownlint-disable -->
<table>
<tr>
<td align="center"><a href="https://github.com/emrekiciman"><img src="https://avatars3.githubusercontent.com/u/5982160?v=4?s=100" width="100px;" alt=""/><br /><sub><b>emrekiciman</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=emrekiciman" title="Code">💻</a></td>
<td align="center"><a href="https://github.com/amit-sharma"><img src="https://avatars3.githubusercontent.com/u/1775381?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Amit Sharma</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=amit-sharma" title="Code">💻</a></td>
<td align="center"><a href="http://adamkelleher.com"><img src="https://avatars0.githubusercontent.com/u/1762368?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Adam Kelleher</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=akelleh" title="Code">💻</a> <a href="#content-akelleh" title="Content">🖋</a></td>
<td align="center"><a href="https://github.com/Tanmay-Kulkarni101"><img src="https://avatars3.githubusercontent.com/u/17275495?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Tanmay Kulkarni</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=Tanmay-Kulkarni101" title="Code">💻</a> <a href="https://github.com/py-why/dowhy/commits?author=Tanmay-Kulkarni101" title="Documentation">📖</a></td>
<td align="center"><a href="https://github.com/vojavocni"><img src="https://avatars.githubusercontent.com/u/40206443?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Aleksandar Jovanovic</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=vojavocni" title="Code">💻</a></td>
<td align="center"><a href="https://github.com/n8sty"><img src="https://avatars.githubusercontent.com/u/2964996?v=4?s=100" width="100px;" alt=""/><br /><sub><b>nate giraldi</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=n8sty" title="Documentation">📖</a> <a href="https://github.com/py-why/dowhy/commits?author=n8sty" title="Code">💻</a></td>
<td align="center"><a href="https://github.com/yaakx"><img src="https://avatars.githubusercontent.com/u/54352800?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Julen Corral</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=yaakx" title="Code">💻</a></td>
</tr>
<tr>
<td align="center"><a href="http://toppare.github.io/"><img src="https://avatars.githubusercontent.com/u/6221127?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Baran Toppare</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=toppare" title="Documentation">📖</a></td>
<td align="center"><a href="https://github.com/EgorKraevTransferwise"><img src="https://avatars.githubusercontent.com/u/62890791?v=4?s=100" width="100px;" alt=""/><br /><sub><b>EgorKraevTransferwise</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=EgorKraevTransferwise" title="Code">💻</a></td>
<td align="center"><a href="https://github.com/ryanrussell"><img src="https://avatars.githubusercontent.com/u/523300?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Ryan Russell</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=ryanrussell" title="Documentation">📖</a></td>
<td align="center"><a href="https://github.com/MichaelMarien"><img src="https://avatars.githubusercontent.com/u/13829139?v=4?s=100" width="100px;" alt=""/><br /><sub><b>MichaelMarien</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=MichaelMarien" title="Code">💻</a> <a href="https://github.com/py-why/dowhy/commits?author=MichaelMarien" title="Documentation">📖</a></td>
<td align="center"><a href="http://people.mpi-inf.mpg.de/~kbudhath/"><img src="https://avatars.githubusercontent.com/u/111277?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Kailashbuki</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=kailashbuki" title="Code">💻</a> <a href="https://github.com/py-why/dowhy/commits?author=kailashbuki" title="Documentation">📖</a></td>
<td align="center"><a href="https://github.com/petergtz"><img src="https://avatars.githubusercontent.com/u/3618401?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Peter Götz</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=petergtz" title="Code">💻</a> <a href="https://github.com/py-why/dowhy/commits?author=petergtz" title="Documentation">📖</a></td>
<td align="center"><a href="https://github.com/bloebp"><img src="https://avatars.githubusercontent.com/u/51325689?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Patrick Blöbaum</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=bloebp" title="Code">💻</a> <a href="https://github.com/py-why/dowhy/commits?author=bloebp" title="Documentation">📖</a></td>
</tr>
<tr>
<td align="center"><a href="https://github.com/itsoum"><img src="https://avatars.githubusercontent.com/u/9675299?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Ilias Tsoumas</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=itsoum" title="Code">💻</a></td>
<td align="center"><a href="https://github.com/elikling"><img src="https://avatars.githubusercontent.com/u/8556526?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Eli Y. Kling</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=elikling" title="Documentation">📖</a></td>
<td align="center"><a href="http://astoeffelbauer.github.io"><img src="https://avatars.githubusercontent.com/u/54737457?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Andreas Stöffelbauer</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=astoeffelbauer" title="Code">💻</a></td>
<td align="center"><a href="https://github.com/esmucler"><img src="https://avatars.githubusercontent.com/u/14080095?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Ezequiel Smucler</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=esmucler" title="Code">💻</a></td>
<td align="center"><a href="https://github.com/yemaedahrav"><img src="https://avatars.githubusercontent.com/u/50958687?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Amey Varhade</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=yemaedahrav" title="Code">💻</a></td>
<td align="center"><a href="https://github.com/bkowshik"><img src="https://avatars.githubusercontent.com/u/2899501?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Bhargav Kowshik</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=bkowshik" title="Code">💻</a></td>
<td align="center"><a href="https://github.com/darthtrevino"><img src="https://avatars.githubusercontent.com/u/113544?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Chris Trevino</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=darthtrevino" title="Code">💻</a></td>
</tr>
</table>
<!-- markdownlint-restore -->
<!-- prettier-ignore-end -->
<!-- ALL-CONTRIBUTORS-LIST:END -->
## Contributing Guide
This project follows the [all-contributors](https://github.com/all-contributors/all-contributors) specification. This Project welcomes contributions, suggestions, and feedback. All contributions, suggestions, and feedback you submit are accepted under the [Project's license](./LICENSE.md). You represent that if you do not own copyright in the code that you have the authority to submit it under the [Project's license](./LICENSE.md). All feedback, suggestions, or contributions are not confidential.
There are multiple ways to contribute to DoWhy.
You can help us make DoWhy better,
* Adding a Jupyter notebook that describes the use of DoWhy for solving causal
problems
* Helping implement a new method for any of the four steps of causal analysis:
model, identify, estimate, refute
* Integrating DoWhy's API with external implementations for any of the four steps, so that external libraries can be called seamlessly from the `identify_effect`, `estimate_effect` or `refute_estimate` methods.
* Helping extend the DoWhy API so that we can support new functionality like interpretability of the estimate, counterfactual prediction and more.
* Helping update the documentation for DoWhy
If you would like to contribute, you can raise a pull request. If you have
questions before contributing, you can start by opening an issue on Github.
The Project abides by PyWhy's [code of conduct](https://github.com/py-why/governance/blob/main/CODE-OF-CONDUCT.md) and [trademark policy](https://github.com/py-why/governance/blob/main/TRADEMARKS.md).
---
Part of MVG-0.1-beta.
Made with love by GitHub. Licensed under the [CC-BY 4.0 License](https://creativecommons.org/licenses/by-sa/4.0/).
|
<!-- ALL-CONTRIBUTORS-BADGE:START - Do not remove or modify this section -->
[![All Contributors](https://img.shields.io/badge/all_contributors-21-orange.svg?style=flat-square)](#contributors-)
<!-- ALL-CONTRIBUTORS-BADGE:END -->
## Contributors ✨
Thanks goes to these wonderful people ([emoji key](https://allcontributors.org/docs/en/emoji-key)):
<!-- ALL-CONTRIBUTORS-LIST:START - Do not remove or modify this section -->
<!-- prettier-ignore-start -->
<!-- markdownlint-disable -->
<table>
<tr>
<td align="center"><a href="https://github.com/emrekiciman"><img src="https://avatars3.githubusercontent.com/u/5982160?v=4?s=100" width="100px;" alt=""/><br /><sub><b>emrekiciman</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=emrekiciman" title="Code">💻</a></td>
<td align="center"><a href="https://github.com/amit-sharma"><img src="https://avatars3.githubusercontent.com/u/1775381?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Amit Sharma</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=amit-sharma" title="Code">💻</a></td>
<td align="center"><a href="http://adamkelleher.com"><img src="https://avatars0.githubusercontent.com/u/1762368?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Adam Kelleher</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=akelleh" title="Code">💻</a> <a href="#content-akelleh" title="Content">🖋</a></td>
<td align="center"><a href="https://github.com/Tanmay-Kulkarni101"><img src="https://avatars3.githubusercontent.com/u/17275495?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Tanmay Kulkarni</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=Tanmay-Kulkarni101" title="Code">💻</a> <a href="https://github.com/py-why/dowhy/commits?author=Tanmay-Kulkarni101" title="Documentation">📖</a></td>
<td align="center"><a href="https://github.com/vojavocni"><img src="https://avatars.githubusercontent.com/u/40206443?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Aleksandar Jovanovic</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=vojavocni" title="Code">💻</a></td>
<td align="center"><a href="https://github.com/n8sty"><img src="https://avatars.githubusercontent.com/u/2964996?v=4?s=100" width="100px;" alt=""/><br /><sub><b>nate giraldi</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=n8sty" title="Documentation">📖</a> <a href="https://github.com/py-why/dowhy/commits?author=n8sty" title="Code">💻</a></td>
<td align="center"><a href="https://github.com/yaakx"><img src="https://avatars.githubusercontent.com/u/54352800?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Julen Corral</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=yaakx" title="Code">💻</a></td>
</tr>
<tr>
<td align="center"><a href="http://toppare.github.io/"><img src="https://avatars.githubusercontent.com/u/6221127?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Baran Toppare</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=toppare" title="Documentation">📖</a></td>
<td align="center"><a href="https://github.com/EgorKraevTransferwise"><img src="https://avatars.githubusercontent.com/u/62890791?v=4?s=100" width="100px;" alt=""/><br /><sub><b>EgorKraevTransferwise</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=EgorKraevTransferwise" title="Code">💻</a></td>
<td align="center"><a href="https://github.com/ryanrussell"><img src="https://avatars.githubusercontent.com/u/523300?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Ryan Russell</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=ryanrussell" title="Documentation">📖</a></td>
<td align="center"><a href="https://github.com/MichaelMarien"><img src="https://avatars.githubusercontent.com/u/13829139?v=4?s=100" width="100px;" alt=""/><br /><sub><b>MichaelMarien</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=MichaelMarien" title="Code">💻</a> <a href="https://github.com/py-why/dowhy/commits?author=MichaelMarien" title="Documentation">📖</a></td>
<td align="center"><a href="http://people.mpi-inf.mpg.de/~kbudhath/"><img src="https://avatars.githubusercontent.com/u/111277?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Kailashbuki</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=kailashbuki" title="Code">💻</a> <a href="https://github.com/py-why/dowhy/commits?author=kailashbuki" title="Documentation">📖</a></td>
<td align="center"><a href="https://github.com/petergtz"><img src="https://avatars.githubusercontent.com/u/3618401?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Peter Götz</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=petergtz" title="Code">💻</a> <a href="https://github.com/py-why/dowhy/commits?author=petergtz" title="Documentation">📖</a></td>
<td align="center"><a href="https://github.com/bloebp"><img src="https://avatars.githubusercontent.com/u/51325689?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Patrick Blöbaum</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=bloebp" title="Code">💻</a> <a href="https://github.com/py-why/dowhy/commits?author=bloebp" title="Documentation">📖</a></td>
</tr>
<tr>
<td align="center"><a href="https://github.com/itsoum"><img src="https://avatars.githubusercontent.com/u/9675299?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Ilias Tsoumas</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=itsoum" title="Code">💻</a></td>
<td align="center"><a href="https://github.com/elikling"><img src="https://avatars.githubusercontent.com/u/8556526?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Eli Y. Kling</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=elikling" title="Documentation">📖</a></td>
<td align="center"><a href="http://astoeffelbauer.github.io"><img src="https://avatars.githubusercontent.com/u/54737457?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Andreas Stöffelbauer</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=astoeffelbauer" title="Code">💻</a></td>
<td align="center"><a href="https://github.com/esmucler"><img src="https://avatars.githubusercontent.com/u/14080095?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Ezequiel Smucler</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=esmucler" title="Code">💻</a></td>
<td align="center"><a href="https://github.com/yemaedahrav"><img src="https://avatars.githubusercontent.com/u/50958687?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Amey Varhade</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=yemaedahrav" title="Code">💻</a></td>
<td align="center"><a href="https://github.com/bkowshik"><img src="https://avatars.githubusercontent.com/u/2899501?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Bhargav Kowshik</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=bkowshik" title="Code">💻</a></td>
<td align="center"><a href="https://github.com/darthtrevino"><img src="https://avatars.githubusercontent.com/u/113544?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Chris Trevino</b></sub></a><br /><a href="https://github.com/py-why/dowhy/commits?author=darthtrevino" title="Code">💻</a></td>
</tr>
</table>
<!-- markdownlint-restore -->
<!-- prettier-ignore-end -->
<!-- ALL-CONTRIBUTORS-LIST:END -->
## Contributing Guide
This project follows the [all-contributors](https://github.com/all-contributors/all-contributors) specification. This Project welcomes contributions, suggestions, and feedback. All contributions, suggestions, and feedback you submit are accepted under the [Project's license](./LICENSE.md). You represent that if you do not own copyright in the code that you have the authority to submit it under the [Project's license](./LICENSE.md). All feedback, suggestions, or contributions are not confidential.
There are multiple ways to contribute to DoWhy.
You can help us make DoWhy better,
* Adding a Jupyter notebook that describes the use of DoWhy for solving causal
problems
* Helping implement a new method for any of the four steps of causal analysis:
model, identify, estimate, refute
* Integrating DoWhy's API with external implementations for any of the four steps, so that external libraries can be called seamlessly from the `identify_effect`, `estimate_effect` or `refute_estimate` methods.
* Helping extend the DoWhy API so that we can support new functionality like interpretability of the estimate, counterfactual prediction and more.
* Helping update the documentation for DoWhy
If you would like to contribute, you can raise a pull request. If you have
questions before contributing, you can start by opening an issue on Github.
The Project abides by PyWhy's [code of conduct](https://github.com/py-why/governance/blob/main/CODE-OF-CONDUCT.md) and [trademark policy](https://github.com/py-why/governance/blob/main/TRADEMARKS.md).
---
Part of MVG-0.1-beta.
Made with love by GitHub. Licensed under the [CC-BY 4.0 License](https://creativecommons.org/licenses/by-sa/4.0/).
| petergtz | ad87aef3b728ec68cd81d5b33a618b543691f953 | 93f1852b326c06d8d06e4209078b1283555e8fc5 | shall we change Organization's to PyWhy's? | amit-sharma | 331 |
py-why/dowhy | 619 | Add support for controlled direct treatment effect | This PR adds a new estimand_type="nonparametric-cde" and associated identification method to compute the controlled direct effect.
The estimation can still be handled using standard backdoor estimators since the estimand is always a conditional expectation. | null | 2022-08-31 17:48:43+00:00 | 2022-09-02 08:37:56+00:00 | dowhy/causal_graph.py | import itertools
import logging
import re
import networkx as nx
from dowhy.utils.api import parse_state
from dowhy.utils.graph_operations import daggity_to_dot
class CausalGraph:
"""Class for creating and modifying the causal graph.
Accepts a graph string (or a text file) in gml format (preferred) and dot format. Graphviz-like attributes can be set for edges and nodes. E.g. style="dashed" as an edge attribute ensures that the edge is drawn with a dashed line.
If a graph string is not given, names of treatment, outcome, and confounders, instruments and effect modifiers (if any) can be provided to create the graph.
"""
def __init__(
self,
treatment_name,
outcome_name,
graph=None,
common_cause_names=None,
instrument_names=None,
effect_modifier_names=None,
mediator_names=None,
observed_node_names=None,
missing_nodes_as_confounders=False,
):
self.treatment_name = parse_state(treatment_name)
self.outcome_name = parse_state(outcome_name)
instrument_names = parse_state(instrument_names)
common_cause_names = parse_state(common_cause_names)
effect_modifier_names = parse_state(effect_modifier_names)
mediator_names = parse_state(mediator_names)
self.logger = logging.getLogger(__name__)
# re.sub only takes string parameter so the first if is to avoid error
# if the input is a text file, convert the contained data into string
if isinstance(graph, str) and re.match(r".*\.txt", str(graph)):
text_file = open(graph, "r")
graph = text_file.read()
text_file.close()
if isinstance(graph, str) and re.match(r"^dag", graph): # Convert daggity output to dot format
graph = daggity_to_dot(graph)
if isinstance(graph, str):
graph = graph.replace("\n", " ")
if graph is None:
self._graph = nx.DiGraph()
self._graph = self.build_graph(common_cause_names, instrument_names, effect_modifier_names, mediator_names)
elif re.match(r".*\.dot", graph):
# load dot file
try:
import pygraphviz as pgv
self._graph = nx.DiGraph(nx.drawing.nx_agraph.read_dot(graph))
except Exception as e:
self.logger.error("Pygraphviz cannot be loaded. " + str(e) + "\nTrying pydot...")
try:
import pydot
self._graph = nx.DiGraph(nx.drawing.nx_pydot.read_dot(graph))
except Exception as e:
self.logger.error("Error: Pydot cannot be loaded. " + str(e))
raise e
elif re.match(r".*\.gml", graph):
self._graph = nx.DiGraph(nx.read_gml(graph))
elif re.match(r".*graph\s*\{.*\}\s*", graph):
try:
import pygraphviz as pgv
self._graph = pgv.AGraph(graph, strict=True, directed=True)
self._graph = nx.drawing.nx_agraph.from_agraph(self._graph)
except Exception as e:
self.logger.error("Error: Pygraphviz cannot be loaded. " + str(e) + "\nTrying pydot ...")
try:
import pydot
P_list = pydot.graph_from_dot_data(graph)
self._graph = nx.drawing.nx_pydot.from_pydot(P_list[0])
except Exception as e:
self.logger.error("Error: Pydot cannot be loaded. " + str(e))
raise e
elif re.match(".*graph\s*\[.*\]\s*", graph):
self._graph = nx.DiGraph(nx.parse_gml(graph))
else:
self.logger.error("Error: Please provide graph (as string or text file) in dot or gml format.")
self.logger.error("Error: Incorrect graph format")
raise ValueError
if missing_nodes_as_confounders:
self._graph = self.add_missing_nodes_as_common_causes(observed_node_names)
# Adding node attributes
self._graph = self.add_node_attributes(observed_node_names)
def view_graph(self, layout="dot", size=(8, 6), file_name="causal_model"):
out_filename = "{}.png".format(file_name)
try:
import pygraphviz as pgv
agraph = nx.drawing.nx_agraph.to_agraph(self._graph)
agraph.graph_attr.update(size="{},{}!".format(size[0], size[0]))
agraph.draw(out_filename, format="png", prog=layout)
except:
self.logger.warning(
"Warning: Pygraphviz cannot be loaded. Check that graphviz and pygraphviz are installed."
)
self.logger.info("Using Matplotlib for plotting")
import matplotlib.pyplot as plt
plt.figure(figsize=size)
solid_edges = [(n1, n2) for n1, n2, e in self._graph.edges(data=True) if "style" not in e]
dashed_edges = [
(n1, n2) for n1, n2, e in self._graph.edges(data=True) if ("style" in e and e["style"] == "dashed")
]
plt.clf()
pos = nx.layout.shell_layout(self._graph)
nx.draw_networkx_nodes(self._graph, pos, node_color="yellow", node_size=400)
nx.draw_networkx_edges(self._graph, pos, edgelist=solid_edges, arrowstyle="-|>", arrowsize=12)
nx.draw_networkx_edges(
self._graph, pos, edgelist=dashed_edges, arrowstyle="-|>", style="dashed", arrowsize=12
)
labels = nx.draw_networkx_labels(self._graph, pos)
plt.axis("off")
plt.savefig(out_filename)
plt.draw()
def build_graph(self, common_cause_names, instrument_names, effect_modifier_names, mediator_names):
"""Creates nodes and edges based on variable names and their semantics.
Currently only considers the graphical representation of "direct" effect modifiers. Thus, all effect modifiers are assumed to be "direct" unless otherwise expressed using a graph. Based on the taxonomy of effect modifiers by VanderWheele and Robins: "Four types of effect modification: A classification based on directed acyclic graphs. Epidemiology. 2007."
"""
for treatment in self.treatment_name:
self._graph.add_node(treatment, observed="yes", penwidth=2)
for outcome in self.outcome_name:
self._graph.add_node(outcome, observed="yes", penwidth=2)
for treatment, outcome in itertools.product(self.treatment_name, self.outcome_name):
# adding penwidth to make the edge bold
self._graph.add_edge(treatment, outcome, penwidth=2)
# Adding common causes
if common_cause_names is not None:
for node_name in common_cause_names:
for treatment, outcome in itertools.product(self.treatment_name, self.outcome_name):
self._graph.add_node(node_name, observed="yes")
self._graph.add_edge(node_name, treatment)
self._graph.add_edge(node_name, outcome)
# Adding instruments
if instrument_names:
if type(instrument_names[0]) != tuple:
if len(self.treatment_name) > 1:
self.logger.info("Assuming Instrument points to all treatments! Use tuples for more granularity.")
for instrument, treatment in itertools.product(instrument_names, self.treatment_name):
self._graph.add_node(instrument, observed="yes")
self._graph.add_edge(instrument, treatment)
else:
for instrument, treatment in itertools.product(instrument_names):
self._graph.add_node(instrument, observed="yes")
self._graph.add_edge(instrument, treatment)
# Adding effect modifiers
if effect_modifier_names is not None:
for node_name in effect_modifier_names:
if node_name not in common_cause_names:
for outcome in self.outcome_name:
self._graph.add_node(node_name, observed="yes")
# Assuming the simple form of effect modifier
# that directly causes the outcome.
self._graph.add_edge(node_name, outcome)
# self._graph.add_edge(node_name, outcome, style = "dotted", headport="s", tailport="n")
# self._graph.add_edge(outcome, node_name, style = "dotted", headport="n", tailport="s") # TODO make the ports more general so that they apply not just to top-bottom node configurations
if mediator_names is not None:
for node_name in mediator_names:
for treatment, outcome in itertools.product(self.treatment_name, self.outcome_name):
self._graph.add_node(node_name, observed="yes")
self._graph.add_edge(treatment, node_name)
self._graph.add_edge(node_name, outcome)
return self._graph
def add_node_attributes(self, observed_node_names):
for node_name in self._graph:
if node_name in observed_node_names:
self._graph.nodes[node_name]["observed"] = "yes"
else:
self._graph.nodes[node_name]["observed"] = "no"
return self._graph
def add_missing_nodes_as_common_causes(self, observed_node_names):
# Adding columns in the dataframe as confounders that were not in the graph
for node_name in observed_node_names:
if node_name not in self._graph:
self._graph.add_node(node_name, observed="yes")
for treatment_outcome_node in self.treatment_name + self.outcome_name:
self._graph.add_edge(node_name, treatment_outcome_node)
return self._graph
def add_unobserved_common_cause(self, observed_node_names, color="gray"):
# Adding unobserved confounders
current_common_causes = self.get_common_causes(self.treatment_name, self.outcome_name)
create_new_common_cause = True
for node_name in current_common_causes:
if self._graph.nodes[node_name]["observed"] == "no":
create_new_common_cause = False
if create_new_common_cause:
uc_label = "Unobserved Confounders"
self._graph.add_node("U", label=uc_label, observed="no", color=color, style="filled", fillcolor=color)
for node in self.treatment_name + self.outcome_name:
self._graph.add_edge("U", node)
self.logger.info(
'If this is observed data (not from a randomized experiment), there might always be missing confounders. Adding a node named "Unobserved Confounders" to reflect this.'
)
return self._graph
def get_unconfounded_observed_subgraph(self):
observed_nodes = [node for node in self._graph.nodes() if self._graph.nodes[node]["observed"] == "yes"]
return self._graph.subgraph(observed_nodes)
def do_surgery(self, node_names, remove_outgoing_edges=False, remove_incoming_edges=False):
node_names = parse_state(node_names)
new_graph = self._graph.copy()
for node_name in node_names:
if remove_outgoing_edges:
children = new_graph.successors(node_name)
edges_bunch = [(node_name, child) for child in children]
new_graph.remove_edges_from(edges_bunch)
if remove_incoming_edges:
parents = new_graph.predecessors(node_name)
edges_bunch = [(parent, node_name) for parent in parents]
new_graph.remove_edges_from(edges_bunch)
return new_graph
def get_causes(self, nodes, remove_edges=None):
nodes = parse_state(nodes)
new_graph = None
if remove_edges is not None:
new_graph = self._graph.copy() # caution: shallow copy of the attributes
sources = parse_state(remove_edges["sources"])
targets = parse_state(remove_edges["targets"])
for s in sources:
for t in targets:
new_graph.remove_edge(s, t)
causes = set()
for v in nodes:
causes = causes.union(self.get_ancestors(v, new_graph=new_graph))
return causes
def check_dseparation(self, nodes1, nodes2, nodes3, new_graph=None, dseparation_algo="default"):
if dseparation_algo == "default":
if new_graph is None:
new_graph = self._graph
dseparated = nx.algorithms.d_separated(new_graph, set(nodes1), set(nodes2), set(nodes3))
else:
raise ValueError(f"{dseparation_algo} method for d-separation not supported.")
return dseparated
def check_valid_backdoor_set(
self, nodes1, nodes2, nodes3, backdoor_paths=None, new_graph=None, dseparation_algo="default"
):
"""Assume that the first parameter (nodes1) is the treatment,
the second is the outcome, and the third is the candidate backdoor set
"""
# also return the number of backdoor paths blocked by observed nodes
if dseparation_algo == "default":
if new_graph is None:
# Assume that nodes1 is the treatment
new_graph = self.do_surgery(nodes1, remove_outgoing_edges=True)
dseparated = nx.algorithms.d_separated(new_graph, set(nodes1), set(nodes2), set(nodes3))
elif dseparation_algo == "naive":
# ignores new_graph parameter, always uses self._graph
if backdoor_paths is None:
backdoor_paths = self.get_backdoor_paths(nodes1, nodes2)
dseparated = all([self.is_blocked(path, nodes3) for path in backdoor_paths])
else:
raise ValueError(f"{dseparation_algo} method for d-separation not supported.")
return {"is_dseparated": dseparated}
def get_backdoor_paths(self, nodes1, nodes2):
paths = []
undirected_graph = self._graph.to_undirected()
nodes12 = set(nodes1).union(nodes2)
for node1 in nodes1:
for node2 in nodes2:
backdoor_paths = [
pth
for pth in nx.all_simple_paths(undirected_graph, source=node1, target=node2)
if self._graph.has_edge(pth[1], pth[0])
]
# remove paths that have nodes1\node1 or nodes2\node2 as intermediate nodes
filtered_backdoor_paths = [pth for pth in backdoor_paths if len(nodes12.intersection(pth[1:-1])) == 0]
paths.extend(filtered_backdoor_paths)
self.logger.debug("Backdoor paths: " + str(paths))
return paths
def is_blocked(self, path, conditioned_nodes):
"""Uses d-separation criteria to decide if conditioned_nodes block given path."""
blocked_by_conditioning = False
has_unconditioned_collider = False
for i in range(len(path) - 2):
if self._graph.has_edge(path[i], path[i + 1]) and self._graph.has_edge(
path[i + 2], path[i + 1]
): # collider
collider_descendants = nx.descendants(self._graph, path[i + 1])
if path[i + 1] not in conditioned_nodes and all(
cdesc not in conditioned_nodes for cdesc in collider_descendants
):
has_unconditioned_collider = True
else: # chain or fork
if path[i + 1] in conditioned_nodes:
blocked_by_conditioning = True
break
if blocked_by_conditioning:
return True
elif has_unconditioned_collider:
return True
else:
return False
def get_common_causes(self, nodes1, nodes2):
"""
Assume that nodes1 causes nodes2 (e.g., nodes1 are the treatments and nodes2 are the outcomes)
"""
# TODO Refactor to remove this from here and only implement this logic in causalIdentifier. Unnecessary assumption of nodes1 to be causing nodes2.
nodes1 = parse_state(nodes1)
nodes2 = parse_state(nodes2)
causes_1 = set()
causes_2 = set()
for node in nodes1:
causes_1 = causes_1.union(self.get_ancestors(node))
for node in nodes2:
# Cannot simply compute ancestors, since that will also include nodes1 and its parents (e.g. instruments)
parents_2 = self.get_parents(node)
for parent in parents_2:
if parent not in nodes1:
causes_2 = causes_2.union(
set(
[
parent,
]
)
)
causes_2 = causes_2.union(self.get_ancestors(parent))
return list(causes_1.intersection(causes_2))
def get_effect_modifiers(self, nodes1, nodes2):
modifiers = set()
for node in nodes2:
modifiers = modifiers.union(self.get_ancestors(node))
modifiers = modifiers.difference(nodes1)
for node in nodes1:
modifiers = modifiers.difference(self.get_ancestors(node))
# removing all mediators
for node1 in nodes1:
for node2 in nodes2:
all_directed_paths = nx.all_simple_paths(self._graph, node1, node2)
for path in all_directed_paths:
modifiers = modifiers.difference(path)
return list(modifiers)
def get_parents(self, node_name):
return set(self._graph.predecessors(node_name))
def get_ancestors(self, node_name, new_graph=None):
if new_graph is None:
graph = self._graph
else:
graph = new_graph
return set(nx.ancestors(graph, node_name))
def get_descendants(self, nodes):
descendants = set()
for node_name in nodes:
descendants = descendants.union(set(nx.descendants(self._graph, node_name)))
return descendants
def all_observed(self, node_names):
for node_name in node_names:
if self._graph.nodes[node_name]["observed"] != "yes":
return False
return True
def get_all_nodes(self, include_unobserved=True):
nodes = self._graph.nodes
if not include_unobserved:
nodes = set(self.filter_unobserved_variables(nodes))
return nodes
def filter_unobserved_variables(self, node_names):
observed_node_names = list()
for node_name in node_names:
if self._graph.nodes[node_name]["observed"] == "yes":
observed_node_names.append(node_name)
return observed_node_names
def get_instruments(self, treatment_nodes, outcome_nodes):
treatment_nodes = parse_state(treatment_nodes)
outcome_nodes = parse_state(outcome_nodes)
parents_treatment = set()
for node in treatment_nodes:
parents_treatment = parents_treatment.union(self.get_parents(node))
g_no_parents_treatment = self.do_surgery(treatment_nodes, remove_incoming_edges=True)
ancestors_outcome = set()
for node in outcome_nodes:
ancestors_outcome = ancestors_outcome.union(nx.ancestors(g_no_parents_treatment, node))
# [TODO: double check these work with multivariate implementation:]
# Exclusion
candidate_instruments = parents_treatment.difference(ancestors_outcome)
self.logger.debug("Candidate instruments after satisfying exclusion: %s", candidate_instruments)
# As-if-random setup
children_causes_outcome = [nx.descendants(g_no_parents_treatment, v) for v in ancestors_outcome]
children_causes_outcome = set([item for sublist in children_causes_outcome for item in sublist])
# As-if-random
instruments = candidate_instruments.difference(children_causes_outcome)
self.logger.debug("Candidate instruments after satisfying exclusion and as-if-random: %s", instruments)
return list(instruments)
def get_all_directed_paths(self, nodes1, nodes2):
"""Get all directed paths between sets of nodes.
Currently only supports singleton sets.
"""
node1 = nodes1[0]
node2 = nodes2[0]
# convert the outputted generator into a list
return [p for p in nx.all_simple_paths(self._graph, source=node1, target=node2)]
def has_directed_path(self, nodes1, nodes2):
"""Checks if there is any directed path between two sets of nodes.
Currently only supports singleton sets.
"""
# dpaths = self.get_all_directed_paths(nodes1, nodes2)
# return len(dpaths) > 0
return nx.has_path(self._graph, nodes1[0], nodes2[0])
def get_adjacency_matrix(self, *args, **kwargs):
"""
Get adjacency matrix from the networkx graph
"""
return nx.convert_matrix.to_numpy_matrix(self._graph, *args, **kwargs)
def check_valid_frontdoor_set(
self, nodes1, nodes2, candidate_nodes, frontdoor_paths=None, new_graph=None, dseparation_algo="default"
):
"""Check if valid the frontdoor variables for set of treatments, nodes1 to set of outcomes, nodes2."""
# Condition 1: node 1 ---> node 2 is intercepted by candidate_nodes
if dseparation_algo == "default":
if new_graph is None:
new_graph = self._graph
dseparated = nx.algorithms.d_separated(new_graph, set(nodes1), set(nodes2), set(candidate_nodes))
elif dseparation_algo == "naive":
if frontdoor_paths is None:
frontdoor_paths = self.get_all_directed_paths(nodes1, nodes2)
dseparated = all([self.is_blocked(path, candidate_nodes) for path in frontdoor_paths])
else:
raise ValueError(f"{dseparation_algo} method for d-separation not supported.")
return dseparated
def check_valid_mediation_set(self, nodes1, nodes2, candidate_nodes, mediation_paths=None):
"""Check if candidate nodes are valid mediators for set of treatments, nodes1 to set of outcomes, nodes2."""
if mediation_paths is None:
mediation_paths = self.get_all_directed_paths(nodes1, nodes2)
is_mediator = any([self.is_blocked(path, candidate_nodes) for path in mediation_paths])
return is_mediator
| import itertools
import logging
import re
import networkx as nx
from dowhy.utils.api import parse_state
from dowhy.utils.graph_operations import daggity_to_dot
class CausalGraph:
"""Class for creating and modifying the causal graph.
Accepts a graph string (or a text file) in gml format (preferred) and dot format. Graphviz-like attributes can be set for edges and nodes. E.g. style="dashed" as an edge attribute ensures that the edge is drawn with a dashed line.
If a graph string is not given, names of treatment, outcome, and confounders, instruments and effect modifiers (if any) can be provided to create the graph.
"""
def __init__(
self,
treatment_name,
outcome_name,
graph=None,
common_cause_names=None,
instrument_names=None,
effect_modifier_names=None,
mediator_names=None,
observed_node_names=None,
missing_nodes_as_confounders=False,
):
self.treatment_name = parse_state(treatment_name)
self.outcome_name = parse_state(outcome_name)
instrument_names = parse_state(instrument_names)
common_cause_names = parse_state(common_cause_names)
effect_modifier_names = parse_state(effect_modifier_names)
mediator_names = parse_state(mediator_names)
self.logger = logging.getLogger(__name__)
# re.sub only takes string parameter so the first if is to avoid error
# if the input is a text file, convert the contained data into string
if isinstance(graph, str) and re.match(r".*\.txt", str(graph)):
text_file = open(graph, "r")
graph = text_file.read()
text_file.close()
if isinstance(graph, str) and re.match(r"^dag", graph): # Convert daggity output to dot format
graph = daggity_to_dot(graph)
if isinstance(graph, str):
graph = graph.replace("\n", " ")
if graph is None:
self._graph = nx.DiGraph()
self._graph = self.build_graph(common_cause_names, instrument_names, effect_modifier_names, mediator_names)
elif re.match(r".*\.dot", graph):
# load dot file
try:
import pygraphviz as pgv
self._graph = nx.DiGraph(nx.drawing.nx_agraph.read_dot(graph))
except Exception as e:
self.logger.error("Pygraphviz cannot be loaded. " + str(e) + "\nTrying pydot...")
try:
import pydot
self._graph = nx.DiGraph(nx.drawing.nx_pydot.read_dot(graph))
except Exception as e:
self.logger.error("Error: Pydot cannot be loaded. " + str(e))
raise e
elif re.match(r".*\.gml", graph):
self._graph = nx.DiGraph(nx.read_gml(graph))
elif re.match(r".*graph\s*\{.*\}\s*", graph):
try:
import pygraphviz as pgv
self._graph = pgv.AGraph(graph, strict=True, directed=True)
self._graph = nx.drawing.nx_agraph.from_agraph(self._graph)
except Exception as e:
self.logger.error("Error: Pygraphviz cannot be loaded. " + str(e) + "\nTrying pydot ...")
try:
import pydot
P_list = pydot.graph_from_dot_data(graph)
self._graph = nx.drawing.nx_pydot.from_pydot(P_list[0])
except Exception as e:
self.logger.error("Error: Pydot cannot be loaded. " + str(e))
raise e
elif re.match(".*graph\s*\[.*\]\s*", graph):
self._graph = nx.DiGraph(nx.parse_gml(graph))
else:
self.logger.error("Error: Please provide graph (as string or text file) in dot or gml format.")
self.logger.error("Error: Incorrect graph format")
raise ValueError
if missing_nodes_as_confounders:
self._graph = self.add_missing_nodes_as_common_causes(observed_node_names)
# Adding node attributes
self._graph = self.add_node_attributes(observed_node_names)
def view_graph(self, layout="dot", size=(8, 6), file_name="causal_model"):
out_filename = "{}.png".format(file_name)
try:
import pygraphviz as pgv
agraph = nx.drawing.nx_agraph.to_agraph(self._graph)
agraph.graph_attr.update(size="{},{}!".format(size[0], size[0]))
agraph.draw(out_filename, format="png", prog=layout)
except:
self.logger.warning(
"Warning: Pygraphviz cannot be loaded. Check that graphviz and pygraphviz are installed."
)
self.logger.info("Using Matplotlib for plotting")
import matplotlib.pyplot as plt
plt.figure(figsize=size)
solid_edges = [(n1, n2) for n1, n2, e in self._graph.edges(data=True) if "style" not in e]
dashed_edges = [
(n1, n2) for n1, n2, e in self._graph.edges(data=True) if ("style" in e and e["style"] == "dashed")
]
plt.clf()
pos = nx.layout.shell_layout(self._graph)
nx.draw_networkx_nodes(self._graph, pos, node_color="yellow", node_size=400)
nx.draw_networkx_edges(self._graph, pos, edgelist=solid_edges, arrowstyle="-|>", arrowsize=12)
nx.draw_networkx_edges(
self._graph, pos, edgelist=dashed_edges, arrowstyle="-|>", style="dashed", arrowsize=12
)
labels = nx.draw_networkx_labels(self._graph, pos)
plt.axis("off")
plt.savefig(out_filename)
plt.draw()
def build_graph(self, common_cause_names, instrument_names, effect_modifier_names, mediator_names):
"""Creates nodes and edges based on variable names and their semantics.
        Currently only considers the graphical representation of "direct" effect modifiers. Thus, all effect modifiers are assumed to be "direct" unless otherwise expressed using a graph. Based on the taxonomy of effect modifiers by VanderWeele and Robins: "Four types of effect modification: A classification based on directed acyclic graphs. Epidemiology. 2007."
"""
for treatment in self.treatment_name:
self._graph.add_node(treatment, observed="yes", penwidth=2)
for outcome in self.outcome_name:
self._graph.add_node(outcome, observed="yes", penwidth=2)
for treatment, outcome in itertools.product(self.treatment_name, self.outcome_name):
# adding penwidth to make the edge bold
self._graph.add_edge(treatment, outcome, penwidth=2)
# Adding common causes
if common_cause_names is not None:
for node_name in common_cause_names:
for treatment, outcome in itertools.product(self.treatment_name, self.outcome_name):
self._graph.add_node(node_name, observed="yes")
self._graph.add_edge(node_name, treatment)
self._graph.add_edge(node_name, outcome)
# Adding instruments
if instrument_names:
if type(instrument_names[0]) != tuple:
if len(self.treatment_name) > 1:
self.logger.info("Assuming Instrument points to all treatments! Use tuples for more granularity.")
for instrument, treatment in itertools.product(instrument_names, self.treatment_name):
self._graph.add_node(instrument, observed="yes")
self._graph.add_edge(instrument, treatment)
else:
                # instrument_names holds (instrument, treatment) tuples here, so unpack the pairs directly
                for instrument, treatment in instrument_names:
self._graph.add_node(instrument, observed="yes")
self._graph.add_edge(instrument, treatment)
# Adding effect modifiers
if effect_modifier_names is not None:
for node_name in effect_modifier_names:
if node_name not in common_cause_names:
for outcome in self.outcome_name:
self._graph.add_node(node_name, observed="yes")
# Assuming the simple form of effect modifier
# that directly causes the outcome.
self._graph.add_edge(node_name, outcome)
# self._graph.add_edge(node_name, outcome, style = "dotted", headport="s", tailport="n")
# self._graph.add_edge(outcome, node_name, style = "dotted", headport="n", tailport="s") # TODO make the ports more general so that they apply not just to top-bottom node configurations
if mediator_names is not None:
for node_name in mediator_names:
for treatment, outcome in itertools.product(self.treatment_name, self.outcome_name):
self._graph.add_node(node_name, observed="yes")
self._graph.add_edge(treatment, node_name)
self._graph.add_edge(node_name, outcome)
return self._graph
def add_node_attributes(self, observed_node_names):
for node_name in self._graph:
if node_name in observed_node_names:
self._graph.nodes[node_name]["observed"] = "yes"
else:
self._graph.nodes[node_name]["observed"] = "no"
return self._graph
def add_missing_nodes_as_common_causes(self, observed_node_names):
# Adding columns in the dataframe as confounders that were not in the graph
for node_name in observed_node_names:
if node_name not in self._graph:
self._graph.add_node(node_name, observed="yes")
for treatment_outcome_node in self.treatment_name + self.outcome_name:
self._graph.add_edge(node_name, treatment_outcome_node)
return self._graph
def add_unobserved_common_cause(self, observed_node_names, color="gray"):
# Adding unobserved confounders
current_common_causes = self.get_common_causes(self.treatment_name, self.outcome_name)
create_new_common_cause = True
for node_name in current_common_causes:
if self._graph.nodes[node_name]["observed"] == "no":
create_new_common_cause = False
if create_new_common_cause:
uc_label = "Unobserved Confounders"
self._graph.add_node("U", label=uc_label, observed="no", color=color, style="filled", fillcolor=color)
for node in self.treatment_name + self.outcome_name:
self._graph.add_edge("U", node)
self.logger.info(
'If this is observed data (not from a randomized experiment), there might always be missing confounders. Adding a node named "Unobserved Confounders" to reflect this.'
)
return self._graph
def get_unconfounded_observed_subgraph(self):
observed_nodes = [node for node in self._graph.nodes() if self._graph.nodes[node]["observed"] == "yes"]
return self._graph.subgraph(observed_nodes)
def do_surgery(
self,
node_names,
remove_outgoing_edges=False,
remove_incoming_edges=False,
target_node_names=None,
remove_only_direct_edges_to_target=False,
):
"""Method to create a new graph based on the concept of do-surgery.
:param node_names: focal nodes for the surgery
:param remove_outgoing_edges: whether to remove outgoing edges from the focal nodes
:param remove_incoming_edges: whether to remove incoming edges to the focal nodes
:param target_node_names: target nodes (optional) for the surgery, only used when remove_only_direct_edges_to_target is True
:param remove_only_direct_edges_to_target: whether to remove only the direct edges from focal nodes to the target nodes
:returns: a new networkx graph after the specified removal of edges
"""
node_names = parse_state(node_names)
new_graph = self._graph.copy()
for node_name in node_names:
if remove_outgoing_edges:
if remove_only_direct_edges_to_target:
new_graph.remove_edges_from([(node_name, v) for v in target_node_names])
else:
children = new_graph.successors(node_name)
edges_bunch = [(node_name, child) for child in children]
new_graph.remove_edges_from(edges_bunch)
if remove_incoming_edges:
# removal of only direct edges wrt a target is not implemented for incoming edges
parents = new_graph.predecessors(node_name)
edges_bunch = [(parent, node_name) for parent in parents]
new_graph.remove_edges_from(edges_bunch)
return new_graph
def get_causes(self, nodes, remove_edges=None):
nodes = parse_state(nodes)
new_graph = None
if remove_edges is not None:
new_graph = self._graph.copy() # caution: shallow copy of the attributes
sources = parse_state(remove_edges["sources"])
targets = parse_state(remove_edges["targets"])
for s in sources:
for t in targets:
new_graph.remove_edge(s, t)
causes = set()
for v in nodes:
causes = causes.union(self.get_ancestors(v, new_graph=new_graph))
return causes
def check_dseparation(self, nodes1, nodes2, nodes3, new_graph=None, dseparation_algo="default"):
if dseparation_algo == "default":
if new_graph is None:
new_graph = self._graph
dseparated = nx.algorithms.d_separated(new_graph, set(nodes1), set(nodes2), set(nodes3))
else:
raise ValueError(f"{dseparation_algo} method for d-separation not supported.")
return dseparated
def check_valid_backdoor_set(
self, nodes1, nodes2, nodes3, backdoor_paths=None, new_graph=None, dseparation_algo="default"
):
"""Assume that the first parameter (nodes1) is the treatment,
the second is the outcome, and the third is the candidate backdoor set
"""
# also return the number of backdoor paths blocked by observed nodes
if dseparation_algo == "default":
if new_graph is None:
# Assume that nodes1 is the treatment
new_graph = self.do_surgery(nodes1, remove_outgoing_edges=True)
dseparated = nx.algorithms.d_separated(new_graph, set(nodes1), set(nodes2), set(nodes3))
elif dseparation_algo == "naive":
# ignores new_graph parameter, always uses self._graph
if backdoor_paths is None:
backdoor_paths = self.get_backdoor_paths(nodes1, nodes2)
dseparated = all([self.is_blocked(path, nodes3) for path in backdoor_paths])
else:
raise ValueError(f"{dseparation_algo} method for d-separation not supported.")
return {"is_dseparated": dseparated}
def get_backdoor_paths(self, nodes1, nodes2):
paths = []
undirected_graph = self._graph.to_undirected()
nodes12 = set(nodes1).union(nodes2)
for node1 in nodes1:
for node2 in nodes2:
backdoor_paths = [
pth
for pth in nx.all_simple_paths(undirected_graph, source=node1, target=node2)
if self._graph.has_edge(pth[1], pth[0])
]
# remove paths that have nodes1\node1 or nodes2\node2 as intermediate nodes
filtered_backdoor_paths = [pth for pth in backdoor_paths if len(nodes12.intersection(pth[1:-1])) == 0]
paths.extend(filtered_backdoor_paths)
self.logger.debug("Backdoor paths: " + str(paths))
return paths
def is_blocked(self, path, conditioned_nodes):
"""Uses d-separation criteria to decide if conditioned_nodes block given path."""
blocked_by_conditioning = False
has_unconditioned_collider = False
for i in range(len(path) - 2):
if self._graph.has_edge(path[i], path[i + 1]) and self._graph.has_edge(
path[i + 2], path[i + 1]
): # collider
collider_descendants = nx.descendants(self._graph, path[i + 1])
if path[i + 1] not in conditioned_nodes and all(
cdesc not in conditioned_nodes for cdesc in collider_descendants
):
has_unconditioned_collider = True
else: # chain or fork
if path[i + 1] in conditioned_nodes:
blocked_by_conditioning = True
break
if blocked_by_conditioning:
return True
elif has_unconditioned_collider:
return True
else:
return False
def get_common_causes(self, nodes1, nodes2):
"""
Assume that nodes1 causes nodes2 (e.g., nodes1 are the treatments and nodes2 are the outcomes)
"""
# TODO Refactor to remove this from here and only implement this logic in causalIdentifier. Unnecessary assumption of nodes1 to be causing nodes2.
nodes1 = parse_state(nodes1)
nodes2 = parse_state(nodes2)
causes_1 = set()
causes_2 = set()
for node in nodes1:
causes_1 = causes_1.union(self.get_ancestors(node))
for node in nodes2:
# Cannot simply compute ancestors, since that will also include nodes1 and its parents (e.g. instruments)
parents_2 = self.get_parents(node)
for parent in parents_2:
if parent not in nodes1:
causes_2 = causes_2.union(
set(
[
parent,
]
)
)
causes_2 = causes_2.union(self.get_ancestors(parent))
return list(causes_1.intersection(causes_2))
def get_effect_modifiers(self, nodes1, nodes2):
modifiers = set()
for node in nodes2:
modifiers = modifiers.union(self.get_ancestors(node))
modifiers = modifiers.difference(nodes1)
for node in nodes1:
modifiers = modifiers.difference(self.get_ancestors(node))
# removing all mediators
for node1 in nodes1:
for node2 in nodes2:
all_directed_paths = nx.all_simple_paths(self._graph, node1, node2)
for path in all_directed_paths:
modifiers = modifiers.difference(path)
return list(modifiers)
def get_parents(self, node_name):
return set(self._graph.predecessors(node_name))
def get_ancestors(self, node_name, new_graph=None):
if new_graph is None:
graph = self._graph
else:
graph = new_graph
return set(nx.ancestors(graph, node_name))
def get_descendants(self, nodes):
descendants = set()
for node_name in nodes:
descendants = descendants.union(set(nx.descendants(self._graph, node_name)))
return descendants
def all_observed(self, node_names):
for node_name in node_names:
if self._graph.nodes[node_name]["observed"] != "yes":
return False
return True
def get_all_nodes(self, include_unobserved=True):
nodes = self._graph.nodes
if not include_unobserved:
nodes = set(self.filter_unobserved_variables(nodes))
return nodes
def filter_unobserved_variables(self, node_names):
observed_node_names = list()
for node_name in node_names:
if self._graph.nodes[node_name]["observed"] == "yes":
observed_node_names.append(node_name)
return observed_node_names
def get_instruments(self, treatment_nodes, outcome_nodes):
treatment_nodes = parse_state(treatment_nodes)
outcome_nodes = parse_state(outcome_nodes)
parents_treatment = set()
for node in treatment_nodes:
parents_treatment = parents_treatment.union(self.get_parents(node))
g_no_parents_treatment = self.do_surgery(treatment_nodes, remove_incoming_edges=True)
ancestors_outcome = set()
for node in outcome_nodes:
ancestors_outcome = ancestors_outcome.union(nx.ancestors(g_no_parents_treatment, node))
# [TODO: double check these work with multivariate implementation:]
# Exclusion
candidate_instruments = parents_treatment.difference(ancestors_outcome)
self.logger.debug("Candidate instruments after satisfying exclusion: %s", candidate_instruments)
# As-if-random setup
children_causes_outcome = [nx.descendants(g_no_parents_treatment, v) for v in ancestors_outcome]
children_causes_outcome = set([item for sublist in children_causes_outcome for item in sublist])
# As-if-random
instruments = candidate_instruments.difference(children_causes_outcome)
self.logger.debug("Candidate instruments after satisfying exclusion and as-if-random: %s", instruments)
return list(instruments)
def get_all_directed_paths(self, nodes1, nodes2):
"""Get all directed paths between sets of nodes.
Currently only supports singleton sets.
"""
node1 = nodes1[0]
node2 = nodes2[0]
# convert the outputted generator into a list
return [p for p in nx.all_simple_paths(self._graph, source=node1, target=node2)]
def has_directed_path(self, nodes1, nodes2):
"""Checks if there is any directed path between two sets of nodes.
Currently only supports singleton sets.
"""
# dpaths = self.get_all_directed_paths(nodes1, nodes2)
# return len(dpaths) > 0
return nx.has_path(self._graph, nodes1[0], nodes2[0])
def get_adjacency_matrix(self, *args, **kwargs):
"""
Get adjacency matrix from the networkx graph
"""
return nx.convert_matrix.to_numpy_matrix(self._graph, *args, **kwargs)
def check_valid_frontdoor_set(
self, nodes1, nodes2, candidate_nodes, frontdoor_paths=None, new_graph=None, dseparation_algo="default"
):
"""Check if valid the frontdoor variables for set of treatments, nodes1 to set of outcomes, nodes2."""
# Condition 1: node 1 ---> node 2 is intercepted by candidate_nodes
if dseparation_algo == "default":
if new_graph is None:
new_graph = self._graph
dseparated = nx.algorithms.d_separated(new_graph, set(nodes1), set(nodes2), set(candidate_nodes))
elif dseparation_algo == "naive":
if frontdoor_paths is None:
frontdoor_paths = self.get_all_directed_paths(nodes1, nodes2)
dseparated = all([self.is_blocked(path, candidate_nodes) for path in frontdoor_paths])
else:
raise ValueError(f"{dseparation_algo} method for d-separation not supported.")
return dseparated
def check_valid_mediation_set(self, nodes1, nodes2, candidate_nodes, mediation_paths=None):
"""Check if candidate nodes are valid mediators for set of treatments, nodes1 to set of outcomes, nodes2."""
if mediation_paths is None:
mediation_paths = self.get_all_directed_paths(nodes1, nodes2)
is_mediator = any([self.is_blocked(path, candidate_nodes) for path in mediation_paths])
return is_mediator
| amit-sharma | 2f2463be424093d720ebfa9cc625a06160a44cfe | d67da0e57eda5cc47d180f3de240182dde4aacad | from a variable naming perspective, the distinction between "node_names" and "target_node_names" is unclear. if target_node_names is for supporting direct_edges, can the name reflect that? Or, would it be better to have multiple surgery methods that each present cleaner APIs? | emrekiciman | 332 |
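(For context on the comment above: a minimal sketch of how the two parameter groups in the new `do_surgery` signature behave. The graph and node names are made up for illustration, and parsing the DOT string assumes pygraphviz or pydot is installed.)

```python
from dowhy.causal_graph import CausalGraph

# Toy graph: X -> M -> Y plus a direct edge X -> Y.
graph = CausalGraph(
    treatment_name="X",
    outcome_name="Y",
    graph="digraph { X -> M; M -> Y; X -> Y;}",
    observed_node_names=["X", "M", "Y"],
)

# node_names are the focal nodes of the surgery: this removes *every*
# outgoing edge of X (both X -> M and X -> Y).
g_total = graph.do_surgery(["X"], remove_outgoing_edges=True)

# target_node_names only matters when remove_only_direct_edges_to_target=True:
# this removes just the direct X -> Y edge and keeps X -> M intact.
g_direct = graph.do_surgery(
    ["X"],
    remove_outgoing_edges=True,
    target_node_names=["Y"],
    remove_only_direct_edges_to_target=True,
)
```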
py-why/dowhy | 619 | Add support for controlled direct treatment effect | This PR adds a new estimand_type="nonparametric-cde" and associated identification method to compute the controlled direct effect.
The estimation can still be handled using standard backdoor estimators since the estimand is always a conditional expectation. | null | 2022-08-31 17:48:43+00:00 | 2022-09-02 08:37:56+00:00 | dowhy/causal_graph.py | import itertools
import logging
import re
import networkx as nx
from dowhy.utils.api import parse_state
from dowhy.utils.graph_operations import daggity_to_dot
class CausalGraph:
"""Class for creating and modifying the causal graph.
Accepts a graph string (or a text file) in gml format (preferred) and dot format. Graphviz-like attributes can be set for edges and nodes. E.g. style="dashed" as an edge attribute ensures that the edge is drawn with a dashed line.
If a graph string is not given, names of treatment, outcome, and confounders, instruments and effect modifiers (if any) can be provided to create the graph.
"""
def __init__(
self,
treatment_name,
outcome_name,
graph=None,
common_cause_names=None,
instrument_names=None,
effect_modifier_names=None,
mediator_names=None,
observed_node_names=None,
missing_nodes_as_confounders=False,
):
self.treatment_name = parse_state(treatment_name)
self.outcome_name = parse_state(outcome_name)
instrument_names = parse_state(instrument_names)
common_cause_names = parse_state(common_cause_names)
effect_modifier_names = parse_state(effect_modifier_names)
mediator_names = parse_state(mediator_names)
self.logger = logging.getLogger(__name__)
# re.sub only takes string parameter so the first if is to avoid error
# if the input is a text file, convert the contained data into string
if isinstance(graph, str) and re.match(r".*\.txt", str(graph)):
text_file = open(graph, "r")
graph = text_file.read()
text_file.close()
if isinstance(graph, str) and re.match(r"^dag", graph): # Convert daggity output to dot format
graph = daggity_to_dot(graph)
if isinstance(graph, str):
graph = graph.replace("\n", " ")
if graph is None:
self._graph = nx.DiGraph()
self._graph = self.build_graph(common_cause_names, instrument_names, effect_modifier_names, mediator_names)
elif re.match(r".*\.dot", graph):
# load dot file
try:
import pygraphviz as pgv
self._graph = nx.DiGraph(nx.drawing.nx_agraph.read_dot(graph))
except Exception as e:
self.logger.error("Pygraphviz cannot be loaded. " + str(e) + "\nTrying pydot...")
try:
import pydot
self._graph = nx.DiGraph(nx.drawing.nx_pydot.read_dot(graph))
except Exception as e:
self.logger.error("Error: Pydot cannot be loaded. " + str(e))
raise e
elif re.match(r".*\.gml", graph):
self._graph = nx.DiGraph(nx.read_gml(graph))
elif re.match(r".*graph\s*\{.*\}\s*", graph):
try:
import pygraphviz as pgv
self._graph = pgv.AGraph(graph, strict=True, directed=True)
self._graph = nx.drawing.nx_agraph.from_agraph(self._graph)
except Exception as e:
self.logger.error("Error: Pygraphviz cannot be loaded. " + str(e) + "\nTrying pydot ...")
try:
import pydot
P_list = pydot.graph_from_dot_data(graph)
self._graph = nx.drawing.nx_pydot.from_pydot(P_list[0])
except Exception as e:
self.logger.error("Error: Pydot cannot be loaded. " + str(e))
raise e
elif re.match(".*graph\s*\[.*\]\s*", graph):
self._graph = nx.DiGraph(nx.parse_gml(graph))
else:
self.logger.error("Error: Please provide graph (as string or text file) in dot or gml format.")
self.logger.error("Error: Incorrect graph format")
raise ValueError
if missing_nodes_as_confounders:
self._graph = self.add_missing_nodes_as_common_causes(observed_node_names)
# Adding node attributes
self._graph = self.add_node_attributes(observed_node_names)
def view_graph(self, layout="dot", size=(8, 6), file_name="causal_model"):
out_filename = "{}.png".format(file_name)
try:
import pygraphviz as pgv
agraph = nx.drawing.nx_agraph.to_agraph(self._graph)
agraph.graph_attr.update(size="{},{}!".format(size[0], size[0]))
agraph.draw(out_filename, format="png", prog=layout)
except:
self.logger.warning(
"Warning: Pygraphviz cannot be loaded. Check that graphviz and pygraphviz are installed."
)
self.logger.info("Using Matplotlib for plotting")
import matplotlib.pyplot as plt
plt.figure(figsize=size)
solid_edges = [(n1, n2) for n1, n2, e in self._graph.edges(data=True) if "style" not in e]
dashed_edges = [
(n1, n2) for n1, n2, e in self._graph.edges(data=True) if ("style" in e and e["style"] == "dashed")
]
plt.clf()
pos = nx.layout.shell_layout(self._graph)
nx.draw_networkx_nodes(self._graph, pos, node_color="yellow", node_size=400)
nx.draw_networkx_edges(self._graph, pos, edgelist=solid_edges, arrowstyle="-|>", arrowsize=12)
nx.draw_networkx_edges(
self._graph, pos, edgelist=dashed_edges, arrowstyle="-|>", style="dashed", arrowsize=12
)
labels = nx.draw_networkx_labels(self._graph, pos)
plt.axis("off")
plt.savefig(out_filename)
plt.draw()
def build_graph(self, common_cause_names, instrument_names, effect_modifier_names, mediator_names):
"""Creates nodes and edges based on variable names and their semantics.
        Currently only considers the graphical representation of "direct" effect modifiers. Thus, all effect modifiers are assumed to be "direct" unless otherwise expressed using a graph. Based on the taxonomy of effect modifiers by VanderWeele and Robins: "Four types of effect modification: A classification based on directed acyclic graphs. Epidemiology. 2007."
"""
for treatment in self.treatment_name:
self._graph.add_node(treatment, observed="yes", penwidth=2)
for outcome in self.outcome_name:
self._graph.add_node(outcome, observed="yes", penwidth=2)
for treatment, outcome in itertools.product(self.treatment_name, self.outcome_name):
# adding penwidth to make the edge bold
self._graph.add_edge(treatment, outcome, penwidth=2)
# Adding common causes
if common_cause_names is not None:
for node_name in common_cause_names:
for treatment, outcome in itertools.product(self.treatment_name, self.outcome_name):
self._graph.add_node(node_name, observed="yes")
self._graph.add_edge(node_name, treatment)
self._graph.add_edge(node_name, outcome)
# Adding instruments
if instrument_names:
if type(instrument_names[0]) != tuple:
if len(self.treatment_name) > 1:
self.logger.info("Assuming Instrument points to all treatments! Use tuples for more granularity.")
for instrument, treatment in itertools.product(instrument_names, self.treatment_name):
self._graph.add_node(instrument, observed="yes")
self._graph.add_edge(instrument, treatment)
else:
                # instrument_names holds (instrument, treatment) tuples here, so unpack the pairs directly
                for instrument, treatment in instrument_names:
self._graph.add_node(instrument, observed="yes")
self._graph.add_edge(instrument, treatment)
# Adding effect modifiers
if effect_modifier_names is not None:
for node_name in effect_modifier_names:
if node_name not in common_cause_names:
for outcome in self.outcome_name:
self._graph.add_node(node_name, observed="yes")
# Assuming the simple form of effect modifier
# that directly causes the outcome.
self._graph.add_edge(node_name, outcome)
# self._graph.add_edge(node_name, outcome, style = "dotted", headport="s", tailport="n")
# self._graph.add_edge(outcome, node_name, style = "dotted", headport="n", tailport="s") # TODO make the ports more general so that they apply not just to top-bottom node configurations
if mediator_names is not None:
for node_name in mediator_names:
for treatment, outcome in itertools.product(self.treatment_name, self.outcome_name):
self._graph.add_node(node_name, observed="yes")
self._graph.add_edge(treatment, node_name)
self._graph.add_edge(node_name, outcome)
return self._graph
def add_node_attributes(self, observed_node_names):
for node_name in self._graph:
if node_name in observed_node_names:
self._graph.nodes[node_name]["observed"] = "yes"
else:
self._graph.nodes[node_name]["observed"] = "no"
return self._graph
def add_missing_nodes_as_common_causes(self, observed_node_names):
# Adding columns in the dataframe as confounders that were not in the graph
for node_name in observed_node_names:
if node_name not in self._graph:
self._graph.add_node(node_name, observed="yes")
for treatment_outcome_node in self.treatment_name + self.outcome_name:
self._graph.add_edge(node_name, treatment_outcome_node)
return self._graph
def add_unobserved_common_cause(self, observed_node_names, color="gray"):
# Adding unobserved confounders
current_common_causes = self.get_common_causes(self.treatment_name, self.outcome_name)
create_new_common_cause = True
for node_name in current_common_causes:
if self._graph.nodes[node_name]["observed"] == "no":
create_new_common_cause = False
if create_new_common_cause:
uc_label = "Unobserved Confounders"
self._graph.add_node("U", label=uc_label, observed="no", color=color, style="filled", fillcolor=color)
for node in self.treatment_name + self.outcome_name:
self._graph.add_edge("U", node)
self.logger.info(
'If this is observed data (not from a randomized experiment), there might always be missing confounders. Adding a node named "Unobserved Confounders" to reflect this.'
)
return self._graph
def get_unconfounded_observed_subgraph(self):
observed_nodes = [node for node in self._graph.nodes() if self._graph.nodes[node]["observed"] == "yes"]
return self._graph.subgraph(observed_nodes)
def do_surgery(self, node_names, remove_outgoing_edges=False, remove_incoming_edges=False):
node_names = parse_state(node_names)
new_graph = self._graph.copy()
for node_name in node_names:
if remove_outgoing_edges:
children = new_graph.successors(node_name)
edges_bunch = [(node_name, child) for child in children]
new_graph.remove_edges_from(edges_bunch)
if remove_incoming_edges:
parents = new_graph.predecessors(node_name)
edges_bunch = [(parent, node_name) for parent in parents]
new_graph.remove_edges_from(edges_bunch)
return new_graph
def get_causes(self, nodes, remove_edges=None):
nodes = parse_state(nodes)
new_graph = None
if remove_edges is not None:
new_graph = self._graph.copy() # caution: shallow copy of the attributes
sources = parse_state(remove_edges["sources"])
targets = parse_state(remove_edges["targets"])
for s in sources:
for t in targets:
new_graph.remove_edge(s, t)
causes = set()
for v in nodes:
causes = causes.union(self.get_ancestors(v, new_graph=new_graph))
return causes
def check_dseparation(self, nodes1, nodes2, nodes3, new_graph=None, dseparation_algo="default"):
if dseparation_algo == "default":
if new_graph is None:
new_graph = self._graph
dseparated = nx.algorithms.d_separated(new_graph, set(nodes1), set(nodes2), set(nodes3))
else:
raise ValueError(f"{dseparation_algo} method for d-separation not supported.")
return dseparated
def check_valid_backdoor_set(
self, nodes1, nodes2, nodes3, backdoor_paths=None, new_graph=None, dseparation_algo="default"
):
"""Assume that the first parameter (nodes1) is the treatment,
the second is the outcome, and the third is the candidate backdoor set
"""
# also return the number of backdoor paths blocked by observed nodes
if dseparation_algo == "default":
if new_graph is None:
# Assume that nodes1 is the treatment
new_graph = self.do_surgery(nodes1, remove_outgoing_edges=True)
dseparated = nx.algorithms.d_separated(new_graph, set(nodes1), set(nodes2), set(nodes3))
elif dseparation_algo == "naive":
# ignores new_graph parameter, always uses self._graph
if backdoor_paths is None:
backdoor_paths = self.get_backdoor_paths(nodes1, nodes2)
dseparated = all([self.is_blocked(path, nodes3) for path in backdoor_paths])
else:
raise ValueError(f"{dseparation_algo} method for d-separation not supported.")
return {"is_dseparated": dseparated}
def get_backdoor_paths(self, nodes1, nodes2):
paths = []
undirected_graph = self._graph.to_undirected()
nodes12 = set(nodes1).union(nodes2)
for node1 in nodes1:
for node2 in nodes2:
backdoor_paths = [
pth
for pth in nx.all_simple_paths(undirected_graph, source=node1, target=node2)
if self._graph.has_edge(pth[1], pth[0])
]
# remove paths that have nodes1\node1 or nodes2\node2 as intermediate nodes
filtered_backdoor_paths = [pth for pth in backdoor_paths if len(nodes12.intersection(pth[1:-1])) == 0]
paths.extend(filtered_backdoor_paths)
self.logger.debug("Backdoor paths: " + str(paths))
return paths
def is_blocked(self, path, conditioned_nodes):
"""Uses d-separation criteria to decide if conditioned_nodes block given path."""
blocked_by_conditioning = False
has_unconditioned_collider = False
for i in range(len(path) - 2):
if self._graph.has_edge(path[i], path[i + 1]) and self._graph.has_edge(
path[i + 2], path[i + 1]
): # collider
collider_descendants = nx.descendants(self._graph, path[i + 1])
if path[i + 1] not in conditioned_nodes and all(
cdesc not in conditioned_nodes for cdesc in collider_descendants
):
has_unconditioned_collider = True
else: # chain or fork
if path[i + 1] in conditioned_nodes:
blocked_by_conditioning = True
break
if blocked_by_conditioning:
return True
elif has_unconditioned_collider:
return True
else:
return False
def get_common_causes(self, nodes1, nodes2):
"""
Assume that nodes1 causes nodes2 (e.g., nodes1 are the treatments and nodes2 are the outcomes)
"""
# TODO Refactor to remove this from here and only implement this logic in causalIdentifier. Unnecessary assumption of nodes1 to be causing nodes2.
nodes1 = parse_state(nodes1)
nodes2 = parse_state(nodes2)
causes_1 = set()
causes_2 = set()
for node in nodes1:
causes_1 = causes_1.union(self.get_ancestors(node))
for node in nodes2:
# Cannot simply compute ancestors, since that will also include nodes1 and its parents (e.g. instruments)
parents_2 = self.get_parents(node)
for parent in parents_2:
if parent not in nodes1:
causes_2 = causes_2.union(
set(
[
parent,
]
)
)
causes_2 = causes_2.union(self.get_ancestors(parent))
return list(causes_1.intersection(causes_2))
def get_effect_modifiers(self, nodes1, nodes2):
modifiers = set()
for node in nodes2:
modifiers = modifiers.union(self.get_ancestors(node))
modifiers = modifiers.difference(nodes1)
for node in nodes1:
modifiers = modifiers.difference(self.get_ancestors(node))
# removing all mediators
for node1 in nodes1:
for node2 in nodes2:
all_directed_paths = nx.all_simple_paths(self._graph, node1, node2)
for path in all_directed_paths:
modifiers = modifiers.difference(path)
return list(modifiers)
def get_parents(self, node_name):
return set(self._graph.predecessors(node_name))
def get_ancestors(self, node_name, new_graph=None):
if new_graph is None:
graph = self._graph
else:
graph = new_graph
return set(nx.ancestors(graph, node_name))
def get_descendants(self, nodes):
descendants = set()
for node_name in nodes:
descendants = descendants.union(set(nx.descendants(self._graph, node_name)))
return descendants
def all_observed(self, node_names):
for node_name in node_names:
if self._graph.nodes[node_name]["observed"] != "yes":
return False
return True
def get_all_nodes(self, include_unobserved=True):
nodes = self._graph.nodes
if not include_unobserved:
nodes = set(self.filter_unobserved_variables(nodes))
return nodes
def filter_unobserved_variables(self, node_names):
observed_node_names = list()
for node_name in node_names:
if self._graph.nodes[node_name]["observed"] == "yes":
observed_node_names.append(node_name)
return observed_node_names
def get_instruments(self, treatment_nodes, outcome_nodes):
treatment_nodes = parse_state(treatment_nodes)
outcome_nodes = parse_state(outcome_nodes)
parents_treatment = set()
for node in treatment_nodes:
parents_treatment = parents_treatment.union(self.get_parents(node))
g_no_parents_treatment = self.do_surgery(treatment_nodes, remove_incoming_edges=True)
ancestors_outcome = set()
for node in outcome_nodes:
ancestors_outcome = ancestors_outcome.union(nx.ancestors(g_no_parents_treatment, node))
# [TODO: double check these work with multivariate implementation:]
# Exclusion
candidate_instruments = parents_treatment.difference(ancestors_outcome)
self.logger.debug("Candidate instruments after satisfying exclusion: %s", candidate_instruments)
# As-if-random setup
children_causes_outcome = [nx.descendants(g_no_parents_treatment, v) for v in ancestors_outcome]
children_causes_outcome = set([item for sublist in children_causes_outcome for item in sublist])
# As-if-random
instruments = candidate_instruments.difference(children_causes_outcome)
self.logger.debug("Candidate instruments after satisfying exclusion and as-if-random: %s", instruments)
return list(instruments)
def get_all_directed_paths(self, nodes1, nodes2):
"""Get all directed paths between sets of nodes.
Currently only supports singleton sets.
"""
node1 = nodes1[0]
node2 = nodes2[0]
# convert the outputted generator into a list
return [p for p in nx.all_simple_paths(self._graph, source=node1, target=node2)]
def has_directed_path(self, nodes1, nodes2):
"""Checks if there is any directed path between two sets of nodes.
Currently only supports singleton sets.
"""
# dpaths = self.get_all_directed_paths(nodes1, nodes2)
# return len(dpaths) > 0
return nx.has_path(self._graph, nodes1[0], nodes2[0])
def get_adjacency_matrix(self, *args, **kwargs):
"""
Get adjacency matrix from the networkx graph
"""
return nx.convert_matrix.to_numpy_matrix(self._graph, *args, **kwargs)
def check_valid_frontdoor_set(
self, nodes1, nodes2, candidate_nodes, frontdoor_paths=None, new_graph=None, dseparation_algo="default"
):
"""Check if valid the frontdoor variables for set of treatments, nodes1 to set of outcomes, nodes2."""
# Condition 1: node 1 ---> node 2 is intercepted by candidate_nodes
if dseparation_algo == "default":
if new_graph is None:
new_graph = self._graph
dseparated = nx.algorithms.d_separated(new_graph, set(nodes1), set(nodes2), set(candidate_nodes))
elif dseparation_algo == "naive":
if frontdoor_paths is None:
frontdoor_paths = self.get_all_directed_paths(nodes1, nodes2)
dseparated = all([self.is_blocked(path, candidate_nodes) for path in frontdoor_paths])
else:
raise ValueError(f"{dseparation_algo} method for d-separation not supported.")
return dseparated
def check_valid_mediation_set(self, nodes1, nodes2, candidate_nodes, mediation_paths=None):
"""Check if candidate nodes are valid mediators for set of treatments, nodes1 to set of outcomes, nodes2."""
if mediation_paths is None:
mediation_paths = self.get_all_directed_paths(nodes1, nodes2)
is_mediator = any([self.is_blocked(path, candidate_nodes) for path in mediation_paths])
return is_mediator
| import itertools
import logging
import re
import networkx as nx
from dowhy.utils.api import parse_state
from dowhy.utils.graph_operations import daggity_to_dot
class CausalGraph:
"""Class for creating and modifying the causal graph.
Accepts a graph string (or a text file) in gml format (preferred) and dot format. Graphviz-like attributes can be set for edges and nodes. E.g. style="dashed" as an edge attribute ensures that the edge is drawn with a dashed line.
If a graph string is not given, names of treatment, outcome, and confounders, instruments and effect modifiers (if any) can be provided to create the graph.
"""
def __init__(
self,
treatment_name,
outcome_name,
graph=None,
common_cause_names=None,
instrument_names=None,
effect_modifier_names=None,
mediator_names=None,
observed_node_names=None,
missing_nodes_as_confounders=False,
):
self.treatment_name = parse_state(treatment_name)
self.outcome_name = parse_state(outcome_name)
instrument_names = parse_state(instrument_names)
common_cause_names = parse_state(common_cause_names)
effect_modifier_names = parse_state(effect_modifier_names)
mediator_names = parse_state(mediator_names)
self.logger = logging.getLogger(__name__)
# re.sub only takes string parameter so the first if is to avoid error
# if the input is a text file, convert the contained data into string
if isinstance(graph, str) and re.match(r".*\.txt", str(graph)):
text_file = open(graph, "r")
graph = text_file.read()
text_file.close()
if isinstance(graph, str) and re.match(r"^dag", graph): # Convert daggity output to dot format
graph = daggity_to_dot(graph)
if isinstance(graph, str):
graph = graph.replace("\n", " ")
if graph is None:
self._graph = nx.DiGraph()
self._graph = self.build_graph(common_cause_names, instrument_names, effect_modifier_names, mediator_names)
elif re.match(r".*\.dot", graph):
# load dot file
try:
import pygraphviz as pgv
self._graph = nx.DiGraph(nx.drawing.nx_agraph.read_dot(graph))
except Exception as e:
self.logger.error("Pygraphviz cannot be loaded. " + str(e) + "\nTrying pydot...")
try:
import pydot
self._graph = nx.DiGraph(nx.drawing.nx_pydot.read_dot(graph))
except Exception as e:
self.logger.error("Error: Pydot cannot be loaded. " + str(e))
raise e
elif re.match(r".*\.gml", graph):
self._graph = nx.DiGraph(nx.read_gml(graph))
elif re.match(r".*graph\s*\{.*\}\s*", graph):
try:
import pygraphviz as pgv
self._graph = pgv.AGraph(graph, strict=True, directed=True)
self._graph = nx.drawing.nx_agraph.from_agraph(self._graph)
except Exception as e:
self.logger.error("Error: Pygraphviz cannot be loaded. " + str(e) + "\nTrying pydot ...")
try:
import pydot
P_list = pydot.graph_from_dot_data(graph)
self._graph = nx.drawing.nx_pydot.from_pydot(P_list[0])
except Exception as e:
self.logger.error("Error: Pydot cannot be loaded. " + str(e))
raise e
elif re.match(".*graph\s*\[.*\]\s*", graph):
self._graph = nx.DiGraph(nx.parse_gml(graph))
else:
self.logger.error("Error: Please provide graph (as string or text file) in dot or gml format.")
self.logger.error("Error: Incorrect graph format")
raise ValueError
if missing_nodes_as_confounders:
self._graph = self.add_missing_nodes_as_common_causes(observed_node_names)
# Adding node attributes
self._graph = self.add_node_attributes(observed_node_names)
def view_graph(self, layout="dot", size=(8, 6), file_name="causal_model"):
out_filename = "{}.png".format(file_name)
try:
import pygraphviz as pgv
agraph = nx.drawing.nx_agraph.to_agraph(self._graph)
agraph.graph_attr.update(size="{},{}!".format(size[0], size[0]))
agraph.draw(out_filename, format="png", prog=layout)
except:
self.logger.warning(
"Warning: Pygraphviz cannot be loaded. Check that graphviz and pygraphviz are installed."
)
self.logger.info("Using Matplotlib for plotting")
import matplotlib.pyplot as plt
plt.figure(figsize=size)
solid_edges = [(n1, n2) for n1, n2, e in self._graph.edges(data=True) if "style" not in e]
dashed_edges = [
(n1, n2) for n1, n2, e in self._graph.edges(data=True) if ("style" in e and e["style"] == "dashed")
]
plt.clf()
pos = nx.layout.shell_layout(self._graph)
nx.draw_networkx_nodes(self._graph, pos, node_color="yellow", node_size=400)
nx.draw_networkx_edges(self._graph, pos, edgelist=solid_edges, arrowstyle="-|>", arrowsize=12)
nx.draw_networkx_edges(
self._graph, pos, edgelist=dashed_edges, arrowstyle="-|>", style="dashed", arrowsize=12
)
labels = nx.draw_networkx_labels(self._graph, pos)
plt.axis("off")
plt.savefig(out_filename)
plt.draw()
def build_graph(self, common_cause_names, instrument_names, effect_modifier_names, mediator_names):
"""Creates nodes and edges based on variable names and their semantics.
        Currently only considers the graphical representation of "direct" effect modifiers. Thus, all effect modifiers are assumed to be "direct" unless otherwise expressed using a graph. Based on the taxonomy of effect modifiers by VanderWeele and Robins: "Four types of effect modification: A classification based on directed acyclic graphs. Epidemiology. 2007."
"""
for treatment in self.treatment_name:
self._graph.add_node(treatment, observed="yes", penwidth=2)
for outcome in self.outcome_name:
self._graph.add_node(outcome, observed="yes", penwidth=2)
for treatment, outcome in itertools.product(self.treatment_name, self.outcome_name):
# adding penwidth to make the edge bold
self._graph.add_edge(treatment, outcome, penwidth=2)
# Adding common causes
if common_cause_names is not None:
for node_name in common_cause_names:
for treatment, outcome in itertools.product(self.treatment_name, self.outcome_name):
self._graph.add_node(node_name, observed="yes")
self._graph.add_edge(node_name, treatment)
self._graph.add_edge(node_name, outcome)
# Adding instruments
if instrument_names:
if type(instrument_names[0]) != tuple:
if len(self.treatment_name) > 1:
self.logger.info("Assuming Instrument points to all treatments! Use tuples for more granularity.")
for instrument, treatment in itertools.product(instrument_names, self.treatment_name):
self._graph.add_node(instrument, observed="yes")
self._graph.add_edge(instrument, treatment)
else:
                # instrument_names holds (instrument, treatment) tuples here, so unpack the pairs directly
                for instrument, treatment in instrument_names:
self._graph.add_node(instrument, observed="yes")
self._graph.add_edge(instrument, treatment)
# Adding effect modifiers
if effect_modifier_names is not None:
for node_name in effect_modifier_names:
if node_name not in common_cause_names:
for outcome in self.outcome_name:
self._graph.add_node(node_name, observed="yes")
# Assuming the simple form of effect modifier
# that directly causes the outcome.
self._graph.add_edge(node_name, outcome)
# self._graph.add_edge(node_name, outcome, style = "dotted", headport="s", tailport="n")
# self._graph.add_edge(outcome, node_name, style = "dotted", headport="n", tailport="s") # TODO make the ports more general so that they apply not just to top-bottom node configurations
if mediator_names is not None:
for node_name in mediator_names:
for treatment, outcome in itertools.product(self.treatment_name, self.outcome_name):
self._graph.add_node(node_name, observed="yes")
self._graph.add_edge(treatment, node_name)
self._graph.add_edge(node_name, outcome)
return self._graph
def add_node_attributes(self, observed_node_names):
for node_name in self._graph:
if node_name in observed_node_names:
self._graph.nodes[node_name]["observed"] = "yes"
else:
self._graph.nodes[node_name]["observed"] = "no"
return self._graph
def add_missing_nodes_as_common_causes(self, observed_node_names):
# Adding columns in the dataframe as confounders that were not in the graph
for node_name in observed_node_names:
if node_name not in self._graph:
self._graph.add_node(node_name, observed="yes")
for treatment_outcome_node in self.treatment_name + self.outcome_name:
self._graph.add_edge(node_name, treatment_outcome_node)
return self._graph
def add_unobserved_common_cause(self, observed_node_names, color="gray"):
# Adding unobserved confounders
current_common_causes = self.get_common_causes(self.treatment_name, self.outcome_name)
create_new_common_cause = True
for node_name in current_common_causes:
if self._graph.nodes[node_name]["observed"] == "no":
create_new_common_cause = False
if create_new_common_cause:
uc_label = "Unobserved Confounders"
self._graph.add_node("U", label=uc_label, observed="no", color=color, style="filled", fillcolor=color)
for node in self.treatment_name + self.outcome_name:
self._graph.add_edge("U", node)
self.logger.info(
'If this is observed data (not from a randomized experiment), there might always be missing confounders. Adding a node named "Unobserved Confounders" to reflect this.'
)
return self._graph
def get_unconfounded_observed_subgraph(self):
observed_nodes = [node for node in self._graph.nodes() if self._graph.nodes[node]["observed"] == "yes"]
return self._graph.subgraph(observed_nodes)
def do_surgery(
self,
node_names,
remove_outgoing_edges=False,
remove_incoming_edges=False,
target_node_names=None,
remove_only_direct_edges_to_target=False,
):
"""Method to create a new graph based on the concept of do-surgery.
:param node_names: focal nodes for the surgery
:param remove_outgoing_edges: whether to remove outgoing edges from the focal nodes
:param remove_incoming_edges: whether to remove incoming edges to the focal nodes
:param target_node_names: target nodes (optional) for the surgery, only used when remove_only_direct_edges_to_target is True
:param remove_only_direct_edges_to_target: whether to remove only the direct edges from focal nodes to the target nodes
:returns: a new networkx graph after the specified removal of edges
"""
node_names = parse_state(node_names)
new_graph = self._graph.copy()
for node_name in node_names:
if remove_outgoing_edges:
if remove_only_direct_edges_to_target:
new_graph.remove_edges_from([(node_name, v) for v in target_node_names])
else:
children = new_graph.successors(node_name)
edges_bunch = [(node_name, child) for child in children]
new_graph.remove_edges_from(edges_bunch)
if remove_incoming_edges:
# removal of only direct edges wrt a target is not implemented for incoming edges
parents = new_graph.predecessors(node_name)
edges_bunch = [(parent, node_name) for parent in parents]
new_graph.remove_edges_from(edges_bunch)
return new_graph
def get_causes(self, nodes, remove_edges=None):
nodes = parse_state(nodes)
new_graph = None
if remove_edges is not None:
new_graph = self._graph.copy() # caution: shallow copy of the attributes
sources = parse_state(remove_edges["sources"])
targets = parse_state(remove_edges["targets"])
for s in sources:
for t in targets:
new_graph.remove_edge(s, t)
causes = set()
for v in nodes:
causes = causes.union(self.get_ancestors(v, new_graph=new_graph))
return causes
def check_dseparation(self, nodes1, nodes2, nodes3, new_graph=None, dseparation_algo="default"):
if dseparation_algo == "default":
if new_graph is None:
new_graph = self._graph
dseparated = nx.algorithms.d_separated(new_graph, set(nodes1), set(nodes2), set(nodes3))
else:
raise ValueError(f"{dseparation_algo} method for d-separation not supported.")
return dseparated
def check_valid_backdoor_set(
self, nodes1, nodes2, nodes3, backdoor_paths=None, new_graph=None, dseparation_algo="default"
):
"""Assume that the first parameter (nodes1) is the treatment,
the second is the outcome, and the third is the candidate backdoor set
"""
# also return the number of backdoor paths blocked by observed nodes
if dseparation_algo == "default":
if new_graph is None:
# Assume that nodes1 is the treatment
new_graph = self.do_surgery(nodes1, remove_outgoing_edges=True)
dseparated = nx.algorithms.d_separated(new_graph, set(nodes1), set(nodes2), set(nodes3))
elif dseparation_algo == "naive":
# ignores new_graph parameter, always uses self._graph
if backdoor_paths is None:
backdoor_paths = self.get_backdoor_paths(nodes1, nodes2)
dseparated = all([self.is_blocked(path, nodes3) for path in backdoor_paths])
else:
raise ValueError(f"{dseparation_algo} method for d-separation not supported.")
return {"is_dseparated": dseparated}
def get_backdoor_paths(self, nodes1, nodes2):
paths = []
undirected_graph = self._graph.to_undirected()
nodes12 = set(nodes1).union(nodes2)
for node1 in nodes1:
for node2 in nodes2:
backdoor_paths = [
pth
for pth in nx.all_simple_paths(undirected_graph, source=node1, target=node2)
if self._graph.has_edge(pth[1], pth[0])
]
# remove paths that have nodes1\node1 or nodes2\node2 as intermediate nodes
filtered_backdoor_paths = [pth for pth in backdoor_paths if len(nodes12.intersection(pth[1:-1])) == 0]
paths.extend(filtered_backdoor_paths)
self.logger.debug("Backdoor paths: " + str(paths))
return paths
def is_blocked(self, path, conditioned_nodes):
"""Uses d-separation criteria to decide if conditioned_nodes block given path."""
blocked_by_conditioning = False
has_unconditioned_collider = False
for i in range(len(path) - 2):
if self._graph.has_edge(path[i], path[i + 1]) and self._graph.has_edge(
path[i + 2], path[i + 1]
): # collider
collider_descendants = nx.descendants(self._graph, path[i + 1])
if path[i + 1] not in conditioned_nodes and all(
cdesc not in conditioned_nodes for cdesc in collider_descendants
):
has_unconditioned_collider = True
else: # chain or fork
if path[i + 1] in conditioned_nodes:
blocked_by_conditioning = True
break
if blocked_by_conditioning:
return True
elif has_unconditioned_collider:
return True
else:
return False
def get_common_causes(self, nodes1, nodes2):
"""
Assume that nodes1 causes nodes2 (e.g., nodes1 are the treatments and nodes2 are the outcomes)
"""
# TODO Refactor to remove this from here and only implement this logic in causalIdentifier. Unnecessary assumption of nodes1 to be causing nodes2.
nodes1 = parse_state(nodes1)
nodes2 = parse_state(nodes2)
causes_1 = set()
causes_2 = set()
for node in nodes1:
causes_1 = causes_1.union(self.get_ancestors(node))
for node in nodes2:
# Cannot simply compute ancestors, since that will also include nodes1 and its parents (e.g. instruments)
parents_2 = self.get_parents(node)
for parent in parents_2:
if parent not in nodes1:
causes_2 = causes_2.union(
set(
[
parent,
]
)
)
causes_2 = causes_2.union(self.get_ancestors(parent))
return list(causes_1.intersection(causes_2))
def get_effect_modifiers(self, nodes1, nodes2):
modifiers = set()
for node in nodes2:
modifiers = modifiers.union(self.get_ancestors(node))
modifiers = modifiers.difference(nodes1)
for node in nodes1:
modifiers = modifiers.difference(self.get_ancestors(node))
# removing all mediators
for node1 in nodes1:
for node2 in nodes2:
all_directed_paths = nx.all_simple_paths(self._graph, node1, node2)
for path in all_directed_paths:
modifiers = modifiers.difference(path)
return list(modifiers)
def get_parents(self, node_name):
return set(self._graph.predecessors(node_name))
def get_ancestors(self, node_name, new_graph=None):
if new_graph is None:
graph = self._graph
else:
graph = new_graph
return set(nx.ancestors(graph, node_name))
def get_descendants(self, nodes):
descendants = set()
for node_name in nodes:
descendants = descendants.union(set(nx.descendants(self._graph, node_name)))
return descendants
def all_observed(self, node_names):
for node_name in node_names:
if self._graph.nodes[node_name]["observed"] != "yes":
return False
return True
def get_all_nodes(self, include_unobserved=True):
nodes = self._graph.nodes
if not include_unobserved:
nodes = set(self.filter_unobserved_variables(nodes))
return nodes
def filter_unobserved_variables(self, node_names):
observed_node_names = list()
for node_name in node_names:
if self._graph.nodes[node_name]["observed"] == "yes":
observed_node_names.append(node_name)
return observed_node_names
def get_instruments(self, treatment_nodes, outcome_nodes):
treatment_nodes = parse_state(treatment_nodes)
outcome_nodes = parse_state(outcome_nodes)
parents_treatment = set()
for node in treatment_nodes:
parents_treatment = parents_treatment.union(self.get_parents(node))
g_no_parents_treatment = self.do_surgery(treatment_nodes, remove_incoming_edges=True)
ancestors_outcome = set()
for node in outcome_nodes:
ancestors_outcome = ancestors_outcome.union(nx.ancestors(g_no_parents_treatment, node))
# [TODO: double check these work with multivariate implementation:]
# Exclusion
candidate_instruments = parents_treatment.difference(ancestors_outcome)
self.logger.debug("Candidate instruments after satisfying exclusion: %s", candidate_instruments)
# As-if-random setup
children_causes_outcome = [nx.descendants(g_no_parents_treatment, v) for v in ancestors_outcome]
children_causes_outcome = set([item for sublist in children_causes_outcome for item in sublist])
# As-if-random
instruments = candidate_instruments.difference(children_causes_outcome)
self.logger.debug("Candidate instruments after satisfying exclusion and as-if-random: %s", instruments)
return list(instruments)
def get_all_directed_paths(self, nodes1, nodes2):
"""Get all directed paths between sets of nodes.
Currently only supports singleton sets.
"""
node1 = nodes1[0]
node2 = nodes2[0]
# convert the outputted generator into a list
return [p for p in nx.all_simple_paths(self._graph, source=node1, target=node2)]
def has_directed_path(self, nodes1, nodes2):
"""Checks if there is any directed path between two sets of nodes.
Currently only supports singleton sets.
"""
# dpaths = self.get_all_directed_paths(nodes1, nodes2)
# return len(dpaths) > 0
return nx.has_path(self._graph, nodes1[0], nodes2[0])
def get_adjacency_matrix(self, *args, **kwargs):
"""
Get adjacency matrix from the networkx graph
"""
return nx.convert_matrix.to_numpy_matrix(self._graph, *args, **kwargs)
def check_valid_frontdoor_set(
self, nodes1, nodes2, candidate_nodes, frontdoor_paths=None, new_graph=None, dseparation_algo="default"
):
"""Check if valid the frontdoor variables for set of treatments, nodes1 to set of outcomes, nodes2."""
# Condition 1: node 1 ---> node 2 is intercepted by candidate_nodes
if dseparation_algo == "default":
if new_graph is None:
new_graph = self._graph
dseparated = nx.algorithms.d_separated(new_graph, set(nodes1), set(nodes2), set(candidate_nodes))
elif dseparation_algo == "naive":
if frontdoor_paths is None:
frontdoor_paths = self.get_all_directed_paths(nodes1, nodes2)
dseparated = all([self.is_blocked(path, candidate_nodes) for path in frontdoor_paths])
else:
raise ValueError(f"{dseparation_algo} method for d-separation not supported.")
return dseparated
def check_valid_mediation_set(self, nodes1, nodes2, candidate_nodes, mediation_paths=None):
"""Check if candidate nodes are valid mediators for set of treatments, nodes1 to set of outcomes, nodes2."""
if mediation_paths is None:
mediation_paths = self.get_all_directed_paths(nodes1, nodes2)
is_mediator = any([self.is_blocked(path, candidate_nodes) for path in mediation_paths])
return is_mediator
| amit-sharma | 2f2463be424093d720ebfa9cc625a06160a44cfe | d67da0e57eda5cc47d180f3de240182dde4aacad | instead of renaming target_node_names, maybe "remove_only_direct_edges" could be renamed to "remove_edges_to_targets"? | emrekiciman | 333 |
py-why/dowhy | 619 | Add support for controlled direct treatment effect | This PR adds a new estimand_type="nonparametric-cde" and associated identification method to compute the controlled direct effect.
The estimation can still be handled using standard backdoor estimators since the estimand is always a conditional expectation. | null | 2022-08-31 17:48:43+00:00 | 2022-09-02 08:37:56+00:00 | dowhy/causal_graph.py | import itertools
import logging
import re
import networkx as nx
from dowhy.utils.api import parse_state
from dowhy.utils.graph_operations import daggity_to_dot
class CausalGraph:
"""Class for creating and modifying the causal graph.
Accepts a graph string (or a text file) in gml format (preferred) and dot format. Graphviz-like attributes can be set for edges and nodes. E.g. style="dashed" as an edge attribute ensures that the edge is drawn with a dashed line.
If a graph string is not given, names of treatment, outcome, and confounders, instruments and effect modifiers (if any) can be provided to create the graph.
"""
def __init__(
self,
treatment_name,
outcome_name,
graph=None,
common_cause_names=None,
instrument_names=None,
effect_modifier_names=None,
mediator_names=None,
observed_node_names=None,
missing_nodes_as_confounders=False,
):
self.treatment_name = parse_state(treatment_name)
self.outcome_name = parse_state(outcome_name)
instrument_names = parse_state(instrument_names)
common_cause_names = parse_state(common_cause_names)
effect_modifier_names = parse_state(effect_modifier_names)
mediator_names = parse_state(mediator_names)
self.logger = logging.getLogger(__name__)
# re.sub only takes string parameter so the first if is to avoid error
# if the input is a text file, convert the contained data into string
if isinstance(graph, str) and re.match(r".*\.txt", str(graph)):
text_file = open(graph, "r")
graph = text_file.read()
text_file.close()
if isinstance(graph, str) and re.match(r"^dag", graph): # Convert daggity output to dot format
graph = daggity_to_dot(graph)
if isinstance(graph, str):
graph = graph.replace("\n", " ")
if graph is None:
self._graph = nx.DiGraph()
self._graph = self.build_graph(common_cause_names, instrument_names, effect_modifier_names, mediator_names)
elif re.match(r".*\.dot", graph):
# load dot file
try:
import pygraphviz as pgv
self._graph = nx.DiGraph(nx.drawing.nx_agraph.read_dot(graph))
except Exception as e:
self.logger.error("Pygraphviz cannot be loaded. " + str(e) + "\nTrying pydot...")
try:
import pydot
self._graph = nx.DiGraph(nx.drawing.nx_pydot.read_dot(graph))
except Exception as e:
self.logger.error("Error: Pydot cannot be loaded. " + str(e))
raise e
elif re.match(r".*\.gml", graph):
self._graph = nx.DiGraph(nx.read_gml(graph))
elif re.match(r".*graph\s*\{.*\}\s*", graph):
try:
import pygraphviz as pgv
self._graph = pgv.AGraph(graph, strict=True, directed=True)
self._graph = nx.drawing.nx_agraph.from_agraph(self._graph)
except Exception as e:
self.logger.error("Error: Pygraphviz cannot be loaded. " + str(e) + "\nTrying pydot ...")
try:
import pydot
P_list = pydot.graph_from_dot_data(graph)
self._graph = nx.drawing.nx_pydot.from_pydot(P_list[0])
except Exception as e:
self.logger.error("Error: Pydot cannot be loaded. " + str(e))
raise e
elif re.match(".*graph\s*\[.*\]\s*", graph):
self._graph = nx.DiGraph(nx.parse_gml(graph))
else:
self.logger.error("Error: Please provide graph (as string or text file) in dot or gml format.")
self.logger.error("Error: Incorrect graph format")
raise ValueError
if missing_nodes_as_confounders:
self._graph = self.add_missing_nodes_as_common_causes(observed_node_names)
# Adding node attributes
self._graph = self.add_node_attributes(observed_node_names)
def view_graph(self, layout="dot", size=(8, 6), file_name="causal_model"):
out_filename = "{}.png".format(file_name)
try:
import pygraphviz as pgv
agraph = nx.drawing.nx_agraph.to_agraph(self._graph)
agraph.graph_attr.update(size="{},{}!".format(size[0], size[0]))
agraph.draw(out_filename, format="png", prog=layout)
except:
self.logger.warning(
"Warning: Pygraphviz cannot be loaded. Check that graphviz and pygraphviz are installed."
)
self.logger.info("Using Matplotlib for plotting")
import matplotlib.pyplot as plt
plt.figure(figsize=size)
solid_edges = [(n1, n2) for n1, n2, e in self._graph.edges(data=True) if "style" not in e]
dashed_edges = [
(n1, n2) for n1, n2, e in self._graph.edges(data=True) if ("style" in e and e["style"] == "dashed")
]
plt.clf()
pos = nx.layout.shell_layout(self._graph)
nx.draw_networkx_nodes(self._graph, pos, node_color="yellow", node_size=400)
nx.draw_networkx_edges(self._graph, pos, edgelist=solid_edges, arrowstyle="-|>", arrowsize=12)
nx.draw_networkx_edges(
self._graph, pos, edgelist=dashed_edges, arrowstyle="-|>", style="dashed", arrowsize=12
)
labels = nx.draw_networkx_labels(self._graph, pos)
plt.axis("off")
plt.savefig(out_filename)
plt.draw()
def build_graph(self, common_cause_names, instrument_names, effect_modifier_names, mediator_names):
"""Creates nodes and edges based on variable names and their semantics.
        Currently only considers the graphical representation of "direct" effect modifiers. Thus, all effect modifiers are assumed to be "direct" unless otherwise expressed using a graph. Based on the taxonomy of effect modifiers by VanderWeele and Robins: "Four types of effect modification: A classification based on directed acyclic graphs. Epidemiology. 2007."
"""
for treatment in self.treatment_name:
self._graph.add_node(treatment, observed="yes", penwidth=2)
for outcome in self.outcome_name:
self._graph.add_node(outcome, observed="yes", penwidth=2)
for treatment, outcome in itertools.product(self.treatment_name, self.outcome_name):
# adding penwidth to make the edge bold
self._graph.add_edge(treatment, outcome, penwidth=2)
# Adding common causes
if common_cause_names is not None:
for node_name in common_cause_names:
for treatment, outcome in itertools.product(self.treatment_name, self.outcome_name):
self._graph.add_node(node_name, observed="yes")
self._graph.add_edge(node_name, treatment)
self._graph.add_edge(node_name, outcome)
# Adding instruments
if instrument_names:
if type(instrument_names[0]) != tuple:
if len(self.treatment_name) > 1:
self.logger.info("Assuming Instrument points to all treatments! Use tuples for more granularity.")
for instrument, treatment in itertools.product(instrument_names, self.treatment_name):
self._graph.add_node(instrument, observed="yes")
self._graph.add_edge(instrument, treatment)
else:
                for instrument, treatment in instrument_names:  # each entry is already an (instrument, treatment) pair
self._graph.add_node(instrument, observed="yes")
self._graph.add_edge(instrument, treatment)
# Adding effect modifiers
if effect_modifier_names is not None:
for node_name in effect_modifier_names:
if node_name not in common_cause_names:
for outcome in self.outcome_name:
self._graph.add_node(node_name, observed="yes")
# Assuming the simple form of effect modifier
# that directly causes the outcome.
self._graph.add_edge(node_name, outcome)
# self._graph.add_edge(node_name, outcome, style = "dotted", headport="s", tailport="n")
# self._graph.add_edge(outcome, node_name, style = "dotted", headport="n", tailport="s") # TODO make the ports more general so that they apply not just to top-bottom node configurations
if mediator_names is not None:
for node_name in mediator_names:
for treatment, outcome in itertools.product(self.treatment_name, self.outcome_name):
self._graph.add_node(node_name, observed="yes")
self._graph.add_edge(treatment, node_name)
self._graph.add_edge(node_name, outcome)
return self._graph
def add_node_attributes(self, observed_node_names):
for node_name in self._graph:
if node_name in observed_node_names:
self._graph.nodes[node_name]["observed"] = "yes"
else:
self._graph.nodes[node_name]["observed"] = "no"
return self._graph
def add_missing_nodes_as_common_causes(self, observed_node_names):
# Adding columns in the dataframe as confounders that were not in the graph
for node_name in observed_node_names:
if node_name not in self._graph:
self._graph.add_node(node_name, observed="yes")
for treatment_outcome_node in self.treatment_name + self.outcome_name:
self._graph.add_edge(node_name, treatment_outcome_node)
return self._graph
def add_unobserved_common_cause(self, observed_node_names, color="gray"):
# Adding unobserved confounders
current_common_causes = self.get_common_causes(self.treatment_name, self.outcome_name)
create_new_common_cause = True
for node_name in current_common_causes:
if self._graph.nodes[node_name]["observed"] == "no":
create_new_common_cause = False
if create_new_common_cause:
uc_label = "Unobserved Confounders"
self._graph.add_node("U", label=uc_label, observed="no", color=color, style="filled", fillcolor=color)
for node in self.treatment_name + self.outcome_name:
self._graph.add_edge("U", node)
self.logger.info(
'If this is observed data (not from a randomized experiment), there might always be missing confounders. Adding a node named "Unobserved Confounders" to reflect this.'
)
return self._graph
def get_unconfounded_observed_subgraph(self):
observed_nodes = [node for node in self._graph.nodes() if self._graph.nodes[node]["observed"] == "yes"]
return self._graph.subgraph(observed_nodes)
def do_surgery(self, node_names, remove_outgoing_edges=False, remove_incoming_edges=False):
node_names = parse_state(node_names)
new_graph = self._graph.copy()
for node_name in node_names:
if remove_outgoing_edges:
children = new_graph.successors(node_name)
edges_bunch = [(node_name, child) for child in children]
new_graph.remove_edges_from(edges_bunch)
if remove_incoming_edges:
parents = new_graph.predecessors(node_name)
edges_bunch = [(parent, node_name) for parent in parents]
new_graph.remove_edges_from(edges_bunch)
return new_graph
def get_causes(self, nodes, remove_edges=None):
nodes = parse_state(nodes)
new_graph = None
if remove_edges is not None:
new_graph = self._graph.copy() # caution: shallow copy of the attributes
sources = parse_state(remove_edges["sources"])
targets = parse_state(remove_edges["targets"])
for s in sources:
for t in targets:
new_graph.remove_edge(s, t)
causes = set()
for v in nodes:
causes = causes.union(self.get_ancestors(v, new_graph=new_graph))
return causes
def check_dseparation(self, nodes1, nodes2, nodes3, new_graph=None, dseparation_algo="default"):
if dseparation_algo == "default":
if new_graph is None:
new_graph = self._graph
dseparated = nx.algorithms.d_separated(new_graph, set(nodes1), set(nodes2), set(nodes3))
else:
raise ValueError(f"{dseparation_algo} method for d-separation not supported.")
return dseparated
def check_valid_backdoor_set(
self, nodes1, nodes2, nodes3, backdoor_paths=None, new_graph=None, dseparation_algo="default"
):
"""Assume that the first parameter (nodes1) is the treatment,
the second is the outcome, and the third is the candidate backdoor set
"""
# also return the number of backdoor paths blocked by observed nodes
if dseparation_algo == "default":
if new_graph is None:
# Assume that nodes1 is the treatment
new_graph = self.do_surgery(nodes1, remove_outgoing_edges=True)
dseparated = nx.algorithms.d_separated(new_graph, set(nodes1), set(nodes2), set(nodes3))
elif dseparation_algo == "naive":
# ignores new_graph parameter, always uses self._graph
if backdoor_paths is None:
backdoor_paths = self.get_backdoor_paths(nodes1, nodes2)
dseparated = all([self.is_blocked(path, nodes3) for path in backdoor_paths])
else:
raise ValueError(f"{dseparation_algo} method for d-separation not supported.")
return {"is_dseparated": dseparated}
def get_backdoor_paths(self, nodes1, nodes2):
paths = []
undirected_graph = self._graph.to_undirected()
nodes12 = set(nodes1).union(nodes2)
for node1 in nodes1:
for node2 in nodes2:
backdoor_paths = [
pth
for pth in nx.all_simple_paths(undirected_graph, source=node1, target=node2)
if self._graph.has_edge(pth[1], pth[0])
]
# remove paths that have nodes1\node1 or nodes2\node2 as intermediate nodes
filtered_backdoor_paths = [pth for pth in backdoor_paths if len(nodes12.intersection(pth[1:-1])) == 0]
paths.extend(filtered_backdoor_paths)
self.logger.debug("Backdoor paths: " + str(paths))
return paths
def is_blocked(self, path, conditioned_nodes):
"""Uses d-separation criteria to decide if conditioned_nodes block given path."""
blocked_by_conditioning = False
has_unconditioned_collider = False
for i in range(len(path) - 2):
if self._graph.has_edge(path[i], path[i + 1]) and self._graph.has_edge(
path[i + 2], path[i + 1]
): # collider
collider_descendants = nx.descendants(self._graph, path[i + 1])
if path[i + 1] not in conditioned_nodes and all(
cdesc not in conditioned_nodes for cdesc in collider_descendants
):
has_unconditioned_collider = True
else: # chain or fork
if path[i + 1] in conditioned_nodes:
blocked_by_conditioning = True
break
if blocked_by_conditioning:
return True
elif has_unconditioned_collider:
return True
else:
return False
def get_common_causes(self, nodes1, nodes2):
"""
Assume that nodes1 causes nodes2 (e.g., nodes1 are the treatments and nodes2 are the outcomes)
"""
# TODO Refactor to remove this from here and only implement this logic in causalIdentifier. Unnecessary assumption of nodes1 to be causing nodes2.
nodes1 = parse_state(nodes1)
nodes2 = parse_state(nodes2)
causes_1 = set()
causes_2 = set()
for node in nodes1:
causes_1 = causes_1.union(self.get_ancestors(node))
for node in nodes2:
# Cannot simply compute ancestors, since that will also include nodes1 and its parents (e.g. instruments)
parents_2 = self.get_parents(node)
for parent in parents_2:
if parent not in nodes1:
causes_2 = causes_2.union(
set(
[
parent,
]
)
)
causes_2 = causes_2.union(self.get_ancestors(parent))
return list(causes_1.intersection(causes_2))
def get_effect_modifiers(self, nodes1, nodes2):
modifiers = set()
for node in nodes2:
modifiers = modifiers.union(self.get_ancestors(node))
modifiers = modifiers.difference(nodes1)
for node in nodes1:
modifiers = modifiers.difference(self.get_ancestors(node))
# removing all mediators
for node1 in nodes1:
for node2 in nodes2:
all_directed_paths = nx.all_simple_paths(self._graph, node1, node2)
for path in all_directed_paths:
modifiers = modifiers.difference(path)
return list(modifiers)
def get_parents(self, node_name):
return set(self._graph.predecessors(node_name))
def get_ancestors(self, node_name, new_graph=None):
if new_graph is None:
graph = self._graph
else:
graph = new_graph
return set(nx.ancestors(graph, node_name))
def get_descendants(self, nodes):
descendants = set()
for node_name in nodes:
descendants = descendants.union(set(nx.descendants(self._graph, node_name)))
return descendants
def all_observed(self, node_names):
for node_name in node_names:
if self._graph.nodes[node_name]["observed"] != "yes":
return False
return True
def get_all_nodes(self, include_unobserved=True):
nodes = self._graph.nodes
if not include_unobserved:
nodes = set(self.filter_unobserved_variables(nodes))
return nodes
def filter_unobserved_variables(self, node_names):
observed_node_names = list()
for node_name in node_names:
if self._graph.nodes[node_name]["observed"] == "yes":
observed_node_names.append(node_name)
return observed_node_names
def get_instruments(self, treatment_nodes, outcome_nodes):
treatment_nodes = parse_state(treatment_nodes)
outcome_nodes = parse_state(outcome_nodes)
parents_treatment = set()
for node in treatment_nodes:
parents_treatment = parents_treatment.union(self.get_parents(node))
g_no_parents_treatment = self.do_surgery(treatment_nodes, remove_incoming_edges=True)
ancestors_outcome = set()
for node in outcome_nodes:
ancestors_outcome = ancestors_outcome.union(nx.ancestors(g_no_parents_treatment, node))
# [TODO: double check these work with multivariate implementation:]
# Exclusion
candidate_instruments = parents_treatment.difference(ancestors_outcome)
self.logger.debug("Candidate instruments after satisfying exclusion: %s", candidate_instruments)
# As-if-random setup
children_causes_outcome = [nx.descendants(g_no_parents_treatment, v) for v in ancestors_outcome]
children_causes_outcome = set([item for sublist in children_causes_outcome for item in sublist])
# As-if-random
instruments = candidate_instruments.difference(children_causes_outcome)
self.logger.debug("Candidate instruments after satisfying exclusion and as-if-random: %s", instruments)
return list(instruments)
def get_all_directed_paths(self, nodes1, nodes2):
"""Get all directed paths between sets of nodes.
Currently only supports singleton sets.
"""
node1 = nodes1[0]
node2 = nodes2[0]
# convert the outputted generator into a list
return [p for p in nx.all_simple_paths(self._graph, source=node1, target=node2)]
def has_directed_path(self, nodes1, nodes2):
"""Checks if there is any directed path between two sets of nodes.
Currently only supports singleton sets.
"""
# dpaths = self.get_all_directed_paths(nodes1, nodes2)
# return len(dpaths) > 0
return nx.has_path(self._graph, nodes1[0], nodes2[0])
def get_adjacency_matrix(self, *args, **kwargs):
"""
Get adjacency matrix from the networkx graph
"""
return nx.convert_matrix.to_numpy_matrix(self._graph, *args, **kwargs)
def check_valid_frontdoor_set(
self, nodes1, nodes2, candidate_nodes, frontdoor_paths=None, new_graph=None, dseparation_algo="default"
):
"""Check if valid the frontdoor variables for set of treatments, nodes1 to set of outcomes, nodes2."""
# Condition 1: node 1 ---> node 2 is intercepted by candidate_nodes
if dseparation_algo == "default":
if new_graph is None:
new_graph = self._graph
dseparated = nx.algorithms.d_separated(new_graph, set(nodes1), set(nodes2), set(candidate_nodes))
elif dseparation_algo == "naive":
if frontdoor_paths is None:
frontdoor_paths = self.get_all_directed_paths(nodes1, nodes2)
dseparated = all([self.is_blocked(path, candidate_nodes) for path in frontdoor_paths])
else:
raise ValueError(f"{dseparation_algo} method for d-separation not supported.")
return dseparated
def check_valid_mediation_set(self, nodes1, nodes2, candidate_nodes, mediation_paths=None):
"""Check if candidate nodes are valid mediators for set of treatments, nodes1 to set of outcomes, nodes2."""
if mediation_paths is None:
mediation_paths = self.get_all_directed_paths(nodes1, nodes2)
is_mediator = any([self.is_blocked(path, candidate_nodes) for path in mediation_paths])
return is_mediator
| import itertools
import logging
import re
import networkx as nx
from dowhy.utils.api import parse_state
from dowhy.utils.graph_operations import daggity_to_dot
class CausalGraph:
"""Class for creating and modifying the causal graph.
Accepts a graph string (or a text file) in gml format (preferred) and dot format. Graphviz-like attributes can be set for edges and nodes. E.g. style="dashed" as an edge attribute ensures that the edge is drawn with a dashed line.
If a graph string is not given, names of treatment, outcome, and confounders, instruments and effect modifiers (if any) can be provided to create the graph.
"""
def __init__(
self,
treatment_name,
outcome_name,
graph=None,
common_cause_names=None,
instrument_names=None,
effect_modifier_names=None,
mediator_names=None,
observed_node_names=None,
missing_nodes_as_confounders=False,
):
self.treatment_name = parse_state(treatment_name)
self.outcome_name = parse_state(outcome_name)
instrument_names = parse_state(instrument_names)
common_cause_names = parse_state(common_cause_names)
effect_modifier_names = parse_state(effect_modifier_names)
mediator_names = parse_state(mediator_names)
self.logger = logging.getLogger(__name__)
# re.sub only takes string parameter so the first if is to avoid error
# if the input is a text file, convert the contained data into string
if isinstance(graph, str) and re.match(r".*\.txt", str(graph)):
text_file = open(graph, "r")
graph = text_file.read()
text_file.close()
if isinstance(graph, str) and re.match(r"^dag", graph): # Convert daggity output to dot format
graph = daggity_to_dot(graph)
if isinstance(graph, str):
graph = graph.replace("\n", " ")
if graph is None:
self._graph = nx.DiGraph()
self._graph = self.build_graph(common_cause_names, instrument_names, effect_modifier_names, mediator_names)
elif re.match(r".*\.dot", graph):
# load dot file
try:
import pygraphviz as pgv
self._graph = nx.DiGraph(nx.drawing.nx_agraph.read_dot(graph))
except Exception as e:
self.logger.error("Pygraphviz cannot be loaded. " + str(e) + "\nTrying pydot...")
try:
import pydot
self._graph = nx.DiGraph(nx.drawing.nx_pydot.read_dot(graph))
except Exception as e:
self.logger.error("Error: Pydot cannot be loaded. " + str(e))
raise e
elif re.match(r".*\.gml", graph):
self._graph = nx.DiGraph(nx.read_gml(graph))
elif re.match(r".*graph\s*\{.*\}\s*", graph):
try:
import pygraphviz as pgv
self._graph = pgv.AGraph(graph, strict=True, directed=True)
self._graph = nx.drawing.nx_agraph.from_agraph(self._graph)
except Exception as e:
self.logger.error("Error: Pygraphviz cannot be loaded. " + str(e) + "\nTrying pydot ...")
try:
import pydot
P_list = pydot.graph_from_dot_data(graph)
self._graph = nx.drawing.nx_pydot.from_pydot(P_list[0])
except Exception as e:
self.logger.error("Error: Pydot cannot be loaded. " + str(e))
raise e
elif re.match(".*graph\s*\[.*\]\s*", graph):
self._graph = nx.DiGraph(nx.parse_gml(graph))
else:
self.logger.error("Error: Please provide graph (as string or text file) in dot or gml format.")
self.logger.error("Error: Incorrect graph format")
raise ValueError
if missing_nodes_as_confounders:
self._graph = self.add_missing_nodes_as_common_causes(observed_node_names)
# Adding node attributes
self._graph = self.add_node_attributes(observed_node_names)
def view_graph(self, layout="dot", size=(8, 6), file_name="causal_model"):
out_filename = "{}.png".format(file_name)
try:
import pygraphviz as pgv
agraph = nx.drawing.nx_agraph.to_agraph(self._graph)
agraph.graph_attr.update(size="{},{}!".format(size[0], size[0]))
agraph.draw(out_filename, format="png", prog=layout)
except:
self.logger.warning(
"Warning: Pygraphviz cannot be loaded. Check that graphviz and pygraphviz are installed."
)
self.logger.info("Using Matplotlib for plotting")
import matplotlib.pyplot as plt
plt.figure(figsize=size)
solid_edges = [(n1, n2) for n1, n2, e in self._graph.edges(data=True) if "style" not in e]
dashed_edges = [
(n1, n2) for n1, n2, e in self._graph.edges(data=True) if ("style" in e and e["style"] == "dashed")
]
plt.clf()
pos = nx.layout.shell_layout(self._graph)
nx.draw_networkx_nodes(self._graph, pos, node_color="yellow", node_size=400)
nx.draw_networkx_edges(self._graph, pos, edgelist=solid_edges, arrowstyle="-|>", arrowsize=12)
nx.draw_networkx_edges(
self._graph, pos, edgelist=dashed_edges, arrowstyle="-|>", style="dashed", arrowsize=12
)
labels = nx.draw_networkx_labels(self._graph, pos)
plt.axis("off")
plt.savefig(out_filename)
plt.draw()
def build_graph(self, common_cause_names, instrument_names, effect_modifier_names, mediator_names):
"""Creates nodes and edges based on variable names and their semantics.
        Currently only considers the graphical representation of "direct" effect modifiers. Thus, all effect modifiers are assumed to be "direct" unless otherwise expressed using a graph. Based on the taxonomy of effect modifiers by VanderWeele and Robins: "Four types of effect modification: A classification based on directed acyclic graphs. Epidemiology. 2007."
"""
for treatment in self.treatment_name:
self._graph.add_node(treatment, observed="yes", penwidth=2)
for outcome in self.outcome_name:
self._graph.add_node(outcome, observed="yes", penwidth=2)
for treatment, outcome in itertools.product(self.treatment_name, self.outcome_name):
# adding penwidth to make the edge bold
self._graph.add_edge(treatment, outcome, penwidth=2)
# Adding common causes
if common_cause_names is not None:
for node_name in common_cause_names:
for treatment, outcome in itertools.product(self.treatment_name, self.outcome_name):
self._graph.add_node(node_name, observed="yes")
self._graph.add_edge(node_name, treatment)
self._graph.add_edge(node_name, outcome)
# Adding instruments
if instrument_names:
if type(instrument_names[0]) != tuple:
if len(self.treatment_name) > 1:
self.logger.info("Assuming Instrument points to all treatments! Use tuples for more granularity.")
for instrument, treatment in itertools.product(instrument_names, self.treatment_name):
self._graph.add_node(instrument, observed="yes")
self._graph.add_edge(instrument, treatment)
else:
                for instrument, treatment in instrument_names:  # each entry is already an (instrument, treatment) pair
self._graph.add_node(instrument, observed="yes")
self._graph.add_edge(instrument, treatment)
# Adding effect modifiers
if effect_modifier_names is not None:
for node_name in effect_modifier_names:
if node_name not in common_cause_names:
for outcome in self.outcome_name:
self._graph.add_node(node_name, observed="yes")
# Assuming the simple form of effect modifier
# that directly causes the outcome.
self._graph.add_edge(node_name, outcome)
# self._graph.add_edge(node_name, outcome, style = "dotted", headport="s", tailport="n")
# self._graph.add_edge(outcome, node_name, style = "dotted", headport="n", tailport="s") # TODO make the ports more general so that they apply not just to top-bottom node configurations
if mediator_names is not None:
for node_name in mediator_names:
for treatment, outcome in itertools.product(self.treatment_name, self.outcome_name):
self._graph.add_node(node_name, observed="yes")
self._graph.add_edge(treatment, node_name)
self._graph.add_edge(node_name, outcome)
return self._graph
def add_node_attributes(self, observed_node_names):
for node_name in self._graph:
if node_name in observed_node_names:
self._graph.nodes[node_name]["observed"] = "yes"
else:
self._graph.nodes[node_name]["observed"] = "no"
return self._graph
def add_missing_nodes_as_common_causes(self, observed_node_names):
# Adding columns in the dataframe as confounders that were not in the graph
for node_name in observed_node_names:
if node_name not in self._graph:
self._graph.add_node(node_name, observed="yes")
for treatment_outcome_node in self.treatment_name + self.outcome_name:
self._graph.add_edge(node_name, treatment_outcome_node)
return self._graph
def add_unobserved_common_cause(self, observed_node_names, color="gray"):
# Adding unobserved confounders
current_common_causes = self.get_common_causes(self.treatment_name, self.outcome_name)
create_new_common_cause = True
for node_name in current_common_causes:
if self._graph.nodes[node_name]["observed"] == "no":
create_new_common_cause = False
if create_new_common_cause:
uc_label = "Unobserved Confounders"
self._graph.add_node("U", label=uc_label, observed="no", color=color, style="filled", fillcolor=color)
for node in self.treatment_name + self.outcome_name:
self._graph.add_edge("U", node)
self.logger.info(
'If this is observed data (not from a randomized experiment), there might always be missing confounders. Adding a node named "Unobserved Confounders" to reflect this.'
)
return self._graph
def get_unconfounded_observed_subgraph(self):
observed_nodes = [node for node in self._graph.nodes() if self._graph.nodes[node]["observed"] == "yes"]
return self._graph.subgraph(observed_nodes)
def do_surgery(
self,
node_names,
remove_outgoing_edges=False,
remove_incoming_edges=False,
target_node_names=None,
remove_only_direct_edges_to_target=False,
):
"""Method to create a new graph based on the concept of do-surgery.
:param node_names: focal nodes for the surgery
:param remove_outgoing_edges: whether to remove outgoing edges from the focal nodes
:param remove_incoming_edges: whether to remove incoming edges to the focal nodes
:param target_node_names: target nodes (optional) for the surgery, only used when remove_only_direct_edges_to_target is True
:param remove_only_direct_edges_to_target: whether to remove only the direct edges from focal nodes to the target nodes
:returns: a new networkx graph after the specified removal of edges
"""
node_names = parse_state(node_names)
new_graph = self._graph.copy()
for node_name in node_names:
if remove_outgoing_edges:
if remove_only_direct_edges_to_target:
new_graph.remove_edges_from([(node_name, v) for v in target_node_names])
else:
children = new_graph.successors(node_name)
edges_bunch = [(node_name, child) for child in children]
new_graph.remove_edges_from(edges_bunch)
if remove_incoming_edges:
# removal of only direct edges wrt a target is not implemented for incoming edges
parents = new_graph.predecessors(node_name)
edges_bunch = [(parent, node_name) for parent in parents]
new_graph.remove_edges_from(edges_bunch)
return new_graph
def get_causes(self, nodes, remove_edges=None):
nodes = parse_state(nodes)
new_graph = None
if remove_edges is not None:
new_graph = self._graph.copy() # caution: shallow copy of the attributes
sources = parse_state(remove_edges["sources"])
targets = parse_state(remove_edges["targets"])
for s in sources:
for t in targets:
new_graph.remove_edge(s, t)
causes = set()
for v in nodes:
causes = causes.union(self.get_ancestors(v, new_graph=new_graph))
return causes
def check_dseparation(self, nodes1, nodes2, nodes3, new_graph=None, dseparation_algo="default"):
if dseparation_algo == "default":
if new_graph is None:
new_graph = self._graph
dseparated = nx.algorithms.d_separated(new_graph, set(nodes1), set(nodes2), set(nodes3))
else:
raise ValueError(f"{dseparation_algo} method for d-separation not supported.")
return dseparated
def check_valid_backdoor_set(
self, nodes1, nodes2, nodes3, backdoor_paths=None, new_graph=None, dseparation_algo="default"
):
"""Assume that the first parameter (nodes1) is the treatment,
the second is the outcome, and the third is the candidate backdoor set
"""
# also return the number of backdoor paths blocked by observed nodes
if dseparation_algo == "default":
if new_graph is None:
# Assume that nodes1 is the treatment
new_graph = self.do_surgery(nodes1, remove_outgoing_edges=True)
dseparated = nx.algorithms.d_separated(new_graph, set(nodes1), set(nodes2), set(nodes3))
elif dseparation_algo == "naive":
# ignores new_graph parameter, always uses self._graph
if backdoor_paths is None:
backdoor_paths = self.get_backdoor_paths(nodes1, nodes2)
dseparated = all([self.is_blocked(path, nodes3) for path in backdoor_paths])
else:
raise ValueError(f"{dseparation_algo} method for d-separation not supported.")
return {"is_dseparated": dseparated}
def get_backdoor_paths(self, nodes1, nodes2):
paths = []
undirected_graph = self._graph.to_undirected()
nodes12 = set(nodes1).union(nodes2)
for node1 in nodes1:
for node2 in nodes2:
backdoor_paths = [
pth
for pth in nx.all_simple_paths(undirected_graph, source=node1, target=node2)
if self._graph.has_edge(pth[1], pth[0])
]
# remove paths that have nodes1\node1 or nodes2\node2 as intermediate nodes
filtered_backdoor_paths = [pth for pth in backdoor_paths if len(nodes12.intersection(pth[1:-1])) == 0]
paths.extend(filtered_backdoor_paths)
self.logger.debug("Backdoor paths: " + str(paths))
return paths
def is_blocked(self, path, conditioned_nodes):
"""Uses d-separation criteria to decide if conditioned_nodes block given path."""
blocked_by_conditioning = False
has_unconditioned_collider = False
for i in range(len(path) - 2):
if self._graph.has_edge(path[i], path[i + 1]) and self._graph.has_edge(
path[i + 2], path[i + 1]
): # collider
collider_descendants = nx.descendants(self._graph, path[i + 1])
if path[i + 1] not in conditioned_nodes and all(
cdesc not in conditioned_nodes for cdesc in collider_descendants
):
has_unconditioned_collider = True
else: # chain or fork
if path[i + 1] in conditioned_nodes:
blocked_by_conditioning = True
break
if blocked_by_conditioning:
return True
elif has_unconditioned_collider:
return True
else:
return False
def get_common_causes(self, nodes1, nodes2):
"""
Assume that nodes1 causes nodes2 (e.g., nodes1 are the treatments and nodes2 are the outcomes)
"""
# TODO Refactor to remove this from here and only implement this logic in causalIdentifier. Unnecessary assumption of nodes1 to be causing nodes2.
nodes1 = parse_state(nodes1)
nodes2 = parse_state(nodes2)
causes_1 = set()
causes_2 = set()
for node in nodes1:
causes_1 = causes_1.union(self.get_ancestors(node))
for node in nodes2:
# Cannot simply compute ancestors, since that will also include nodes1 and its parents (e.g. instruments)
parents_2 = self.get_parents(node)
for parent in parents_2:
if parent not in nodes1:
causes_2 = causes_2.union(
set(
[
parent,
]
)
)
causes_2 = causes_2.union(self.get_ancestors(parent))
return list(causes_1.intersection(causes_2))
def get_effect_modifiers(self, nodes1, nodes2):
modifiers = set()
for node in nodes2:
modifiers = modifiers.union(self.get_ancestors(node))
modifiers = modifiers.difference(nodes1)
for node in nodes1:
modifiers = modifiers.difference(self.get_ancestors(node))
# removing all mediators
for node1 in nodes1:
for node2 in nodes2:
all_directed_paths = nx.all_simple_paths(self._graph, node1, node2)
for path in all_directed_paths:
modifiers = modifiers.difference(path)
return list(modifiers)
def get_parents(self, node_name):
return set(self._graph.predecessors(node_name))
def get_ancestors(self, node_name, new_graph=None):
if new_graph is None:
graph = self._graph
else:
graph = new_graph
return set(nx.ancestors(graph, node_name))
def get_descendants(self, nodes):
descendants = set()
for node_name in nodes:
descendants = descendants.union(set(nx.descendants(self._graph, node_name)))
return descendants
def all_observed(self, node_names):
for node_name in node_names:
if self._graph.nodes[node_name]["observed"] != "yes":
return False
return True
def get_all_nodes(self, include_unobserved=True):
nodes = self._graph.nodes
if not include_unobserved:
nodes = set(self.filter_unobserved_variables(nodes))
return nodes
def filter_unobserved_variables(self, node_names):
observed_node_names = list()
for node_name in node_names:
if self._graph.nodes[node_name]["observed"] == "yes":
observed_node_names.append(node_name)
return observed_node_names
def get_instruments(self, treatment_nodes, outcome_nodes):
treatment_nodes = parse_state(treatment_nodes)
outcome_nodes = parse_state(outcome_nodes)
parents_treatment = set()
for node in treatment_nodes:
parents_treatment = parents_treatment.union(self.get_parents(node))
g_no_parents_treatment = self.do_surgery(treatment_nodes, remove_incoming_edges=True)
ancestors_outcome = set()
for node in outcome_nodes:
ancestors_outcome = ancestors_outcome.union(nx.ancestors(g_no_parents_treatment, node))
# [TODO: double check these work with multivariate implementation:]
# Exclusion
candidate_instruments = parents_treatment.difference(ancestors_outcome)
self.logger.debug("Candidate instruments after satisfying exclusion: %s", candidate_instruments)
# As-if-random setup
children_causes_outcome = [nx.descendants(g_no_parents_treatment, v) for v in ancestors_outcome]
children_causes_outcome = set([item for sublist in children_causes_outcome for item in sublist])
# As-if-random
instruments = candidate_instruments.difference(children_causes_outcome)
self.logger.debug("Candidate instruments after satisfying exclusion and as-if-random: %s", instruments)
return list(instruments)
def get_all_directed_paths(self, nodes1, nodes2):
"""Get all directed paths between sets of nodes.
Currently only supports singleton sets.
"""
node1 = nodes1[0]
node2 = nodes2[0]
# convert the outputted generator into a list
return [p for p in nx.all_simple_paths(self._graph, source=node1, target=node2)]
def has_directed_path(self, nodes1, nodes2):
"""Checks if there is any directed path between two sets of nodes.
Currently only supports singleton sets.
"""
# dpaths = self.get_all_directed_paths(nodes1, nodes2)
# return len(dpaths) > 0
return nx.has_path(self._graph, nodes1[0], nodes2[0])
def get_adjacency_matrix(self, *args, **kwargs):
"""
Get adjacency matrix from the networkx graph
"""
return nx.convert_matrix.to_numpy_matrix(self._graph, *args, **kwargs)
def check_valid_frontdoor_set(
self, nodes1, nodes2, candidate_nodes, frontdoor_paths=None, new_graph=None, dseparation_algo="default"
):
"""Check if valid the frontdoor variables for set of treatments, nodes1 to set of outcomes, nodes2."""
# Condition 1: node 1 ---> node 2 is intercepted by candidate_nodes
if dseparation_algo == "default":
if new_graph is None:
new_graph = self._graph
dseparated = nx.algorithms.d_separated(new_graph, set(nodes1), set(nodes2), set(candidate_nodes))
elif dseparation_algo == "naive":
if frontdoor_paths is None:
frontdoor_paths = self.get_all_directed_paths(nodes1, nodes2)
dseparated = all([self.is_blocked(path, candidate_nodes) for path in frontdoor_paths])
else:
raise ValueError(f"{dseparation_algo} method for d-separation not supported.")
return dseparated
def check_valid_mediation_set(self, nodes1, nodes2, candidate_nodes, mediation_paths=None):
"""Check if candidate nodes are valid mediators for set of treatments, nodes1 to set of outcomes, nodes2."""
if mediation_paths is None:
mediation_paths = self.get_all_directed_paths(nodes1, nodes2)
is_mediator = any([self.is_blocked(path, candidate_nodes) for path in mediation_paths])
return is_mediator
| amit-sharma | 2f2463be424093d720ebfa9cc625a06160a44cfe | d67da0e57eda5cc47d180f3de240182dde4aacad | thanks for the suggestion. I've renamed "remove_only_direct_edges" to "remove_only_direct_edges_to_targets", since we only want to remove the direct edges. It is long but I thought it is best to err on the side of clarity here. | amit-sharma | 334 |
py-why/dowhy | 615 | CI: Split tests into groups to speed them up | * Mark some slow tests as advanced
* Split tests into 4 shards in CI | null | 2022-08-25 22:09:18+00:00 | 2022-09-01 22:51:03+00:00 | poetry.lock | [[package]]
name = "absl-py"
version = "1.2.0"
description = "Abseil Python Common Libraries, see https://github.com/abseil/abseil-py."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "alabaster"
version = "0.7.12"
description = "A configurable sidebar-enabled Sphinx theme"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "appnope"
version = "0.1.3"
description = "Disable App Nap on macOS >= 10.9"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "argon2-cffi"
version = "21.3.0"
description = "The secure Argon2 password hashing algorithm."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
argon2-cffi-bindings = "*"
[package.extras]
dev = ["pre-commit", "cogapp", "tomli", "coverage[toml] (>=5.0.2)", "hypothesis", "pytest", "sphinx", "sphinx-notfound-page", "furo"]
docs = ["sphinx", "sphinx-notfound-page", "furo"]
tests = ["coverage[toml] (>=5.0.2)", "hypothesis", "pytest"]
[[package]]
name = "argon2-cffi-bindings"
version = "21.2.0"
description = "Low-level CFFI bindings for Argon2"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cffi = ">=1.0.1"
[package.extras]
dev = ["pytest", "cogapp", "pre-commit", "wheel"]
tests = ["pytest"]
[[package]]
name = "asttokens"
version = "2.0.8"
description = "Annotate AST trees with source code positions"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
six = "*"
[package.extras]
test = ["pytest", "astroid (<=2.5.3)"]
[[package]]
name = "astunparse"
version = "1.6.3"
description = "An AST unparser for Python"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
six = ">=1.6.1,<2.0"
[[package]]
name = "atomicwrites"
version = "1.4.1"
description = "Atomic file writes."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "attrs"
version = "22.1.0"
description = "Classes Without Boilerplate"
category = "dev"
optional = false
python-versions = ">=3.5"
[package.extras]
dev = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "mypy (>=0.900,!=0.940)", "pytest-mypy-plugins", "zope.interface", "furo", "sphinx", "sphinx-notfound-page", "pre-commit", "cloudpickle"]
docs = ["furo", "sphinx", "zope.interface", "sphinx-notfound-page"]
tests = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "mypy (>=0.900,!=0.940)", "pytest-mypy-plugins", "zope.interface", "cloudpickle"]
tests_no_zope = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "mypy (>=0.900,!=0.940)", "pytest-mypy-plugins", "cloudpickle"]
[[package]]
name = "babel"
version = "2.10.3"
description = "Internationalization utilities"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
pytz = ">=2015.7"
[[package]]
name = "backcall"
version = "0.2.0"
description = "Specifications for callback functions passed in to an API"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "backports.zoneinfo"
version = "0.2.1"
description = "Backport of the standard library zoneinfo module"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
tzdata = ["tzdata"]
[[package]]
name = "beautifulsoup4"
version = "4.11.1"
description = "Screen-scraping library"
category = "dev"
optional = false
python-versions = ">=3.6.0"
[package.dependencies]
soupsieve = ">1.2"
[package.extras]
html5lib = ["html5lib"]
lxml = ["lxml"]
[[package]]
name = "black"
version = "22.6.0"
description = "The uncompromising code formatter."
category = "dev"
optional = false
python-versions = ">=3.6.2"
[package.dependencies]
click = ">=8.0.0"
ipython = {version = ">=7.8.0", optional = true, markers = "extra == \"jupyter\""}
mypy-extensions = ">=0.4.3"
pathspec = ">=0.9.0"
platformdirs = ">=2"
tokenize-rt = {version = ">=3.2.0", optional = true, markers = "extra == \"jupyter\""}
tomli = {version = ">=1.1.0", markers = "python_full_version < \"3.11.0a7\""}
typing-extensions = {version = ">=3.10.0.0", markers = "python_version < \"3.10\""}
[package.extras]
colorama = ["colorama (>=0.4.3)"]
d = ["aiohttp (>=3.7.4)"]
jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"]
uvloop = ["uvloop (>=0.15.2)"]
[[package]]
name = "bleach"
version = "5.0.1"
description = "An easy safelist-based HTML-sanitizing tool."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
six = ">=1.9.0"
webencodings = "*"
[package.extras]
css = ["tinycss2 (>=1.1.0,<1.2)"]
dev = ["build (==0.8.0)", "flake8 (==4.0.1)", "hashin (==0.17.0)", "pip-tools (==6.6.2)", "pytest (==7.1.2)", "Sphinx (==4.3.2)", "tox (==3.25.0)", "twine (==4.0.1)", "wheel (==0.37.1)", "black (==22.3.0)", "mypy (==0.961)"]
[[package]]
name = "cachetools"
version = "5.2.0"
description = "Extensible memoizing collections and decorators"
category = "dev"
optional = false
python-versions = "~=3.7"
[[package]]
name = "causalml"
version = "0.12.3"
description = "Python Package for Uplift Modeling and Causal Inference with Machine Learning Algorithms"
category = "main"
optional = true
python-versions = ">=3.6"
[package.dependencies]
Cython = ">=0.28.0"
dill = "*"
graphviz = "*"
lightgbm = "*"
matplotlib = "*"
numpy = ">=1.18.5"
packaging = "*"
pandas = ">=0.24.1"
pydotplus = "*"
pygam = "*"
pyro-ppl = "*"
scikit-learn = ">=0.22.0"
scipy = ">=1.4.1"
seaborn = "*"
shap = "*"
statsmodels = ">=0.9.0"
torch = "*"
tqdm = "*"
xgboost = "*"
[package.extras]
tf = ["tensorflow (>=2.4.0)"]
[[package]]
name = "certifi"
version = "2022.6.15"
description = "Python package for providing Mozilla's CA Bundle."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "cffi"
version = "1.15.1"
description = "Foreign Function Interface for Python calling C code."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
pycparser = "*"
[[package]]
name = "charset-normalizer"
version = "2.1.0"
description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet."
category = "dev"
optional = false
python-versions = ">=3.6.0"
[package.extras]
unicode_backport = ["unicodedata2"]
[[package]]
name = "click"
version = "8.1.3"
description = "Composable command line interface toolkit"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
colorama = {version = "*", markers = "platform_system == \"Windows\""}
[[package]]
name = "cloudpickle"
version = "2.1.0"
description = "Extended pickling support for Python objects"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "colorama"
version = "0.4.5"
description = "Cross-platform colored terminal text."
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "commonmark"
version = "0.9.1"
description = "Python parser for the CommonMark Markdown spec"
category = "dev"
optional = false
python-versions = "*"
[package.extras]
test = ["hypothesis (==3.55.3)", "flake8 (==3.7.8)"]
[[package]]
name = "cryptography"
version = "37.0.4"
description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cffi = ">=1.12"
[package.extras]
docs = ["sphinx (>=1.6.5,!=1.8.0,!=3.1.0,!=3.1.1)", "sphinx-rtd-theme"]
docstest = ["pyenchant (>=1.6.11)", "twine (>=1.12.0)", "sphinxcontrib-spelling (>=4.0.1)"]
pep8test = ["black", "flake8", "flake8-import-order", "pep8-naming"]
sdist = ["setuptools_rust (>=0.11.4)"]
ssh = ["bcrypt (>=3.1.5)"]
test = ["pytest (>=6.2.0)", "pytest-benchmark", "pytest-cov", "pytest-subtests", "pytest-xdist", "pretend", "iso8601", "pytz", "hypothesis (>=1.11.4,!=3.79.2)"]
[[package]]
name = "cycler"
version = "0.11.0"
description = "Composable style cycles"
category = "main"
optional = true
python-versions = ">=3.6"
[[package]]
name = "cython"
version = "0.29.32"
description = "The Cython compiler for writing C extensions for the Python language."
category = "main"
optional = true
python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "debugpy"
version = "1.6.3"
description = "An implementation of the Debug Adapter Protocol for Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "decorator"
version = "5.1.1"
description = "Decorators for Humans"
category = "dev"
optional = false
python-versions = ">=3.5"
[[package]]
name = "defusedxml"
version = "0.7.1"
description = "XML bomb protection for Python stdlib modules"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "dill"
version = "0.3.5.1"
description = "serialize all of python"
category = "main"
optional = true
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*, !=3.6.*"
[package.extras]
graph = ["objgraph (>=1.7.2)"]
[[package]]
name = "docutils"
version = "0.17.1"
description = "Docutils -- Python Documentation Utilities"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "econml"
version = "0.12.0"
description = "This package contains several methods for calculating Conditional Average Treatment Effects"
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
dowhy = "*"
joblib = ">=0.13.0"
lightgbm = "*"
numba = "!=0.42.1"
numpy = "*"
pandas = "*"
scikit-learn = ">0.22.0"
scipy = ">1.4.0"
shap = ">=0.38.1,<0.40.0"
sparse = "*"
statsmodels = ">=0.10"
[package.extras]
all = ["azure-cli", "keras (<2.4)", "tensorflow (>1.10,<2.3)", "matplotlib"]
automl = ["azure-cli"]
plt = ["graphviz", "matplotlib"]
tf = ["keras (<2.4)", "tensorflow (>1.10,<2.3)"]
[[package]]
name = "entrypoints"
version = "0.4"
description = "Discover and load entry points from installed packages."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "executing"
version = "0.10.0"
description = "Get the currently executing AST node of a frame, and other information"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "fastjsonschema"
version = "2.16.1"
description = "Fastest Python implementation of JSON schema"
category = "dev"
optional = false
python-versions = "*"
[package.extras]
devel = ["colorama", "jsonschema", "json-spec", "pylint", "pytest", "pytest-benchmark", "pytest-cache", "validictory"]
[[package]]
name = "flake8"
version = "4.0.1"
description = "the modular source code checker: pep8 pyflakes and co"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
mccabe = ">=0.6.0,<0.7.0"
pycodestyle = ">=2.8.0,<2.9.0"
pyflakes = ">=2.4.0,<2.5.0"
[[package]]
name = "flaky"
version = "3.7.0"
description = "Plugin for nose or pytest that automatically reruns flaky tests."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "flatbuffers"
version = "1.12"
description = "The FlatBuffers serialization format for Python"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "fonttools"
version = "4.36.0"
description = "Tools to manipulate font files"
category = "main"
optional = true
python-versions = ">=3.7"
[package.extras]
all = ["fs (>=2.2.0,<3)", "lxml (>=4.0,<5)", "zopfli (>=0.1.4)", "lz4 (>=1.7.4.2)", "matplotlib", "sympy", "skia-pathops (>=0.5.0)", "uharfbuzz (>=0.23.0)", "brotlicffi (>=0.8.0)", "scipy", "brotli (>=1.0.1)", "munkres", "unicodedata2 (>=14.0.0)", "xattr"]
graphite = ["lz4 (>=1.7.4.2)"]
interpolatable = ["scipy", "munkres"]
lxml = ["lxml (>=4.0,<5)"]
pathops = ["skia-pathops (>=0.5.0)"]
plot = ["matplotlib"]
repacker = ["uharfbuzz (>=0.23.0)"]
symfont = ["sympy"]
type1 = ["xattr"]
ufo = ["fs (>=2.2.0,<3)"]
unicode = ["unicodedata2 (>=14.0.0)"]
woff = ["zopfli (>=0.1.4)", "brotlicffi (>=0.8.0)", "brotli (>=1.0.1)"]
[[package]]
name = "future"
version = "0.18.2"
description = "Clean single-source support for Python 3 and 2"
category = "main"
optional = true
python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "gast"
version = "0.4.0"
description = "Python AST that abstracts the underlying Python version"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "google-auth"
version = "2.10.0"
description = "Google Authentication Library"
category = "dev"
optional = false
python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*"
[package.dependencies]
cachetools = ">=2.0.0,<6.0"
pyasn1-modules = ">=0.2.1"
rsa = {version = ">=3.1.4,<5", markers = "python_version >= \"3.6\""}
six = ">=1.9.0"
[package.extras]
reauth = ["pyu2f (>=0.1.5)"]
pyopenssl = ["pyopenssl (>=20.0.0)"]
enterprise_cert = ["pyopenssl (==22.0.0)", "cryptography (==36.0.2)"]
aiohttp = ["aiohttp (>=3.6.2,<4.0.0dev)", "requests (>=2.20.0,<3.0.0dev)"]
[[package]]
name = "google-auth-oauthlib"
version = "0.4.6"
description = "Google Authentication Library"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
google-auth = ">=1.0.0"
requests-oauthlib = ">=0.7.0"
[package.extras]
tool = ["click (>=6.0.0)"]
[[package]]
name = "google-pasta"
version = "0.2.0"
description = "pasta is an AST-based Python refactoring library"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
six = "*"
[[package]]
name = "graphviz"
version = "0.20.1"
description = "Simple Python interface for Graphviz"
category = "main"
optional = true
python-versions = ">=3.7"
[package.extras]
dev = ["tox (>=3)", "flake8", "pep8-naming", "wheel", "twine"]
docs = ["sphinx (>=5)", "sphinx-autodoc-typehints", "sphinx-rtd-theme"]
test = ["pytest (>=7)", "pytest-mock (>=3)", "mock (>=4)", "pytest-cov", "coverage"]
[[package]]
name = "grpcio"
version = "1.48.0"
description = "HTTP/2-based RPC framework"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
six = ">=1.5.2"
[package.extras]
protobuf = ["grpcio-tools (>=1.48.0)"]
[[package]]
name = "h5py"
version = "3.7.0"
description = "Read and write HDF5 files from Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
numpy = ">=1.14.5"
[[package]]
name = "idna"
version = "3.3"
description = "Internationalized Domain Names in Applications (IDNA)"
category = "dev"
optional = false
python-versions = ">=3.5"
[[package]]
name = "imagesize"
version = "1.4.1"
description = "Getting image size from png/jpeg/jpeg2000/gif file"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "importlib-metadata"
version = "4.12.0"
description = "Read metadata from Python packages"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
zipp = ">=0.5"
[package.extras]
docs = ["sphinx", "jaraco.packaging (>=9)", "rst.linker (>=1.9)"]
perf = ["ipython"]
testing = ["pytest (>=6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest-cov", "pytest-enabler (>=1.3)", "packaging", "pyfakefs", "flufl.flake8", "pytest-perf (>=0.9.2)", "pytest-black (>=0.3.7)", "pytest-mypy (>=0.9.1)", "importlib-resources (>=1.3)"]
[[package]]
name = "importlib-resources"
version = "5.9.0"
description = "Read resources from Python packages"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
zipp = {version = ">=3.1.0", markers = "python_version < \"3.10\""}
[package.extras]
docs = ["sphinx", "jaraco.packaging (>=9)", "rst.linker (>=1.9)", "jaraco.tidelift (>=1.4)"]
testing = ["pytest (>=6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-black (>=0.3.7)", "pytest-mypy (>=0.9.1)"]
[[package]]
name = "iniconfig"
version = "1.1.1"
description = "iniconfig: brain-dead simple config-ini parsing"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "ipykernel"
version = "6.15.1"
description = "IPython Kernel for Jupyter"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
appnope = {version = "*", markers = "platform_system == \"Darwin\""}
debugpy = ">=1.0"
ipython = ">=7.23.1"
jupyter-client = ">=6.1.12"
matplotlib-inline = ">=0.1"
nest-asyncio = "*"
packaging = "*"
psutil = "*"
pyzmq = ">=17"
tornado = ">=6.1"
traitlets = ">=5.1.0"
[package.extras]
test = ["flaky", "ipyparallel", "pre-commit", "pytest-cov", "pytest-timeout", "pytest (>=6.0)"]
[[package]]
name = "ipython"
version = "8.4.0"
description = "IPython: Productive Interactive Computing"
category = "dev"
optional = false
python-versions = ">=3.8"
[package.dependencies]
appnope = {version = "*", markers = "sys_platform == \"darwin\""}
backcall = "*"
colorama = {version = "*", markers = "sys_platform == \"win32\""}
decorator = "*"
jedi = ">=0.16"
matplotlib-inline = "*"
pexpect = {version = ">4.3", markers = "sys_platform != \"win32\""}
pickleshare = "*"
prompt-toolkit = ">=2.0.0,<3.0.0 || >3.0.0,<3.0.1 || >3.0.1,<3.1.0"
pygments = ">=2.4.0"
stack-data = "*"
traitlets = ">=5"
[package.extras]
all = ["black", "Sphinx (>=1.3)", "ipykernel", "nbconvert", "nbformat", "ipywidgets", "notebook", "ipyparallel", "qtconsole", "pytest (<7.1)", "pytest-asyncio", "testpath", "curio", "matplotlib (!=3.2.0)", "numpy (>=1.19)", "pandas", "trio"]
black = ["black"]
doc = ["Sphinx (>=1.3)"]
kernel = ["ipykernel"]
nbconvert = ["nbconvert"]
nbformat = ["nbformat"]
notebook = ["ipywidgets", "notebook"]
parallel = ["ipyparallel"]
qtconsole = ["qtconsole"]
test = ["pytest (<7.1)", "pytest-asyncio", "testpath"]
test_extra = ["pytest (<7.1)", "pytest-asyncio", "testpath", "curio", "matplotlib (!=3.2.0)", "nbformat", "numpy (>=1.19)", "pandas", "trio"]
[[package]]
name = "ipython-genutils"
version = "0.2.0"
description = "Vestigial utilities from IPython"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "ipywidgets"
version = "8.0.1"
description = "Jupyter interactive widgets"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
ipykernel = ">=4.5.1"
ipython = ">=6.1.0"
jupyterlab-widgets = ">=3.0,<4.0"
traitlets = ">=4.3.1"
widgetsnbextension = ">=4.0,<5.0"
[package.extras]
test = ["jsonschema", "pytest (>=3.6.0)", "pytest-cov", "pytz"]
[[package]]
name = "isort"
version = "5.10.1"
description = "A Python utility / library to sort Python imports."
category = "dev"
optional = false
python-versions = ">=3.6.1,<4.0"
[package.extras]
pipfile_deprecated_finder = ["pipreqs", "requirementslib"]
requirements_deprecated_finder = ["pipreqs", "pip-api"]
colors = ["colorama (>=0.4.3,<0.5.0)"]
plugins = ["setuptools"]
[[package]]
name = "jedi"
version = "0.18.1"
description = "An autocompletion tool for Python that can be used for text editors."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
parso = ">=0.8.0,<0.9.0"
[package.extras]
qa = ["flake8 (==3.8.3)", "mypy (==0.782)"]
testing = ["Django (<3.1)", "colorama", "docopt", "pytest (<7.0.0)"]
[[package]]
name = "jeepney"
version = "0.8.0"
description = "Low-level, pure Python DBus protocol wrapper."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
trio = ["async-generator", "trio"]
test = ["async-timeout", "trio", "testpath", "pytest-asyncio (>=0.17)", "pytest-trio", "pytest"]
[[package]]
name = "jinja2"
version = "3.1.2"
description = "A very fast and expressive template engine."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
MarkupSafe = ">=2.0"
[package.extras]
i18n = ["Babel (>=2.7)"]
[[package]]
name = "joblib"
version = "1.1.0"
description = "Lightweight pipelining with Python functions"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "jsonschema"
version = "4.13.0"
description = "An implementation of JSON Schema validation for Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
attrs = ">=17.4.0"
importlib-resources = {version = ">=1.4.0", markers = "python_version < \"3.9\""}
pkgutil-resolve-name = {version = ">=1.3.10", markers = "python_version < \"3.9\""}
pyrsistent = ">=0.14.0,<0.17.0 || >0.17.0,<0.17.1 || >0.17.1,<0.17.2 || >0.17.2"
[package.extras]
format-nongpl = ["webcolors (>=1.11)", "uri-template", "rfc3986-validator (>0.1.0)", "rfc3339-validator", "jsonpointer (>1.13)", "isoduration", "idna", "fqdn"]
format = ["webcolors (>=1.11)", "uri-template", "rfc3987", "rfc3339-validator", "jsonpointer (>1.13)", "isoduration", "idna", "fqdn"]
[[package]]
name = "jupyter"
version = "1.0.0"
description = "Jupyter metapackage. Install all the Jupyter components in one go."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
ipykernel = "*"
ipywidgets = "*"
jupyter-console = "*"
nbconvert = "*"
notebook = "*"
qtconsole = "*"
[[package]]
name = "jupyter-client"
version = "7.3.4"
description = "Jupyter protocol implementation and client libraries"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
entrypoints = "*"
jupyter-core = ">=4.9.2"
nest-asyncio = ">=1.5.4"
python-dateutil = ">=2.8.2"
pyzmq = ">=23.0"
tornado = ">=6.0"
traitlets = "*"
[package.extras]
doc = ["ipykernel", "myst-parser", "sphinx-rtd-theme", "sphinx (>=1.3.6)", "sphinxcontrib-github-alt"]
test = ["codecov", "coverage", "ipykernel (>=6.5)", "ipython", "mypy", "pre-commit", "pytest", "pytest-asyncio (>=0.18)", "pytest-cov", "pytest-timeout"]
[[package]]
name = "jupyter-console"
version = "6.4.4"
description = "Jupyter terminal console"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
ipykernel = "*"
ipython = "*"
jupyter-client = ">=7.0.0"
prompt-toolkit = ">=2.0.0,<3.0.0 || >3.0.0,<3.0.1 || >3.0.1,<3.1.0"
pygments = "*"
[package.extras]
test = ["pexpect"]
[[package]]
name = "jupyter-core"
version = "4.11.1"
description = "Jupyter core package. A base package on which Jupyter projects rely."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
pywin32 = {version = ">=1.0", markers = "sys_platform == \"win32\" and platform_python_implementation != \"PyPy\""}
traitlets = "*"
[package.extras]
test = ["ipykernel", "pre-commit", "pytest", "pytest-cov", "pytest-timeout"]
[[package]]
name = "jupyterlab-pygments"
version = "0.2.2"
description = "Pygments theme using JupyterLab CSS variables"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "jupyterlab-widgets"
version = "3.0.2"
description = "Jupyter interactive widgets for JupyterLab"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "keras"
version = "2.9.0"
description = "Deep learning for humans."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "keras-preprocessing"
version = "1.1.2"
description = "Easy data preprocessing and data augmentation for deep learning models"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
numpy = ">=1.9.1"
six = ">=1.9.0"
[package.extras]
tests = ["pytest-cov", "pytest-xdist", "pytest", "keras", "tensorflow", "pillow", "pandas"]
pep8 = ["flake8"]
image = ["Pillow (>=5.2.0)", "scipy (>=0.14)"]
[[package]]
name = "keyring"
version = "23.8.2"
description = "Store and access your passwords safely."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
importlib-metadata = {version = ">=3.6", markers = "python_version < \"3.10\""}
jeepney = {version = ">=0.4.2", markers = "sys_platform == \"linux\""}
pywin32-ctypes = {version = "<0.1.0 || >0.1.0,<0.1.1 || >0.1.1", markers = "sys_platform == \"win32\""}
SecretStorage = {version = ">=3.2", markers = "sys_platform == \"linux\""}
[package.extras]
testing = ["pytest-mypy (>=0.9.1)", "pytest-black (>=0.3.7)", "pytest-enabler (>=1.3)", "pytest-cov", "flake8 (<5)", "pytest-flake8", "pytest-checkdocs (>=2.4)", "pytest (>=6)"]
docs = ["jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "jaraco.packaging (>=9)", "sphinx"]
[[package]]
name = "kiwisolver"
version = "1.4.4"
description = "A fast implementation of the Cassowary constraint solver"
category = "main"
optional = true
python-versions = ">=3.7"
[[package]]
name = "libclang"
version = "14.0.6"
description = "Clang Python Bindings, mirrored from the official LLVM repo: https://github.com/llvm/llvm-project/tree/main/clang/bindings/python, to make the installation process easier."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "lightgbm"
version = "3.3.2"
description = "LightGBM Python Package"
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
numpy = "*"
scikit-learn = "!=0.22.0"
scipy = "*"
[package.extras]
dask = ["dask[array] (>=2.0.0)", "dask[dataframe] (>=2.0.0)", "dask[distributed] (>=2.0.0)", "pandas"]
[[package]]
name = "llvmlite"
version = "0.36.0"
description = "lightweight wrapper around basic LLVM functionality"
category = "main"
optional = false
python-versions = ">=3.6,<3.10"
[[package]]
name = "markdown"
version = "3.4.1"
description = "Python implementation of Markdown."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
importlib-metadata = {version = ">=4.4", markers = "python_version < \"3.10\""}
[package.extras]
testing = ["coverage", "pyyaml"]
[[package]]
name = "markupsafe"
version = "2.1.1"
description = "Safely add untrusted strings to HTML/XML markup."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "matplotlib"
version = "3.5.3"
description = "Python plotting package"
category = "main"
optional = true
python-versions = ">=3.7"
[package.dependencies]
cycler = ">=0.10"
fonttools = ">=4.22.0"
kiwisolver = ">=1.0.1"
numpy = ">=1.17"
packaging = ">=20.0"
pillow = ">=6.2.0"
pyparsing = ">=2.2.1"
python-dateutil = ">=2.7"
setuptools_scm = ">=4,<7"
[[package]]
name = "matplotlib-inline"
version = "0.1.6"
description = "Inline Matplotlib backend for Jupyter"
category = "dev"
optional = false
python-versions = ">=3.5"
[package.dependencies]
traitlets = "*"
[[package]]
name = "mccabe"
version = "0.6.1"
description = "McCabe checker, plugin for flake8"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "mistune"
version = "2.0.4"
description = "A sane Markdown parser with useful plugins and renderers"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "mpmath"
version = "1.2.1"
description = "Python library for arbitrary-precision floating-point arithmetic"
category = "main"
optional = false
python-versions = "*"
[package.extras]
tests = ["pytest (>=4.6)"]
develop = ["wheel", "codecov", "pytest-cov", "pycodestyle", "pytest (>=4.6)"]
[[package]]
name = "mypy"
version = "0.971"
description = "Optional static typing for Python"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
mypy-extensions = ">=0.4.3"
tomli = {version = ">=1.1.0", markers = "python_version < \"3.11\""}
typing-extensions = ">=3.10"
[package.extras]
dmypy = ["psutil (>=4.0)"]
python2 = ["typed-ast (>=1.4.0,<2)"]
reports = ["lxml"]
[[package]]
name = "mypy-extensions"
version = "0.4.3"
description = "Experimental type system extensions for programs checked with the mypy typechecker."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "nbclient"
version = "0.6.6"
description = "A client library for executing notebooks. Formerly nbconvert's ExecutePreprocessor."
category = "dev"
optional = false
python-versions = ">=3.7.0"
[package.dependencies]
jupyter-client = ">=6.1.5"
nbformat = ">=5.0"
nest-asyncio = "*"
traitlets = ">=5.2.2"
[package.extras]
sphinx = ["autodoc-traits", "mock", "moto", "myst-parser", "Sphinx (>=1.7)", "sphinx-book-theme"]
test = ["black", "check-manifest", "flake8", "ipykernel", "ipython (<8.0.0)", "ipywidgets (<8.0.0)", "mypy", "pip (>=18.1)", "pre-commit", "pytest (>=4.1)", "pytest-asyncio", "pytest-cov (>=2.6.1)", "setuptools (>=60.0)", "testpath", "twine (>=1.11.0)", "xmltodict"]
[[package]]
name = "nbconvert"
version = "7.0.0rc3"
description = "Converting Jupyter Notebooks"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
beautifulsoup4 = "*"
bleach = "*"
defusedxml = "*"
importlib-metadata = {version = ">=3.6", markers = "python_version < \"3.10\""}
jinja2 = ">=3.0"
jupyter-core = ">=4.7"
jupyterlab-pygments = "*"
markupsafe = ">=2.0"
mistune = ">=2.0.2,<3"
nbclient = ">=0.5.0"
nbformat = ">=5.1"
packaging = "*"
pandocfilters = ">=1.4.1"
pygments = ">=2.4.1"
tinycss2 = "*"
traitlets = ">=5.0"
[package.extras]
all = ["ipykernel", "ipython", "ipywidgets (>=7)", "nbsphinx (>=0.2.12)", "pre-commit", "pyppeteer (>=1,<1.1)", "pytest", "pytest-cov", "pytest-dependency", "sphinx-rtd-theme", "sphinx (>=1.5.1)", "tornado (>=6.1)"]
docs = ["ipython", "nbsphinx (>=0.2.12)", "sphinx-rtd-theme", "sphinx (>=1.5.1)"]
serve = ["tornado (>=6.1)"]
test = ["ipykernel", "ipywidgets (>=7)", "pre-commit", "pyppeteer (>=1,<1.1)", "pytest", "pytest-cov", "pytest-dependency"]
webpdf = ["pyppeteer (>=1,<1.1)"]
[[package]]
name = "nbformat"
version = "5.4.0"
description = "The Jupyter Notebook format"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
fastjsonschema = "*"
jsonschema = ">=2.6"
jupyter-core = "*"
traitlets = ">=5.1"
[package.extras]
test = ["check-manifest", "testpath", "pytest", "pre-commit"]
[[package]]
name = "nbsphinx"
version = "0.8.9"
description = "Jupyter Notebook Tools for Sphinx"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
docutils = "*"
jinja2 = "*"
nbconvert = "!=5.4"
nbformat = "*"
sphinx = ">=1.8"
traitlets = ">=5"
[[package]]
name = "nest-asyncio"
version = "1.5.5"
description = "Patch asyncio to allow nested event loops"
category = "dev"
optional = false
python-versions = ">=3.5"
[[package]]
name = "networkx"
version = "2.8.5"
description = "Python package for creating and manipulating graphs and networks"
category = "main"
optional = false
python-versions = ">=3.8"
[package.extras]
default = ["numpy (>=1.19)", "scipy (>=1.8)", "matplotlib (>=3.4)", "pandas (>=1.3)"]
developer = ["pre-commit (>=2.19)", "mypy (>=0.960)"]
doc = ["sphinx (>=5)", "pydata-sphinx-theme (>=0.9)", "sphinx-gallery (>=0.10)", "numpydoc (>=1.4)", "pillow (>=9.1)", "nb2plots (>=0.6)", "texext (>=0.6.6)"]
extra = ["lxml (>=4.6)", "pygraphviz (>=1.9)", "pydot (>=1.4.2)", "sympy (>=1.10)"]
test = ["pytest (>=7.1)", "pytest-cov (>=3.0)", "codecov (>=2.1)"]
[[package]]
name = "notebook"
version = "6.4.12"
description = "A web-based notebook environment for interactive computing"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
argon2-cffi = "*"
ipykernel = "*"
ipython-genutils = "*"
jinja2 = "*"
jupyter-client = ">=5.3.4"
jupyter-core = ">=4.6.1"
nbconvert = ">=5"
nbformat = "*"
nest-asyncio = ">=1.5"
prometheus-client = "*"
pyzmq = ">=17"
Send2Trash = ">=1.8.0"
terminado = ">=0.8.3"
tornado = ">=6.1"
traitlets = ">=4.2.1"
[package.extras]
docs = ["sphinx", "nbsphinx", "sphinxcontrib-github-alt", "sphinx-rtd-theme", "myst-parser"]
json-logging = ["json-logging"]
test = ["pytest", "coverage", "requests", "testpath", "nbval", "selenium", "pytest-cov", "requests-unixsocket"]
[[package]]
name = "numba"
version = "0.53.1"
description = "compiling Python code using LLVM"
category = "main"
optional = false
python-versions = ">=3.6,<3.10"
[package.dependencies]
llvmlite = ">=0.36.0rc1,<0.37"
numpy = ">=1.15"
[[package]]
name = "numpy"
version = "1.23.2"
description = "NumPy is the fundamental package for array computing with Python."
category = "main"
optional = false
python-versions = ">=3.8"
[[package]]
name = "oauthlib"
version = "3.2.0"
description = "A generic, spec-compliant, thorough implementation of the OAuth request-signing logic"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
rsa = ["cryptography (>=3.0.0)"]
signals = ["blinker (>=1.4.0)"]
signedtoken = ["cryptography (>=3.0.0)", "pyjwt (>=2.0.0,<3)"]
[[package]]
name = "opt-einsum"
version = "3.3.0"
description = "Optimizing numpys einsum function"
category = "main"
optional = false
python-versions = ">=3.5"
[package.dependencies]
numpy = ">=1.7"
[package.extras]
docs = ["sphinx (==1.2.3)", "sphinxcontrib-napoleon", "sphinx-rtd-theme", "numpydoc"]
tests = ["pytest", "pytest-cov", "pytest-pep8"]
[[package]]
name = "packaging"
version = "21.3"
description = "Core utilities for Python packages"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
pyparsing = ">=2.0.2,<3.0.5 || >3.0.5"
[[package]]
name = "pandas"
version = "1.4.3"
description = "Powerful data structures for data analysis, time series, and statistics"
category = "main"
optional = false
python-versions = ">=3.8"
[package.dependencies]
numpy = [
{version = ">=1.18.5", markers = "platform_machine != \"aarch64\" and platform_machine != \"arm64\" and python_version < \"3.10\""},
{version = ">=1.19.2", markers = "platform_machine == \"aarch64\" and python_version < \"3.10\""},
{version = ">=1.20.0", markers = "platform_machine == \"arm64\" and python_version < \"3.10\""},
]
python-dateutil = ">=2.8.1"
pytz = ">=2020.1"
[package.extras]
test = ["hypothesis (>=5.5.3)", "pytest (>=6.0)", "pytest-xdist (>=1.31)"]
[[package]]
name = "pandoc"
version = "2.2"
description = "Pandoc Documents for Python"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
plumbum = "*"
ply = "*"
[[package]]
name = "pandocfilters"
version = "1.5.0"
description = "Utilities for writing pandoc filters in python"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "parso"
version = "0.8.3"
description = "A Python Parser"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
qa = ["flake8 (==3.8.3)", "mypy (==0.782)"]
testing = ["docopt", "pytest (<6.0.0)"]
[[package]]
name = "pastel"
version = "0.2.1"
description = "Bring colors to your terminal."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pathspec"
version = "0.9.0"
description = "Utility library for gitignore style pattern matching of file paths."
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7"
[[package]]
name = "patsy"
version = "0.5.2"
description = "A Python package for describing statistical models and for building design matrices."
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
numpy = ">=1.4"
six = "*"
[package.extras]
test = ["scipy", "pytest-cov", "pytest"]
[[package]]
name = "pexpect"
version = "4.8.0"
description = "Pexpect allows easy control of interactive console applications."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
ptyprocess = ">=0.5"
[[package]]
name = "pickleshare"
version = "0.7.5"
description = "Tiny 'shelve'-like database with concurrency support"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pillow"
version = "9.2.0"
description = "Python Imaging Library (Fork)"
category = "main"
optional = true
python-versions = ">=3.7"
[package.extras]
docs = ["furo", "olefile", "sphinx (>=2.4)", "sphinx-copybutton", "sphinx-issues (>=3.0.1)", "sphinx-removed-in", "sphinxext-opengraph"]
tests = ["check-manifest", "coverage", "defusedxml", "markdown2", "olefile", "packaging", "pyroma", "pytest", "pytest-cov", "pytest-timeout"]
[[package]]
name = "pkginfo"
version = "1.8.3"
description = "Query metadatdata from sdists / bdists / installed packages."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*"
[package.extras]
testing = ["nose", "coverage"]
[[package]]
name = "pkgutil-resolve-name"
version = "1.3.10"
description = "Resolve a name to an object."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "platformdirs"
version = "2.5.2"
description = "A small Python module for determining appropriate platform-specific dirs, e.g. a \"user data dir\"."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
docs = ["furo (>=2021.7.5b38)", "proselint (>=0.10.2)", "sphinx-autodoc-typehints (>=1.12)", "sphinx (>=4)"]
test = ["appdirs (==1.4.4)", "pytest-cov (>=2.7)", "pytest-mock (>=3.6)", "pytest (>=6)"]
[[package]]
name = "pluggy"
version = "1.0.0"
description = "plugin and hook calling mechanisms for python"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
testing = ["pytest-benchmark", "pytest"]
dev = ["tox", "pre-commit"]
[[package]]
name = "plumbum"
version = "1.7.2"
description = "Plumbum: shell combinators library"
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7"
[package.dependencies]
pywin32 = {version = "*", markers = "platform_system == \"Windows\" and platform_python_implementation != \"PyPy\""}
[package.extras]
dev = ["paramiko", "psutil", "pytest", "pytest-cov", "pytest-mock", "pytest-timeout"]
docs = ["Sphinx (>=3.0.0)", "sphinx-rtd-theme (>=0.5.0)"]
ssh = ["paramiko"]
[[package]]
name = "ply"
version = "3.11"
description = "Python Lex & Yacc"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "poethepoet"
version = "0.16.0"
description = "A task runner that works well with poetry."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
pastel = ">=0.2.1,<0.3.0"
tomli = ">=1.2.2"
[package.extras]
poetry_plugin = ["poetry (>=1.0,<2.0)"]
[[package]]
name = "progressbar2"
version = "4.0.0"
description = "A Python Progressbar library to provide visual (yet text based) progress to long running operations."
category = "main"
optional = true
python-versions = ">=3.7.0"
[package.dependencies]
python-utils = ">=3.0.0"
[package.extras]
docs = ["sphinx (>=1.8.5)"]
tests = ["flake8 (>=3.7.7)", "pytest (>=4.6.9)", "pytest-cov (>=2.6.1)", "pytest-mypy", "freezegun (>=0.3.11)", "sphinx (>=1.8.5)"]
[[package]]
name = "prometheus-client"
version = "0.14.1"
description = "Python client for the Prometheus monitoring system."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
twisted = ["twisted"]
[[package]]
name = "prompt-toolkit"
version = "3.0.30"
description = "Library for building powerful interactive command lines in Python"
category = "dev"
optional = false
python-versions = ">=3.6.2"
[package.dependencies]
wcwidth = "*"
[[package]]
name = "protobuf"
version = "3.19.4"
description = "Protocol Buffers"
category = "dev"
optional = false
python-versions = ">=3.5"
[[package]]
name = "psutil"
version = "5.9.1"
description = "Cross-platform lib for process and system monitoring in Python."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.extras]
test = ["ipaddress", "mock", "enum34", "pywin32", "wmi"]
[[package]]
name = "ptyprocess"
version = "0.7.0"
description = "Run a subprocess in a pseudo terminal"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pure-eval"
version = "0.2.2"
description = "Safely evaluate AST nodes without side effects"
category = "dev"
optional = false
python-versions = "*"
[package.extras]
tests = ["pytest"]
[[package]]
name = "py"
version = "1.11.0"
description = "library with cross-python path, ini-parsing, io, code, log facilities"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "pyasn1"
version = "0.4.8"
description = "ASN.1 types and codecs"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pyasn1-modules"
version = "0.2.8"
description = "A collection of ASN.1-based protocols modules."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
pyasn1 = ">=0.4.6,<0.5.0"
[[package]]
name = "pycodestyle"
version = "2.8.0"
description = "Python style guide checker"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "pycparser"
version = "2.21"
description = "C parser in Python"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pydata-sphinx-theme"
version = "0.9.0"
description = "Bootstrap-based Sphinx theme from the PyData community"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
beautifulsoup4 = "*"
docutils = "!=0.17.0"
packaging = "*"
sphinx = ">=4.0.2"
[package.extras]
dev = ["pydata-sphinx-theme", "nox", "pre-commit", "pyyaml"]
coverage = ["pydata-sphinx-theme", "codecov", "pytest-cov"]
test = ["pydata-sphinx-theme", "pytest"]
doc = ["sphinx-design", "xarray", "numpy", "plotly", "jupyter-sphinx", "sphinx-sitemap", "sphinxext-rediraffe", "pytest-regressions", "pytest", "pandas", "myst-parser", "numpydoc"]
[[package]]
name = "pydot"
version = "1.4.2"
description = "Python interface to Graphviz's Dot"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.dependencies]
pyparsing = ">=2.1.4"
[[package]]
name = "pydotplus"
version = "2.0.2"
description = "Python interface to Graphviz's Dot language"
category = "main"
optional = true
python-versions = "*"
[package.dependencies]
pyparsing = ">=2.0.1"
[[package]]
name = "pyflakes"
version = "2.4.0"
description = "passive checker of Python programs"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pygam"
version = "0.8.0"
description = "GAM toolkit"
category = "main"
optional = true
python-versions = "*"
[package.dependencies]
future = "*"
numpy = "*"
progressbar2 = "*"
scipy = "*"
[[package]]
name = "pygments"
version = "2.13.0"
description = "Pygments is a syntax highlighting package written in Python."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
plugins = ["importlib-metadata"]
[[package]]
name = "pygraphviz"
version = "1.10"
description = "Python interface to Graphviz"
category = "main"
optional = false
python-versions = ">=3.8"
[[package]]
name = "pyparsing"
version = "3.0.9"
description = "pyparsing module - Classes and methods to define and execute parsing grammars"
category = "main"
optional = false
python-versions = ">=3.6.8"
[package.extras]
diagrams = ["railroad-diagrams", "jinja2"]
[[package]]
name = "pyro-api"
version = "0.1.2"
description = "Generic API for dispatch to Pyro backends."
category = "main"
optional = true
python-versions = "*"
[package.extras]
test = ["pytest (>=5.0)", "flake8"]
dev = ["ipython", "sphinx-rtd-theme", "sphinx (>=2.0)"]
[[package]]
name = "pyro-ppl"
version = "1.8.1"
description = "A Python library for probabilistic modeling and inference"
category = "main"
optional = true
python-versions = ">=3.7"
[package.dependencies]
numpy = ">=1.7"
opt-einsum = ">=2.3.2"
pyro-api = ">=0.1.1"
torch = ">=1.11.0"
tqdm = ">=4.36"
[package.extras]
dev = ["jupyter (>=1.0.0)", "graphviz (>=0.8)", "matplotlib (>=1.3)", "torchvision (>=0.12.0)", "visdom (>=0.1.4)", "pandas", "pillow (==8.2.0)", "scikit-learn", "seaborn", "wget", "lap", "black (>=21.4b0)", "flake8", "isort (>=5.0)", "mypy (>=0.812)", "nbformat", "nbsphinx (>=0.3.2)", "nbstripout", "nbval", "ninja", "pypandoc", "pytest (>=5.0)", "pytest-xdist", "scipy (>=1.1)", "sphinx", "sphinx-rtd-theme", "yapf"]
extras = ["jupyter (>=1.0.0)", "graphviz (>=0.8)", "matplotlib (>=1.3)", "torchvision (>=0.12.0)", "visdom (>=0.1.4)", "pandas", "pillow (==8.2.0)", "scikit-learn", "seaborn", "wget", "lap"]
funsor = ["funsor[torch] (==0.4.3)"]
horovod = ["horovod[pytorch] (>=0.19)"]
profile = ["prettytable", "pytest-benchmark", "snakeviz"]
test = ["jupyter (>=1.0.0)", "graphviz (>=0.8)", "matplotlib (>=1.3)", "torchvision (>=0.12.0)", "visdom (>=0.1.4)", "pandas", "pillow (==8.2.0)", "scikit-learn", "seaborn", "wget", "lap", "black (>=21.4b0)", "flake8", "nbval", "pytest (>=5.0)", "pytest-cov", "scipy (>=1.1)"]
[[package]]
name = "pyrsistent"
version = "0.18.1"
description = "Persistent/Functional/Immutable data structures"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "pytest"
version = "7.1.2"
description = "pytest: simple powerful testing with Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
atomicwrites = {version = ">=1.0", markers = "sys_platform == \"win32\""}
attrs = ">=19.2.0"
colorama = {version = "*", markers = "sys_platform == \"win32\""}
iniconfig = "*"
packaging = "*"
pluggy = ">=0.12,<2.0"
py = ">=1.8.2"
tomli = ">=1.0.0"
[package.extras]
testing = ["argcomplete", "hypothesis (>=3.56)", "mock", "nose", "pygments (>=2.7.2)", "requests", "xmlschema"]
[[package]]
name = "python-dateutil"
version = "2.8.2"
description = "Extensions to the standard Python datetime module"
category = "main"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7"
[package.dependencies]
six = ">=1.5"
[[package]]
name = "python-utils"
version = "3.3.3"
description = "Python Utils is a module with some convenient utilities not included with the standard Python install"
category = "main"
optional = true
python-versions = ">3.6.0"
[package.extras]
docs = ["mock", "sphinx", "python-utils"]
loguru = ["loguru"]
tests = ["flake8", "pytest", "pytest-cov", "pytest-mypy", "pytest-asyncio", "sphinx", "types-setuptools", "loguru"]
[[package]]
name = "pytz"
version = "2022.2.1"
description = "World timezone definitions, modern and historical"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "pytz-deprecation-shim"
version = "0.1.0.post0"
description = "Shims to make deprecation of pytz easier"
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.7"
[package.dependencies]
"backports.zoneinfo" = {version = "*", markers = "python_version >= \"3.6\" and python_version < \"3.9\""}
tzdata = {version = "*", markers = "python_version >= \"3.6\""}
[[package]]
name = "pywin32"
version = "304"
description = "Python for Window Extensions"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pywin32-ctypes"
version = "0.2.0"
description = ""
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pywinpty"
version = "2.0.7"
description = "Pseudo terminal support for Windows from Python."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "pyzmq"
version = "23.2.1"
description = "Python bindings for 0MQ"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cffi = {version = "*", markers = "implementation_name == \"pypy\""}
py = {version = "*", markers = "implementation_name == \"pypy\""}
[[package]]
name = "qtconsole"
version = "5.3.1"
description = "Jupyter Qt console"
category = "dev"
optional = false
python-versions = ">= 3.7"
[package.dependencies]
ipykernel = ">=4.1"
ipython-genutils = "*"
jupyter-client = ">=4.1"
jupyter-core = "*"
pygments = "*"
pyzmq = ">=17.1"
qtpy = ">=2.0.1"
traitlets = "<5.2.1 || >5.2.1,<5.2.2 || >5.2.2"
[package.extras]
doc = ["Sphinx (>=1.3)"]
test = ["flaky", "pytest", "pytest-qt"]
[[package]]
name = "qtpy"
version = "2.2.0"
description = "Provides an abstraction layer on top of the various Qt bindings (PyQt5/6 and PySide2/6)."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
packaging = "*"
[package.extras]
test = ["pytest-qt", "pytest-cov (>=3.0.0)", "pytest (>=6,!=7.0.0,!=7.0.1)"]
[[package]]
name = "readme-renderer"
version = "36.0"
description = "readme_renderer is a library for rendering \"readme\" descriptions for Warehouse"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
bleach = ">=2.1.0"
docutils = ">=0.13.1"
Pygments = ">=2.5.1"
[package.extras]
md = ["cmarkgfm (>=0.8.0)"]
[[package]]
name = "requests"
version = "2.28.1"
description = "Python HTTP for Humans."
category = "dev"
optional = false
python-versions = ">=3.7, <4"
[package.dependencies]
certifi = ">=2017.4.17"
charset-normalizer = ">=2,<3"
idna = ">=2.5,<4"
urllib3 = ">=1.21.1,<1.27"
[package.extras]
socks = ["PySocks (>=1.5.6,!=1.5.7)"]
use_chardet_on_py3 = ["chardet (>=3.0.2,<6)"]
[[package]]
name = "requests-oauthlib"
version = "1.3.1"
description = "OAuthlib authentication support for Requests."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.dependencies]
oauthlib = ">=3.0.0"
requests = ">=2.0.0"
[package.extras]
rsa = ["oauthlib[signedtoken] (>=3.0.0)"]
[[package]]
name = "requests-toolbelt"
version = "0.9.1"
description = "A utility belt for advanced users of python-requests"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
requests = ">=2.0.1,<3.0.0"
[[package]]
name = "rfc3986"
version = "2.0.0"
description = "Validating URI References per RFC 3986"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
idna2008 = ["idna"]
[[package]]
name = "rich"
version = "12.5.1"
description = "Render rich text, tables, progress bars, syntax highlighting, markdown and more to the terminal"
category = "dev"
optional = false
python-versions = ">=3.6.3,<4.0.0"
[package.dependencies]
commonmark = ">=0.9.0,<0.10.0"
pygments = ">=2.6.0,<3.0.0"
typing-extensions = {version = ">=4.0.0,<5.0", markers = "python_version < \"3.9\""}
[package.extras]
jupyter = ["ipywidgets (>=7.5.1,<8.0.0)"]
[[package]]
name = "rpy2"
version = "3.5.3"
description = "Python interface to the R language (embedded R)"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
cffi = ">=1.10.0"
jinja2 = "*"
packaging = {version = "*", markers = "platform_system == \"Windows\""}
pytz = "*"
tzlocal = "*"
[package.extras]
all = ["numpy", "pytest", "setuptools", "pandas"]
numpy = ["pandas"]
pandas = ["numpy", "pandas"]
setup = ["setuptools"]
test = ["pytest"]
[[package]]
name = "rsa"
version = "4.9"
description = "Pure-Python RSA implementation"
category = "dev"
optional = false
python-versions = ">=3.6,<4"
[package.dependencies]
pyasn1 = ">=0.1.3"
[[package]]
name = "scikit-learn"
version = "1.0.2"
description = "A set of python modules for machine learning and data mining"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
joblib = ">=0.11"
numpy = ">=1.14.6"
scipy = ">=1.1.0"
threadpoolctl = ">=2.0.0"
[package.extras]
tests = ["pyamg (>=4.0.0)", "mypy (>=0.770)", "black (>=21.6b0)", "flake8 (>=3.8.2)", "pytest-cov (>=2.9.0)", "pytest (>=5.0.1)", "pandas (>=0.25.0)", "scikit-image (>=0.14.5)", "matplotlib (>=2.2.3)"]
examples = ["seaborn (>=0.9.0)", "pandas (>=0.25.0)", "scikit-image (>=0.14.5)", "matplotlib (>=2.2.3)"]
docs = ["sphinxext-opengraph (>=0.4.2)", "sphinx-prompt (>=1.3.0)", "Pillow (>=7.1.2)", "numpydoc (>=1.0.0)", "sphinx-gallery (>=0.7.0)", "sphinx (>=4.0.1)", "memory-profiler (>=0.57.0)", "seaborn (>=0.9.0)", "pandas (>=0.25.0)", "scikit-image (>=0.14.5)", "matplotlib (>=2.2.3)"]
benchmark = ["memory-profiler (>=0.57.0)", "pandas (>=0.25.0)", "matplotlib (>=2.2.3)"]
[[package]]
name = "scipy"
version = "1.9.0"
description = "SciPy: Scientific Library for Python"
category = "main"
optional = false
python-versions = ">=3.8,<3.12"
[package.dependencies]
numpy = ">=1.18.5,<1.25.0"
[[package]]
name = "seaborn"
version = "0.11.2"
description = "seaborn: statistical data visualization"
category = "main"
optional = true
python-versions = ">=3.6"
[package.dependencies]
matplotlib = ">=2.2"
numpy = ">=1.15"
pandas = ">=0.23"
scipy = ">=1.0"
[[package]]
name = "secretstorage"
version = "3.3.3"
description = "Python bindings to FreeDesktop.org Secret Service API"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cryptography = ">=2.0"
jeepney = ">=0.6"
[[package]]
name = "send2trash"
version = "1.8.0"
description = "Send file to trash natively under Mac OS X, Windows and Linux."
category = "dev"
optional = false
python-versions = "*"
[package.extras]
win32 = ["pywin32"]
objc = ["pyobjc-framework-cocoa"]
nativelib = ["pywin32", "pyobjc-framework-cocoa"]
[[package]]
name = "setuptools-scm"
version = "6.4.2"
description = "the blessed package to manage your versions by scm tags"
category = "main"
optional = true
python-versions = ">=3.6"
[package.dependencies]
packaging = ">=20.0"
tomli = ">=1.0.0"
[package.extras]
toml = ["setuptools (>=42)"]
test = ["virtualenv (>20)", "pytest (>=6.2)"]
[[package]]
name = "shap"
version = "0.39.0"
description = "A unified approach to explain the output of any machine learning model."
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
cloudpickle = "*"
numba = "*"
numpy = "*"
pandas = "*"
scikit-learn = "*"
scipy = "*"
slicer = "0.0.7"
tqdm = ">4.25.0"
[package.extras]
all = ["pyod", "pytest", "sphinx-rtd-theme", "sentencepiece", "transformers", "lightgbm", "torch", "numpydoc", "nbsphinx", "opencv-python", "ipython", "pytest-mpl", "matplotlib", "xgboost", "catboost", "sphinx", "pytest-cov", "pyspark", "lime"]
docs = ["matplotlib", "ipython", "numpydoc", "sphinx-rtd-theme", "sphinx", "nbsphinx"]
others = ["lime"]
plots = ["matplotlib", "ipython"]
test = ["pytest", "pytest-mpl", "pytest-cov", "xgboost", "lightgbm", "catboost", "pyspark", "pyod", "transformers", "torch", "sentencepiece", "opencv-python"]
[[package]]
name = "six"
version = "1.16.0"
description = "Python 2 and 3 compatibility utilities"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "slicer"
version = "0.0.7"
description = "A small package for big slicing."
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "snowballstemmer"
version = "2.2.0"
description = "This package provides 29 stemmers for 28 languages generated from Snowball algorithms."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "soupsieve"
version = "2.3.2.post1"
description = "A modern CSS selector implementation for Beautiful Soup."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "sparse"
version = "0.13.0"
description = "Sparse n-dimensional arrays"
category = "main"
optional = false
python-versions = ">=3.6, <4"
[package.dependencies]
numba = ">=0.49"
numpy = ">=1.17"
scipy = ">=0.19"
[package.extras]
all = ["dask", "pytest (>=3.5)", "pytest-black", "pytest-cov", "tox", "sphinx", "sphinx-rtd-theme"]
docs = ["sphinx", "sphinx-rtd-theme"]
tests = ["dask", "pytest (>=3.5)", "pytest-black", "pytest-cov"]
tox = ["dask", "pytest (>=3.5)", "pytest-black", "pytest-cov", "tox"]
[[package]]
name = "sphinx"
version = "5.1.1"
description = "Python documentation generator"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
alabaster = ">=0.7,<0.8"
babel = ">=1.3"
colorama = {version = ">=0.3.5", markers = "sys_platform == \"win32\""}
docutils = ">=0.14,<0.20"
imagesize = "*"
importlib-metadata = {version = ">=4.4", markers = "python_version < \"3.10\""}
Jinja2 = ">=2.3"
packaging = "*"
Pygments = ">=2.0"
requests = ">=2.5.0"
snowballstemmer = ">=1.1"
sphinxcontrib-applehelp = "*"
sphinxcontrib-devhelp = "*"
sphinxcontrib-htmlhelp = ">=2.0.0"
sphinxcontrib-jsmath = "*"
sphinxcontrib-qthelp = "*"
sphinxcontrib-serializinghtml = ">=1.1.5"
[package.extras]
docs = ["sphinxcontrib-websupport"]
lint = ["flake8 (>=3.5.0)", "flake8-comprehensions", "flake8-bugbear", "isort", "mypy (>=0.971)", "sphinx-lint", "docutils-stubs", "types-typed-ast", "types-requests"]
test = ["pytest (>=4.6)", "html5lib", "cython", "typed-ast"]
[[package]]
name = "sphinx-multiversion"
version = "0.2.4"
description = ""
category = "dev"
optional = false
python-versions = "*"
develop = false
[package.dependencies]
sphinx = ">=2.1"
[package.source]
type = "git"
url = "https://github.com/petergtz/sphinx-multiversion.git"
reference = "override-version-with-refname"
resolved_reference = "cd3a67e7998c5205e2456e7cb1026bd3b9f36253"
[[package]]
name = "sphinx-rtd-theme"
version = "1.0.0"
description = "Read the Docs theme for Sphinx"
category = "dev"
optional = false
python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*"
[package.dependencies]
docutils = "<0.18"
sphinx = ">=1.6"
[package.extras]
dev = ["bump2version", "sphinxcontrib-httpdomain", "transifex-client"]
[[package]]
name = "sphinxcontrib-applehelp"
version = "1.0.2"
description = "sphinxcontrib-applehelp is a sphinx extension which outputs Apple help books"
category = "dev"
optional = false
python-versions = ">=3.5"
[package.extras]
test = ["pytest"]
lint = ["docutils-stubs", "mypy", "flake8"]
[[package]]
name = "sphinxcontrib-devhelp"
version = "1.0.2"
description = "sphinxcontrib-devhelp is a sphinx extension which outputs Devhelp document."
category = "dev"
optional = false
python-versions = ">=3.5"
[package.extras]
test = ["pytest"]
lint = ["docutils-stubs", "mypy", "flake8"]
[[package]]
name = "sphinxcontrib-googleanalytics"
version = "0.2"
description = ""
category = "dev"
optional = false
python-versions = "*"
develop = false
[package.dependencies]
Sphinx = ">=0.6"
[package.source]
type = "git"
url = "https://github.com/petergtz/googleanalytics.git"
reference = "master"
resolved_reference = "f32bcb8c0fb74bf284652b9a5c154337e20058b9"
[[package]]
name = "sphinxcontrib-htmlhelp"
version = "2.0.0"
description = "sphinxcontrib-htmlhelp is a sphinx extension which renders HTML help files"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
test = ["html5lib", "pytest"]
lint = ["docutils-stubs", "mypy", "flake8"]
[[package]]
name = "sphinxcontrib-jsmath"
version = "1.0.1"
description = "A sphinx extension which renders display math in HTML via JavaScript"
category = "dev"
optional = false
python-versions = ">=3.5"
[package.extras]
test = ["mypy", "flake8", "pytest"]
[[package]]
name = "sphinxcontrib-qthelp"
version = "1.0.3"
description = "sphinxcontrib-qthelp is a sphinx extension which outputs QtHelp document."
category = "dev"
optional = false
python-versions = ">=3.5"
[package.extras]
test = ["pytest"]
lint = ["docutils-stubs", "mypy", "flake8"]
[[package]]
name = "sphinxcontrib-serializinghtml"
version = "1.1.5"
description = "sphinxcontrib-serializinghtml is a sphinx extension which outputs \"serialized\" HTML files (json and pickle)."
category = "dev"
optional = false
python-versions = ">=3.5"
[package.extras]
test = ["pytest"]
lint = ["docutils-stubs", "mypy", "flake8"]
[[package]]
name = "stack-data"
version = "0.4.0"
description = "Extract data from python stack frames and tracebacks for informative displays"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
asttokens = "*"
executing = "*"
pure-eval = "*"
[package.extras]
tests = ["cython", "littleutils", "pygments", "typeguard", "pytest"]
[[package]]
name = "statsmodels"
version = "0.13.2"
description = "Statistical computations and models for Python"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
numpy = ">=1.17"
packaging = ">=21.3"
pandas = ">=0.25"
patsy = ">=0.5.2"
scipy = ">=1.3"
[package.extras]
build = ["cython (>=0.29.26)"]
develop = ["cython (>=0.29.26)"]
docs = ["sphinx", "nbconvert", "jupyter-client", "ipykernel", "matplotlib", "nbformat", "numpydoc", "pandas-datareader"]
[[package]]
name = "sympy"
version = "1.10.1"
description = "Computer algebra system (CAS) in Python"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
mpmath = ">=0.19"
[[package]]
name = "tensorboard"
version = "2.9.1"
description = "TensorBoard lets you watch Tensors Flow"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
absl-py = ">=0.4"
google-auth = ">=1.6.3,<3"
google-auth-oauthlib = ">=0.4.1,<0.5"
grpcio = ">=1.24.3"
markdown = ">=2.6.8"
numpy = ">=1.12.0"
protobuf = ">=3.9.2,<3.20"
requests = ">=2.21.0,<3"
tensorboard-data-server = ">=0.6.0,<0.7.0"
tensorboard-plugin-wit = ">=1.6.0"
werkzeug = ">=1.0.1"
[[package]]
name = "tensorboard-data-server"
version = "0.6.1"
description = "Fast data loading for TensorBoard"
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "tensorboard-plugin-wit"
version = "1.8.1"
description = "What-If Tool TensorBoard plugin."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "tensorflow"
version = "2.9.1"
description = "TensorFlow is an open source machine learning framework for everyone."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
absl-py = ">=1.0.0"
astunparse = ">=1.6.0"
flatbuffers = ">=1.12,<2"
gast = ">=0.2.1,<=0.4.0"
google-pasta = ">=0.1.1"
grpcio = ">=1.24.3,<2.0"
h5py = ">=2.9.0"
keras = ">=2.9.0rc0,<2.10.0"
keras-preprocessing = ">=1.1.1"
libclang = ">=13.0.0"
numpy = ">=1.20"
opt-einsum = ">=2.3.2"
packaging = "*"
protobuf = ">=3.9.2,<3.20"
six = ">=1.12.0"
tensorboard = ">=2.9,<2.10"
tensorflow-estimator = ">=2.9.0rc0,<2.10.0"
tensorflow-io-gcs-filesystem = ">=0.23.1"
termcolor = ">=1.1.0"
typing-extensions = ">=3.6.6"
wrapt = ">=1.11.0"
[[package]]
name = "tensorflow-estimator"
version = "2.9.0"
description = "TensorFlow Estimator."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "tensorflow-io-gcs-filesystem"
version = "0.26.0"
description = "TensorFlow IO"
category = "dev"
optional = false
python-versions = ">=3.7, <3.11"
[package.extras]
tensorflow = ["tensorflow (>=2.9.0,<2.10.0)"]
tensorflow-aarch64 = ["tensorflow-aarch64 (>=2.9.0,<2.10.0)"]
tensorflow-cpu = ["tensorflow-cpu (>=2.9.0,<2.10.0)"]
tensorflow-gpu = ["tensorflow-gpu (>=2.9.0,<2.10.0)"]
tensorflow-rocm = ["tensorflow-rocm (>=2.9.0,<2.10.0)"]
[[package]]
name = "termcolor"
version = "1.1.0"
description = "ANSII Color formatting for output in terminal."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "terminado"
version = "0.15.0"
description = "Tornado websocket backend for the Xterm.js Javascript terminal emulator library."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
ptyprocess = {version = "*", markers = "os_name != \"nt\""}
pywinpty = {version = ">=1.1.0", markers = "os_name == \"nt\""}
tornado = ">=6.1.0"
[package.extras]
test = ["pre-commit", "pytest-timeout", "pytest (>=6.0)"]
[[package]]
name = "threadpoolctl"
version = "3.1.0"
description = "threadpoolctl"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "tinycss2"
version = "1.1.1"
description = "A tiny CSS parser"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
webencodings = ">=0.4"
[package.extras]
test = ["coverage", "pytest-isort", "pytest-flake8", "pytest-cov", "pytest"]
doc = ["sphinx-rtd-theme", "sphinx"]
[[package]]
name = "tokenize-rt"
version = "4.2.1"
description = "A wrapper around the stdlib `tokenize` which roundtrips."
category = "dev"
optional = false
python-versions = ">=3.6.1"
[[package]]
name = "tomli"
version = "2.0.1"
description = "A lil' TOML parser"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "torch"
version = "1.12.1"
description = "Tensors and Dynamic neural networks in Python with strong GPU acceleration"
category = "main"
optional = true
python-versions = ">=3.7.0"
[package.dependencies]
typing-extensions = "*"
[[package]]
name = "tornado"
version = "6.2"
description = "Tornado is a Python web framework and asynchronous networking library, originally developed at FriendFeed."
category = "dev"
optional = false
python-versions = ">= 3.7"
[[package]]
name = "tqdm"
version = "4.64.0"
description = "Fast, Extensible Progress Meter"
category = "main"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,>=2.7"
[package.dependencies]
colorama = {version = "*", markers = "platform_system == \"Windows\""}
[package.extras]
dev = ["py-make (>=0.1.0)", "twine", "wheel"]
notebook = ["ipywidgets (>=6)"]
slack = ["slack-sdk"]
telegram = ["requests"]
[[package]]
name = "traitlets"
version = "5.3.0"
description = ""
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
test = ["pre-commit", "pytest"]
[[package]]
name = "twine"
version = "4.0.1"
description = "Collection of utilities for publishing packages on PyPI"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
importlib-metadata = ">=3.6"
keyring = ">=15.1"
pkginfo = ">=1.8.1"
readme-renderer = ">=35.0"
requests = ">=2.20"
requests-toolbelt = ">=0.8.0,<0.9.0 || >0.9.0"
rfc3986 = ">=1.4.0"
rich = ">=12.0.0"
urllib3 = ">=1.26.0"
[[package]]
name = "typing-extensions"
version = "4.3.0"
description = "Backported and Experimental Type Hints for Python 3.7+"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "tzdata"
version = "2022.2"
description = "Provider of IANA time zone data"
category = "dev"
optional = false
python-versions = ">=2"
[[package]]
name = "tzlocal"
version = "4.2"
description = "tzinfo object for the local timezone"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
"backports.zoneinfo" = {version = "*", markers = "python_version < \"3.9\""}
pytz-deprecation-shim = "*"
tzdata = {version = "*", markers = "platform_system == \"Windows\""}
[package.extras]
devenv = ["black", "pyroma", "pytest-cov", "zest.releaser"]
test = ["pytest-mock (>=3.3)", "pytest (>=4.3)"]
[[package]]
name = "urllib3"
version = "1.26.11"
description = "HTTP library with thread-safe connection pooling, file post, and more."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*, <4"
[package.extras]
brotli = ["brotlicffi (>=0.8.0)", "brotli (>=1.0.9)", "brotlipy (>=0.6.0)"]
secure = ["pyOpenSSL (>=0.14)", "cryptography (>=1.3.4)", "idna (>=2.0.0)", "certifi", "ipaddress"]
socks = ["PySocks (>=1.5.6,!=1.5.7,<2.0)"]
[[package]]
name = "wcwidth"
version = "0.2.5"
description = "Measures the displayed width of unicode strings in a terminal"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "webencodings"
version = "0.5.1"
description = "Character encoding aliases for legacy web content"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "werkzeug"
version = "2.2.2"
description = "The comprehensive WSGI web application library."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
MarkupSafe = ">=2.1.1"
[package.extras]
watchdog = ["watchdog"]
[[package]]
name = "widgetsnbextension"
version = "4.0.2"
description = "Jupyter interactive widgets for Jupyter Notebook"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "wrapt"
version = "1.14.1"
description = "Module for decorators, wrappers and monkey patching."
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7"
[[package]]
name = "xgboost"
version = "1.6.1"
description = "XGBoost Python Package"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
numpy = "*"
scipy = "*"
[package.extras]
dask = ["dask", "pandas", "distributed"]
datatable = ["datatable"]
pandas = ["pandas"]
plotting = ["graphviz", "matplotlib"]
scikit-learn = ["scikit-learn"]
[[package]]
name = "zipp"
version = "3.8.1"
description = "Backport of pathlib-compatible object wrapper for zip files"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
docs = ["sphinx", "jaraco.packaging (>=9)", "rst.linker (>=1.9)", "jaraco.tidelift (>=1.4)"]
testing = ["pytest (>=6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest-cov", "pytest-enabler (>=1.3)", "jaraco.itertools", "func-timeout", "pytest-black (>=0.3.7)", "pytest-mypy (>=0.9.1)"]
[extras]
causalml = ["causalml", "llvmlite"]
plotting = ["matplotlib"]
[metadata]
lock-version = "1.1"
python-versions = ">=3.8,<3.10"
content-hash = "cfddbd7903c034dac2384b07f8c0cd40912973cd783f0425cc2eb898e444e01f"
[metadata.files]
absl-py = [
{file = "absl-py-1.2.0.tar.gz", hash = "sha256:f568809938c49abbda89826223c992b630afd23c638160ad7840cfe347710d97"},
{file = "absl_py-1.2.0-py3-none-any.whl", hash = "sha256:5d15f85b8cc859c6245bc9886ba664460ed96a6fee895416caa37d669ee74a9a"},
]
alabaster = [
{file = "alabaster-0.7.12-py2.py3-none-any.whl", hash = "sha256:446438bdcca0e05bd45ea2de1668c1d9b032e1a9154c2c259092d77031ddd359"},
{file = "alabaster-0.7.12.tar.gz", hash = "sha256:a661d72d58e6ea8a57f7a86e37d86716863ee5e92788398526d58b26a4e4dc02"},
]
appnope = [
{file = "appnope-0.1.3-py2.py3-none-any.whl", hash = "sha256:265a455292d0bd8a72453494fa24df5a11eb18373a60c7c0430889f22548605e"},
{file = "appnope-0.1.3.tar.gz", hash = "sha256:02bd91c4de869fbb1e1c50aafc4098827a7a54ab2f39d9dcba6c9547ed920e24"},
]
argon2-cffi = [
{file = "argon2-cffi-21.3.0.tar.gz", hash = "sha256:d384164d944190a7dd7ef22c6aa3ff197da12962bd04b17f64d4e93d934dba5b"},
{file = "argon2_cffi-21.3.0-py3-none-any.whl", hash = "sha256:8c976986f2c5c0e5000919e6de187906cfd81fb1c72bf9d88c01177e77da7f80"},
]
argon2-cffi-bindings = [
{file = "argon2-cffi-bindings-21.2.0.tar.gz", hash = "sha256:bb89ceffa6c791807d1305ceb77dbfacc5aa499891d2c55661c6459651fc39e3"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-macosx_10_9_x86_64.whl", hash = "sha256:ccb949252cb2ab3a08c02024acb77cfb179492d5701c7cbdbfd776124d4d2367"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9524464572e12979364b7d600abf96181d3541da11e23ddf565a32e70bd4dc0d"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b746dba803a79238e925d9046a63aa26bf86ab2a2fe74ce6b009a1c3f5c8f2ae"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:58ed19212051f49a523abb1dbe954337dc82d947fb6e5a0da60f7c8471a8476c"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:bd46088725ef7f58b5a1ef7ca06647ebaf0eb4baff7d1d0d177c6cc8744abd86"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_i686.whl", hash = "sha256:8cd69c07dd875537a824deec19f978e0f2078fdda07fd5c42ac29668dda5f40f"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:f1152ac548bd5b8bcecfb0b0371f082037e47128653df2e8ba6e914d384f3c3e"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-win32.whl", hash = "sha256:603ca0aba86b1349b147cab91ae970c63118a0f30444d4bc80355937c950c082"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-win_amd64.whl", hash = "sha256:b2ef1c30440dbbcba7a5dc3e319408b59676e2e039e2ae11a8775ecf482b192f"},
{file = "argon2_cffi_bindings-21.2.0-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:e415e3f62c8d124ee16018e491a009937f8cf7ebf5eb430ffc5de21b900dad93"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:3e385d1c39c520c08b53d63300c3ecc28622f076f4c2b0e6d7e796e9f6502194"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2c3e3cc67fdb7d82c4718f19b4e7a87123caf8a93fde7e23cf66ac0337d3cb3f"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6a22ad9800121b71099d0fb0a65323810a15f2e292f2ba450810a7316e128ee5"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f9f8b450ed0547e3d473fdc8612083fd08dd2120d6ac8f73828df9b7d45bb351"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:93f9bf70084f97245ba10ee36575f0c3f1e7d7724d67d8e5b08e61787c320ed7"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:3b9ef65804859d335dc6b31582cad2c5166f0c3e7975f324d9ffaa34ee7e6583"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d4966ef5848d820776f5f562a7d45fdd70c2f330c961d0d745b784034bd9f48d"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:20ef543a89dee4db46a1a6e206cd015360e5a75822f76df533845c3cbaf72670"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ed2937d286e2ad0cc79a7087d3c272832865f779430e0cc2b4f3718d3159b0cb"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:5e00316dabdaea0b2dd82d141cc66889ced0cdcbfa599e8b471cf22c620c329a"},
]
asttokens = [
{file = "asttokens-2.0.8-py2.py3-none-any.whl", hash = "sha256:e3305297c744ae53ffa032c45dc347286165e4ffce6875dc662b205db0623d86"},
{file = "asttokens-2.0.8.tar.gz", hash = "sha256:c61e16246ecfb2cde2958406b4c8ebc043c9e6d73aaa83c941673b35e5d3a76b"},
]
astunparse = [
{file = "astunparse-1.6.3-py2.py3-none-any.whl", hash = "sha256:c2652417f2c8b5bb325c885ae329bdf3f86424075c4fd1a128674bc6fba4b8e8"},
{file = "astunparse-1.6.3.tar.gz", hash = "sha256:5ad93a8456f0d084c3456d059fd9a92cce667963232cbf763eac3bc5b7940872"},
]
atomicwrites = [
{file = "atomicwrites-1.4.1.tar.gz", hash = "sha256:81b2c9071a49367a7f770170e5eec8cb66567cfbbc8c73d20ce5ca4a8d71cf11"},
]
attrs = [
{file = "attrs-22.1.0-py2.py3-none-any.whl", hash = "sha256:86efa402f67bf2df34f51a335487cf46b1ec130d02b8d39fd248abfd30da551c"},
{file = "attrs-22.1.0.tar.gz", hash = "sha256:29adc2665447e5191d0e7c568fde78b21f9672d344281d0c6e1ab085429b22b6"},
]
babel = [
{file = "Babel-2.10.3-py3-none-any.whl", hash = "sha256:ff56f4892c1c4bf0d814575ea23471c230d544203c7748e8c68f0089478d48eb"},
{file = "Babel-2.10.3.tar.gz", hash = "sha256:7614553711ee97490f732126dc077f8d0ae084ebc6a96e23db1482afabdb2c51"},
]
backcall = [
{file = "backcall-0.2.0-py2.py3-none-any.whl", hash = "sha256:fbbce6a29f263178a1f7915c1940bde0ec2b2a967566fe1c65c1dfb7422bd255"},
{file = "backcall-0.2.0.tar.gz", hash = "sha256:5cbdbf27be5e7cfadb448baf0aa95508f91f2bbc6c6437cd9cd06e2a4c215e1e"},
]
"backports.zoneinfo" = [
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-macosx_10_14_x86_64.whl", hash = "sha256:da6013fd84a690242c310d77ddb8441a559e9cb3d3d59ebac9aca1a57b2e18bc"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:89a48c0d158a3cc3f654da4c2de1ceba85263fafb861b98b59040a5086259722"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:1c5742112073a563c81f786e77514969acb58649bcdf6cdf0b4ed31a348d4546"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-win32.whl", hash = "sha256:e8236383a20872c0cdf5a62b554b27538db7fa1bbec52429d8d106effbaeca08"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-win_amd64.whl", hash = "sha256:8439c030a11780786a2002261569bdf362264f605dfa4d65090b64b05c9f79a7"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:f04e857b59d9d1ccc39ce2da1021d196e47234873820cbeaad210724b1ee28ac"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:17746bd546106fa389c51dbea67c8b7c8f0d14b5526a579ca6ccf5ed72c526cf"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:5c144945a7752ca544b4b78c8c41544cdfaf9786f25fe5ffb10e838e19a27570"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-win32.whl", hash = "sha256:e55b384612d93be96506932a786bbcde5a2db7a9e6a4bb4bffe8b733f5b9036b"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-win_amd64.whl", hash = "sha256:a76b38c52400b762e48131494ba26be363491ac4f9a04c1b7e92483d169f6582"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:8961c0f32cd0336fb8e8ead11a1f8cd99ec07145ec2931122faaac1c8f7fd987"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-manylinux1_i686.whl", hash = "sha256:e81b76cace8eda1fca50e345242ba977f9be6ae3945af8d46326d776b4cf78d1"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:7b0a64cda4145548fed9efc10322770f929b944ce5cee6c0dfe0c87bf4c0c8c9"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-win32.whl", hash = "sha256:1b13e654a55cd45672cb54ed12148cd33628f672548f373963b0bff67b217328"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-win_amd64.whl", hash = "sha256:4a0f800587060bf8880f954dbef70de6c11bbe59c673c3d818921f042f9954a6"},
{file = "backports.zoneinfo-0.2.1.tar.gz", hash = "sha256:fadbfe37f74051d024037f223b8e001611eac868b5c5b06144ef4d8b799862f2"},
]
beautifulsoup4 = [
{file = "beautifulsoup4-4.11.1-py3-none-any.whl", hash = "sha256:58d5c3d29f5a36ffeb94f02f0d786cd53014cf9b3b3951d42e0080d8a9498d30"},
{file = "beautifulsoup4-4.11.1.tar.gz", hash = "sha256:ad9aa55b65ef2808eb405f46cf74df7fcb7044d5cbc26487f96eb2ef2e436693"},
]
black = [
{file = "black-22.6.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:f586c26118bc6e714ec58c09df0157fe2d9ee195c764f630eb0d8e7ccce72e69"},
{file = "black-22.6.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:b270a168d69edb8b7ed32c193ef10fd27844e5c60852039599f9184460ce0807"},
{file = "black-22.6.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:6797f58943fceb1c461fb572edbe828d811e719c24e03375fd25170ada53825e"},
{file = "black-22.6.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c85928b9d5f83b23cee7d0efcb310172412fbf7cb9d9ce963bd67fd141781def"},
{file = "black-22.6.0-cp310-cp310-win_amd64.whl", hash = "sha256:f6fe02afde060bbeef044af7996f335fbe90b039ccf3f5eb8f16df8b20f77666"},
{file = "black-22.6.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:cfaf3895a9634e882bf9d2363fed5af8888802d670f58b279b0bece00e9a872d"},
{file = "black-22.6.0-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:94783f636bca89f11eb5d50437e8e17fbc6a929a628d82304c80fa9cd945f256"},
{file = "black-22.6.0-cp36-cp36m-win_amd64.whl", hash = "sha256:2ea29072e954a4d55a2ff58971b83365eba5d3d357352a07a7a4df0d95f51c78"},
{file = "black-22.6.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:e439798f819d49ba1c0bd9664427a05aab79bfba777a6db94fd4e56fae0cb849"},
{file = "black-22.6.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:187d96c5e713f441a5829e77120c269b6514418f4513a390b0499b0987f2ff1c"},
{file = "black-22.6.0-cp37-cp37m-win_amd64.whl", hash = "sha256:074458dc2f6e0d3dab7928d4417bb6957bb834434516f21514138437accdbe90"},
{file = "black-22.6.0-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:a218d7e5856f91d20f04e931b6f16d15356db1c846ee55f01bac297a705ca24f"},
{file = "black-22.6.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:568ac3c465b1c8b34b61cd7a4e349e93f91abf0f9371eda1cf87194663ab684e"},
{file = "black-22.6.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:6c1734ab264b8f7929cef8ae5f900b85d579e6cbfde09d7387da8f04771b51c6"},
{file = "black-22.6.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c9a3ac16efe9ec7d7381ddebcc022119794872abce99475345c5a61aa18c45ad"},
{file = "black-22.6.0-cp38-cp38-win_amd64.whl", hash = "sha256:b9fd45787ba8aa3f5e0a0a98920c1012c884622c6c920dbe98dbd05bc7c70fbf"},
{file = "black-22.6.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:7ba9be198ecca5031cd78745780d65a3f75a34b2ff9be5837045dce55db83d1c"},
{file = "black-22.6.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:a3db5b6409b96d9bd543323b23ef32a1a2b06416d525d27e0f67e74f1446c8f2"},
{file = "black-22.6.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:560558527e52ce8afba936fcce93a7411ab40c7d5fe8c2463e279e843c0328ee"},
{file = "black-22.6.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b154e6bbde1e79ea3260c4b40c0b7b3109ffcdf7bc4ebf8859169a6af72cd70b"},
{file = "black-22.6.0-cp39-cp39-win_amd64.whl", hash = "sha256:4af5bc0e1f96be5ae9bd7aaec219c901a94d6caa2484c21983d043371c733fc4"},
{file = "black-22.6.0-py3-none-any.whl", hash = "sha256:ac609cf8ef5e7115ddd07d85d988d074ed00e10fbc3445aee393e70164a2219c"},
{file = "black-22.6.0.tar.gz", hash = "sha256:6c6d39e28aed379aec40da1c65434c77d75e65bb59a1e1c283de545fb4e7c6c9"},
]
bleach = [
{file = "bleach-5.0.1-py3-none-any.whl", hash = "sha256:085f7f33c15bd408dd9b17a4ad77c577db66d76203e5984b1bd59baeee948b2a"},
{file = "bleach-5.0.1.tar.gz", hash = "sha256:0d03255c47eb9bd2f26aa9bb7f2107732e7e8fe195ca2f64709fcf3b0a4a085c"},
]
cachetools = [
{file = "cachetools-5.2.0-py3-none-any.whl", hash = "sha256:f9f17d2aec496a9aa6b76f53e3b614c965223c061982d434d160f930c698a9db"},
{file = "cachetools-5.2.0.tar.gz", hash = "sha256:6a94c6402995a99c3970cc7e4884bb60b4a8639938157eeed436098bf9831757"},
]
causalml = [
{file = "causalml-0.12.3.tar.gz", hash = "sha256:4d85afa0590e4c60818311c2be9a5be64b8c6964b134c77654b738df551454fa"},
]
certifi = [
{file = "certifi-2022.6.15-py3-none-any.whl", hash = "sha256:fe86415d55e84719d75f8b69414f6438ac3547d2078ab91b67e779ef69378412"},
{file = "certifi-2022.6.15.tar.gz", hash = "sha256:84c85a9078b11105f04f3036a9482ae10e4621616db313fe045dd24743a0820d"},
]
cffi = [
{file = "cffi-1.15.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:a66d3508133af6e8548451b25058d5812812ec3798c886bf38ed24a98216fab2"},
{file = "cffi-1.15.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:470c103ae716238bbe698d67ad020e1db9d9dba34fa5a899b5e21577e6d52ed2"},
{file = "cffi-1.15.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:9ad5db27f9cabae298d151c85cf2bad1d359a1b9c686a275df03385758e2f914"},
{file = "cffi-1.15.1-cp27-cp27m-win32.whl", hash = "sha256:b3bbeb01c2b273cca1e1e0c5df57f12dce9a4dd331b4fa1635b8bec26350bde3"},
{file = "cffi-1.15.1-cp27-cp27m-win_amd64.whl", hash = "sha256:e00b098126fd45523dd056d2efba6c5a63b71ffe9f2bbe1a4fe1716e1d0c331e"},
{file = "cffi-1.15.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:d61f4695e6c866a23a21acab0509af1cdfd2c013cf256bbf5b6b5e2695827162"},
{file = "cffi-1.15.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:ed9cb427ba5504c1dc15ede7d516b84757c3e3d7868ccc85121d9310d27eed0b"},
{file = "cffi-1.15.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:39d39875251ca8f612b6f33e6b1195af86d1b3e60086068be9cc053aa4376e21"},
{file = "cffi-1.15.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:285d29981935eb726a4399badae8f0ffdff4f5050eaa6d0cfc3f64b857b77185"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3eb6971dcff08619f8d91607cfc726518b6fa2a9eba42856be181c6d0d9515fd"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:21157295583fe8943475029ed5abdcf71eb3911894724e360acff1d61c1d54bc"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5635bd9cb9731e6d4a1132a498dd34f764034a8ce60cef4f5319c0541159392f"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2012c72d854c2d03e45d06ae57f40d78e5770d252f195b93f581acf3ba44496e"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dd86c085fae2efd48ac91dd7ccffcfc0571387fe1193d33b6394db7ef31fe2a4"},
{file = "cffi-1.15.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:fa6693661a4c91757f4412306191b6dc88c1703f780c8234035eac011922bc01"},
{file = "cffi-1.15.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:59c0b02d0a6c384d453fece7566d1c7e6b7bae4fc5874ef2ef46d56776d61c9e"},
{file = "cffi-1.15.1-cp310-cp310-win32.whl", hash = "sha256:cba9d6b9a7d64d4bd46167096fc9d2f835e25d7e4c121fb2ddfc6528fb0413b2"},
{file = "cffi-1.15.1-cp310-cp310-win_amd64.whl", hash = "sha256:ce4bcc037df4fc5e3d184794f27bdaab018943698f4ca31630bc7f84a7b69c6d"},
{file = "cffi-1.15.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3d08afd128ddaa624a48cf2b859afef385b720bb4b43df214f85616922e6a5ac"},
{file = "cffi-1.15.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:3799aecf2e17cf585d977b780ce79ff0dc9b78d799fc694221ce814c2c19db83"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a591fe9e525846e4d154205572a029f653ada1a78b93697f3b5a8f1f2bc055b9"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3548db281cd7d2561c9ad9984681c95f7b0e38881201e157833a2342c30d5e8c"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:91fc98adde3d7881af9b59ed0294046f3806221863722ba7d8d120c575314325"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:94411f22c3985acaec6f83c6df553f2dbe17b698cc7f8ae751ff2237d96b9e3c"},
{file = "cffi-1.15.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:03425bdae262c76aad70202debd780501fabeaca237cdfddc008987c0e0f59ef"},
{file = "cffi-1.15.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:cc4d65aeeaa04136a12677d3dd0b1c0c94dc43abac5860ab33cceb42b801c1e8"},
{file = "cffi-1.15.1-cp311-cp311-win32.whl", hash = "sha256:a0f100c8912c114ff53e1202d0078b425bee3649ae34d7b070e9697f93c5d52d"},
{file = "cffi-1.15.1-cp311-cp311-win_amd64.whl", hash = "sha256:04ed324bda3cda42b9b695d51bb7d54b680b9719cfab04227cdd1e04e5de3104"},
{file = "cffi-1.15.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:50a74364d85fd319352182ef59c5c790484a336f6db772c1a9231f1c3ed0cbd7"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e263d77ee3dd201c3a142934a086a4450861778baaeeb45db4591ef65550b0a6"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:cec7d9412a9102bdc577382c3929b337320c4c4c4849f2c5cdd14d7368c5562d"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4289fc34b2f5316fbb762d75362931e351941fa95fa18789191b33fc4cf9504a"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:173379135477dc8cac4bc58f45db08ab45d228b3363adb7af79436135d028405"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:6975a3fac6bc83c4a65c9f9fcab9e47019a11d3d2cf7f3c0d03431bf145a941e"},
{file = "cffi-1.15.1-cp36-cp36m-win32.whl", hash = "sha256:2470043b93ff09bf8fb1d46d1cb756ce6132c54826661a32d4e4d132e1977adf"},
{file = "cffi-1.15.1-cp36-cp36m-win_amd64.whl", hash = "sha256:30d78fbc8ebf9c92c9b7823ee18eb92f2e6ef79b45ac84db507f52fbe3ec4497"},
{file = "cffi-1.15.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:198caafb44239b60e252492445da556afafc7d1e3ab7a1fb3f0584ef6d742375"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5ef34d190326c3b1f822a5b7a45f6c4535e2f47ed06fec77d3d799c450b2651e"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8102eaf27e1e448db915d08afa8b41d6c7ca7a04b7d73af6514df10a3e74bd82"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5df2768244d19ab7f60546d0c7c63ce1581f7af8b5de3eb3004b9b6fc8a9f84b"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a8c4917bd7ad33e8eb21e9a5bbba979b49d9a97acb3a803092cbc1133e20343c"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0e2642fe3142e4cc4af0799748233ad6da94c62a8bec3a6648bf8ee68b1c7426"},
{file = "cffi-1.15.1-cp37-cp37m-win32.whl", hash = "sha256:e229a521186c75c8ad9490854fd8bbdd9a0c9aa3a524326b55be83b54d4e0ad9"},
{file = "cffi-1.15.1-cp37-cp37m-win_amd64.whl", hash = "sha256:a0b71b1b8fbf2b96e41c4d990244165e2c9be83d54962a9a1d118fd8657d2045"},
{file = "cffi-1.15.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:320dab6e7cb2eacdf0e658569d2575c4dad258c0fcc794f46215e1e39f90f2c3"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1e74c6b51a9ed6589199c787bf5f9875612ca4a8a0785fb2d4a84429badaf22a"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a5c84c68147988265e60416b57fc83425a78058853509c1b0629c180094904a5"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3b926aa83d1edb5aa5b427b4053dc420ec295a08e40911296b9eb1b6170f6cca"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:87c450779d0914f2861b8526e035c5e6da0a3199d8f1add1a665e1cbc6fc6d02"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4f2c9f67e9821cad2e5f480bc8d83b8742896f1242dba247911072d4fa94c192"},
{file = "cffi-1.15.1-cp38-cp38-win32.whl", hash = "sha256:8b7ee99e510d7b66cdb6c593f21c043c248537a32e0bedf02e01e9553a172314"},
{file = "cffi-1.15.1-cp38-cp38-win_amd64.whl", hash = "sha256:00a9ed42e88df81ffae7a8ab6d9356b371399b91dbdf0c3cb1e84c03a13aceb5"},
{file = "cffi-1.15.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:54a2db7b78338edd780e7ef7f9f6c442500fb0d41a5a4ea24fff1c929d5af585"},
{file = "cffi-1.15.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:fcd131dd944808b5bdb38e6f5b53013c5aa4f334c5cad0c72742f6eba4b73db0"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7473e861101c9e72452f9bf8acb984947aa1661a7704553a9f6e4baa5ba64415"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6c9a799e985904922a4d207a94eae35c78ebae90e128f0c4e521ce339396be9d"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3bcde07039e586f91b45c88f8583ea7cf7a0770df3a1649627bf598332cb6984"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:33ab79603146aace82c2427da5ca6e58f2b3f2fb5da893ceac0c42218a40be35"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d598b938678ebf3c67377cdd45e09d431369c3b1a5b331058c338e201f12b27"},
{file = "cffi-1.15.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:db0fbb9c62743ce59a9ff687eb5f4afbe77e5e8403d6697f7446e5f609976f76"},
{file = "cffi-1.15.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:98d85c6a2bef81588d9227dde12db8a7f47f639f4a17c9ae08e773aa9c697bf3"},
{file = "cffi-1.15.1-cp39-cp39-win32.whl", hash = "sha256:40f4774f5a9d4f5e344f31a32b5096977b5d48560c5592e2f3d2c4374bd543ee"},
{file = "cffi-1.15.1-cp39-cp39-win_amd64.whl", hash = "sha256:70df4e3b545a17496c9b3f41f5115e69a4f2e77e94e1d2a8e1070bc0c38c8a3c"},
{file = "cffi-1.15.1.tar.gz", hash = "sha256:d400bfb9a37b1351253cb402671cea7e89bdecc294e8016a707f6d1d8ac934f9"},
]
charset-normalizer = [
{file = "charset-normalizer-2.1.0.tar.gz", hash = "sha256:575e708016ff3a5e3681541cb9d79312c416835686d054a23accb873b254f413"},
{file = "charset_normalizer-2.1.0-py3-none-any.whl", hash = "sha256:5189b6f22b01957427f35b6a08d9a0bc45b46d3788ef5a92e978433c7a35f8a5"},
]
click = [
{file = "click-8.1.3-py3-none-any.whl", hash = "sha256:bb4d8133cb15a609f44e8213d9b391b0809795062913b383c62be0ee95b1db48"},
{file = "click-8.1.3.tar.gz", hash = "sha256:7682dc8afb30297001674575ea00d1814d808d6a36af415a82bd481d37ba7b8e"},
]
cloudpickle = [
{file = "cloudpickle-2.1.0-py3-none-any.whl", hash = "sha256:b5c434f75c34624eedad3a14f2be5ac3b5384774d5b0e3caf905c21479e6c4b1"},
{file = "cloudpickle-2.1.0.tar.gz", hash = "sha256:bb233e876a58491d9590a676f93c7a5473a08f747d5ab9df7f9ce564b3e7938e"},
]
colorama = [
{file = "colorama-0.4.5-py2.py3-none-any.whl", hash = "sha256:854bf444933e37f5824ae7bfc1e98d5bce2ebe4160d46b5edf346a89358e99da"},
{file = "colorama-0.4.5.tar.gz", hash = "sha256:e6c6b4334fc50988a639d9b98aa429a0b57da6e17b9a44f0451f930b6967b7a4"},
]
commonmark = [
{file = "commonmark-0.9.1-py2.py3-none-any.whl", hash = "sha256:da2f38c92590f83de410ba1a3cbceafbc74fee9def35f9251ba9a971d6d66fd9"},
{file = "commonmark-0.9.1.tar.gz", hash = "sha256:452f9dc859be7f06631ddcb328b6919c67984aca654e5fefb3914d54691aed60"},
]
cryptography = [
{file = "cryptography-37.0.4-cp36-abi3-macosx_10_10_universal2.whl", hash = "sha256:549153378611c0cca1042f20fd9c5030d37a72f634c9326e225c9f666d472884"},
{file = "cryptography-37.0.4-cp36-abi3-macosx_10_10_x86_64.whl", hash = "sha256:a958c52505c8adf0d3822703078580d2c0456dd1d27fabfb6f76fe63d2971cd6"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:f721d1885ecae9078c3f6bbe8a88bc0786b6e749bf32ccec1ef2b18929a05046"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:3d41b965b3380f10e4611dbae366f6dc3cefc7c9ac4e8842a806b9672ae9add5"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:80f49023dd13ba35f7c34072fa17f604d2f19bf0989f292cedf7ab5770b87a0b"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f2dcb0b3b63afb6df7fd94ec6fbddac81b5492513f7b0436210d390c14d46ee8"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_24_x86_64.whl", hash = "sha256:b7f8dd0d4c1f21759695c05a5ec8536c12f31611541f8904083f3dc582604280"},
{file = "cryptography-37.0.4-cp36-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:30788e070800fec9bbcf9faa71ea6d8068f5136f60029759fd8c3efec3c9dcb3"},
{file = "cryptography-37.0.4-cp36-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:190f82f3e87033821828f60787cfa42bff98404483577b591429ed99bed39d59"},
{file = "cryptography-37.0.4-cp36-abi3-win32.whl", hash = "sha256:b62439d7cd1222f3da897e9a9fe53bbf5c104fff4d60893ad1355d4c14a24157"},
{file = "cryptography-37.0.4-cp36-abi3-win_amd64.whl", hash = "sha256:f7a6de3e98771e183645181b3627e2563dcde3ce94a9e42a3f427d2255190327"},
{file = "cryptography-37.0.4-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6bc95ed67b6741b2607298f9ea4932ff157e570ef456ef7ff0ef4884a134cc4b"},
{file = "cryptography-37.0.4-pp37-pypy37_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:f8c0a6e9e1dd3eb0414ba320f85da6b0dcbd543126e30fcc546e7372a7fbf3b9"},
{file = "cryptography-37.0.4-pp38-pypy38_pp73-macosx_10_10_x86_64.whl", hash = "sha256:e007f052ed10cc316df59bc90fbb7ff7950d7e2919c9757fd42a2b8ecf8a5f67"},
{file = "cryptography-37.0.4-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7bc997818309f56c0038a33b8da5c0bfbb3f1f067f315f9abd6fc07ad359398d"},
{file = "cryptography-37.0.4-pp38-pypy38_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:d204833f3c8a33bbe11eda63a54b1aad7aa7456ed769a982f21ec599ba5fa282"},
{file = "cryptography-37.0.4-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:75976c217f10d48a8b5a8de3d70c454c249e4b91851f6838a4e48b8f41eb71aa"},
{file = "cryptography-37.0.4-pp39-pypy39_pp73-macosx_10_10_x86_64.whl", hash = "sha256:7099a8d55cd49b737ffc99c17de504f2257e3787e02abe6d1a6d136574873441"},
{file = "cryptography-37.0.4-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2be53f9f5505673eeda5f2736bea736c40f051a739bfae2f92d18aed1eb54596"},
{file = "cryptography-37.0.4-pp39-pypy39_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:91ce48d35f4e3d3f1d83e29ef4a9267246e6a3be51864a5b7d2247d5086fa99a"},
{file = "cryptography-37.0.4-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:4c590ec31550a724ef893c50f9a97a0c14e9c851c85621c5650d699a7b88f7ab"},
{file = "cryptography-37.0.4.tar.gz", hash = "sha256:63f9c17c0e2474ccbebc9302ce2f07b55b3b3fcb211ded18a42d5764f5c10a82"},
]
cycler = [
{file = "cycler-0.11.0-py3-none-any.whl", hash = "sha256:3a27e95f763a428a739d2add979fa7494c912a32c17c4c38c4d5f082cad165a3"},
{file = "cycler-0.11.0.tar.gz", hash = "sha256:9c87405839a19696e837b3b818fed3f5f69f16f1eec1a1ad77e043dcea9c772f"},
]
cython = [
{file = "Cython-0.29.32-cp27-cp27m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:39afb4679b8c6bf7ccb15b24025568f4f9b4d7f9bf3cbd981021f542acecd75b"},
{file = "Cython-0.29.32-cp27-cp27m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:dbee03b8d42dca924e6aa057b836a064c769ddfd2a4c2919e65da2c8a362d528"},
{file = "Cython-0.29.32-cp27-cp27mu-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:5ba622326f2862f9c1f99ca8d47ade49871241920a352c917e16861e25b0e5c3"},
{file = "Cython-0.29.32-cp27-cp27mu-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:e6ffa08aa1c111a1ebcbd1cf4afaaec120bc0bbdec3f2545f8bb7d3e8e77a1cd"},
{file = "Cython-0.29.32-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:97335b2cd4acebf30d14e2855d882de83ad838491a09be2011745579ac975833"},
{file = "Cython-0.29.32-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:06be83490c906b6429b4389e13487a26254ccaad2eef6f3d4ee21d8d3a4aaa2b"},
{file = "Cython-0.29.32-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:eefd2b9a5f38ded8d859fe96cc28d7d06e098dc3f677e7adbafda4dcdd4a461c"},
{file = "Cython-0.29.32-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:5514f3b4122cb22317122a48e175a7194e18e1803ca555c4c959d7dfe68eaf98"},
{file = "Cython-0.29.32-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:656dc5ff1d269de4d11ee8542f2ffd15ab466c447c1f10e5b8aba6f561967276"},
{file = "Cython-0.29.32-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:cdf10af3e2e3279dc09fdc5f95deaa624850a53913f30350ceee824dc14fc1a6"},
{file = "Cython-0.29.32-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:3875c2b2ea752816a4d7ae59d45bb546e7c4c79093c83e3ba7f4d9051dd02928"},
{file = "Cython-0.29.32-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:79e3bab19cf1b021b613567c22eb18b76c0c547b9bc3903881a07bfd9e7e64cf"},
{file = "Cython-0.29.32-cp35-cp35m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b0595aee62809ba353cebc5c7978e0e443760c3e882e2c7672c73ffe46383673"},
{file = "Cython-0.29.32-cp35-cp35m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:0ea8267fc373a2c5064ad77d8ff7bf0ea8b88f7407098ff51829381f8ec1d5d9"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:c8e8025f496b5acb6ba95da2fb3e9dacffc97d9a92711aacfdd42f9c5927e094"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:afbce249133a830f121b917f8c9404a44f2950e0e4f5d1e68f043da4c2e9f457"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:513e9707407608ac0d306c8b09d55a28be23ea4152cbd356ceaec0f32ef08d65"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e83228e0994497900af954adcac27f64c9a57cd70a9ec768ab0cb2c01fd15cf1"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:ea1dcc07bfb37367b639415333cfbfe4a93c3be340edf1db10964bc27d42ed64"},
{file = "Cython-0.29.32-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:8669cadeb26d9a58a5e6b8ce34d2c8986cc3b5c0bfa77eda6ceb471596cb2ec3"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:ed087eeb88a8cf96c60fb76c5c3b5fb87188adee5e179f89ec9ad9a43c0c54b3"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:3f85eb2343d20d91a4ea9cf14e5748092b376a64b7e07fc224e85b2753e9070b"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:63b79d9e1f7c4d1f498ab1322156a0d7dc1b6004bf981a8abda3f66800e140cd"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e1958e0227a4a6a2c06fd6e35b7469de50adf174102454db397cec6e1403cce3"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:856d2fec682b3f31583719cb6925c6cdbb9aa30f03122bcc45c65c8b6f515754"},
{file = "Cython-0.29.32-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:479690d2892ca56d34812fe6ab8f58e4b2e0129140f3d94518f15993c40553da"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:67fdd2f652f8d4840042e2d2d91e15636ba2bcdcd92e7e5ffbc68e6ef633a754"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:4a4b03ab483271f69221c3210f7cde0dcc456749ecf8243b95bc7a701e5677e0"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:40eff7aa26e91cf108fd740ffd4daf49f39b2fdffadabc7292b4b7dc5df879f0"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:0bbc27abdf6aebfa1bce34cd92bd403070356f28b0ecb3198ff8a182791d58b9"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:cddc47ec746a08603037731f5d10aebf770ced08666100bd2cdcaf06a85d4d1b"},
{file = "Cython-0.29.32-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:eca3065a1279456e81c615211d025ea11bfe4e19f0c5650b859868ca04b3fcbd"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:d968ffc403d92addf20b68924d95428d523436adfd25cf505d427ed7ba3bee8b"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:f3fd44cc362eee8ae569025f070d56208908916794b6ab21e139cea56470a2b3"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:b6da3063c5c476f5311fd76854abae6c315f1513ef7d7904deed2e774623bbb9"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:061e25151c38f2361bc790d3bcf7f9d9828a0b6a4d5afa56fbed3bd33fb2373a"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:f9944013588a3543fca795fffb0a070a31a243aa4f2d212f118aa95e69485831"},
{file = "Cython-0.29.32-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:07d173d3289415bb496e72cb0ddd609961be08fe2968c39094d5712ffb78672b"},
{file = "Cython-0.29.32-py2.py3-none-any.whl", hash = "sha256:eeb475eb6f0ccf6c039035eb4f0f928eb53ead88777e0a760eccb140ad90930b"},
{file = "Cython-0.29.32.tar.gz", hash = "sha256:8733cf4758b79304f2a4e39ebfac5e92341bce47bcceb26c1254398b2f8c1af7"},
]
debugpy = [
{file = "debugpy-1.6.3-cp310-cp310-macosx_10_15_x86_64.whl", hash = "sha256:c4b2bd5c245eeb49824bf7e539f95fb17f9a756186e51c3e513e32999d8846f3"},
{file = "debugpy-1.6.3-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:b8deaeb779699350deeed835322730a3efec170b88927debc9ba07a1a38e2585"},
{file = "debugpy-1.6.3-cp310-cp310-win32.whl", hash = "sha256:fc233a0160f3b117b20216f1169e7211b83235e3cd6749bcdd8dbb72177030c7"},
{file = "debugpy-1.6.3-cp310-cp310-win_amd64.whl", hash = "sha256:dda8652520eae3945833e061cbe2993ad94a0b545aebd62e4e6b80ee616c76b2"},
{file = "debugpy-1.6.3-cp37-cp37m-macosx_10_15_x86_64.whl", hash = "sha256:d5c814596a170a0a58fa6fad74947e30bfd7e192a5d2d7bd6a12156c2899e13a"},
{file = "debugpy-1.6.3-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:c4cd6f37e3c168080d61d698390dfe2cd9e74ebf80b448069822a15dadcda57d"},
{file = "debugpy-1.6.3-cp37-cp37m-win32.whl", hash = "sha256:3c9f985944a30cfc9ae4306ac6a27b9c31dba72ca943214dad4a0ab3840f6161"},
{file = "debugpy-1.6.3-cp37-cp37m-win_amd64.whl", hash = "sha256:5ad571a36cec137ae6ed951d0ff75b5e092e9af6683da084753231150cbc5b25"},
{file = "debugpy-1.6.3-cp38-cp38-macosx_10_15_x86_64.whl", hash = "sha256:adcfea5ea06d55d505375995e150c06445e2b20cd12885bcae566148c076636b"},
{file = "debugpy-1.6.3-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:daadab4403427abd090eccb38d8901afd8b393e01fd243048fab3f1d7132abb4"},
{file = "debugpy-1.6.3-cp38-cp38-win32.whl", hash = "sha256:6efc30325b68e451118b795eff6fe8488253ca3958251d5158106d9c87581bc6"},
{file = "debugpy-1.6.3-cp38-cp38-win_amd64.whl", hash = "sha256:86d784b72c5411c833af1cd45b83d80c252b77c3bfdb43db17c441d772f4c734"},
{file = "debugpy-1.6.3-cp39-cp39-macosx_10_15_x86_64.whl", hash = "sha256:4e255982552b0edfe3a6264438dbd62d404baa6556a81a88f9420d3ed79b06ae"},
{file = "debugpy-1.6.3-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:cca23cb6161ac89698d629d892520327dd1be9321c0960e610bbcb807232b45d"},
{file = "debugpy-1.6.3-cp39-cp39-win32.whl", hash = "sha256:7c302095a81be0d5c19f6529b600bac971440db3e226dce85347cc27e6a61908"},
{file = "debugpy-1.6.3-cp39-cp39-win_amd64.whl", hash = "sha256:34d2cdd3a7c87302ba5322b86e79c32c2115be396f3f09ca13306d8a04fe0f16"},
{file = "debugpy-1.6.3-py2.py3-none-any.whl", hash = "sha256:84c39940a0cac410bf6aa4db00ba174f973eef521fbe9dd058e26bcabad89c4f"},
{file = "debugpy-1.6.3.zip", hash = "sha256:e8922090514a890eec99cfb991bab872dd2e353ebb793164d5f01c362b9a40bf"},
]
decorator = [
{file = "decorator-5.1.1-py3-none-any.whl", hash = "sha256:b8c3f85900b9dc423225913c5aace94729fe1fa9763b38939a95226f02d37186"},
{file = "decorator-5.1.1.tar.gz", hash = "sha256:637996211036b6385ef91435e4fae22989472f9d571faba8927ba8253acbc330"},
]
defusedxml = [
{file = "defusedxml-0.7.1-py2.py3-none-any.whl", hash = "sha256:a352e7e428770286cc899e2542b6cdaedb2b4953ff269a210103ec58f6198a61"},
{file = "defusedxml-0.7.1.tar.gz", hash = "sha256:1bb3032db185915b62d7c6209c5a8792be6a32ab2fedacc84e01b52c51aa3e69"},
]
dill = [
{file = "dill-0.3.5.1-py2.py3-none-any.whl", hash = "sha256:33501d03270bbe410c72639b350e941882a8b0fd55357580fbc873fba0c59302"},
{file = "dill-0.3.5.1.tar.gz", hash = "sha256:d75e41f3eff1eee599d738e76ba8f4ad98ea229db8b085318aa2b3333a208c86"},
]
docutils = [
{file = "docutils-0.17.1-py2.py3-none-any.whl", hash = "sha256:cf316c8370a737a022b72b56874f6602acf974a37a9fba42ec2876387549fc61"},
{file = "docutils-0.17.1.tar.gz", hash = "sha256:686577d2e4c32380bb50cbb22f575ed742d58168cee37e99117a854bcd88f125"},
]
econml = [
{file = "econml-0.12.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:c8c6d2c34f98eb506d4ac48f705f6d959ccdfdeada9a0cc555afcba3233a4d0a"},
{file = "econml-0.12.0-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:c9493076c801110ebd9e15a7d162e768d33445367272a8b4f88c70898efa7a8d"},
{file = "econml-0.12.0-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:5c5c278771bf405ddf0790fc72149af9d05aec095e00c5ad47e08ab9e8da47d9"},
{file = "econml-0.12.0-cp36-cp36m-win32.whl", hash = "sha256:0477610ae12923a98bc23bdff4f9f5be12c692704d5ec9ebea8f5f4c3010b9ff"},
{file = "econml-0.12.0-cp36-cp36m-win_amd64.whl", hash = "sha256:2cea261af4315d3ec4cb6a91ab6b4aa664b3d3c6f8d235c8778351fb259d69c2"},
{file = "econml-0.12.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:3962e0f497c58724269ebf74a295dd8e2f2494e6cff00f3fc582b13b73d9dda8"},
{file = "econml-0.12.0-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:70ce227be0b91fd463d7a48ab4b58aa562793efa779f74254a38a03c42bf6d9e"},
{file = "econml-0.12.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:80f78f25b259991872e66665ca64cf108fc405b6d6cf20ebbff30fae392b3258"},
{file = "econml-0.12.0-cp37-cp37m-win32.whl", hash = "sha256:1e780e2e4dbabcff65c2ed488b8751203a6f7bfe0c9c8fd12a0a930b273dbafe"},
{file = "econml-0.12.0-cp37-cp37m-win_amd64.whl", hash = "sha256:7248c62daa5b8662e1368fe3926a9a7e5ab88c1ee29ae492b7362a9124151071"},
{file = "econml-0.12.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:aacdb2c96ea2377f7c4f14fbf3fb6ded5edfccf17decc706b5147564129938b4"},
{file = "econml-0.12.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:f87a21636872e32e627093711c65d1333be590f0aef0fa2477717442ada57dfb"},
{file = "econml-0.12.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:1763951ca85b3f34072d0f44e31cebb18e91a281fc94212db9b08124f0efe206"},
{file = "econml-0.12.0-cp38-cp38-win32.whl", hash = "sha256:2e4339e0dcbbc280ac95c4ac446026853f77cd22de9a4c7421b933e717ce8626"},
{file = "econml-0.12.0-cp38-cp38-win_amd64.whl", hash = "sha256:473377685874bf9725a4c6fdd571a858d942abd8a2f4664050da426322d322c7"},
{file = "econml-0.12.0.tar.gz", hash = "sha256:4a8196d6c785e9e0e42b2fa35dd058eb37166c75dcdb90ada953feea6fbb854b"},
]
entrypoints = [
{file = "entrypoints-0.4-py3-none-any.whl", hash = "sha256:f174b5ff827504fd3cd97cc3f8649f3693f51538c7e4bdf3ef002c8429d42f9f"},
{file = "entrypoints-0.4.tar.gz", hash = "sha256:b706eddaa9218a19ebcd67b56818f05bb27589b1ca9e8d797b74affad4ccacd4"},
]
executing = [
{file = "executing-0.10.0-py2.py3-none-any.whl", hash = "sha256:9c745f80cda11eb22b62cbecf21156491a794eb56ab06f9d286a44e62822b24e"},
{file = "executing-0.10.0.tar.gz", hash = "sha256:d1cd87c2e371e9966261410c5b3769d6df2f9e4a79a83eebd2662dd3388f9833"},
]
fastjsonschema = [
{file = "fastjsonschema-2.16.1-py3-none-any.whl", hash = "sha256:2f7158c4de792555753d6c2277d6a2af2d406dfd97aeca21d17173561ede4fe6"},
{file = "fastjsonschema-2.16.1.tar.gz", hash = "sha256:d6fa3ffbe719768d70e298b9fb847484e2bdfdb7241ed052b8d57a9294a8c334"},
]
flake8 = [
{file = "flake8-4.0.1-py2.py3-none-any.whl", hash = "sha256:479b1304f72536a55948cb40a32dce8bb0ffe3501e26eaf292c7e60eb5e0428d"},
{file = "flake8-4.0.1.tar.gz", hash = "sha256:806e034dda44114815e23c16ef92f95c91e4c71100ff52813adf7132a6ad870d"},
]
flaky = [
{file = "flaky-3.7.0-py2.py3-none-any.whl", hash = "sha256:d6eda73cab5ae7364504b7c44670f70abed9e75f77dd116352f662817592ec9c"},
{file = "flaky-3.7.0.tar.gz", hash = "sha256:3ad100780721a1911f57a165809b7ea265a7863305acb66708220820caf8aa0d"},
]
flatbuffers = [
{file = "flatbuffers-1.12-py2.py3-none-any.whl", hash = "sha256:9e9ef47fa92625c4721036e7c4124182668dc6021d9e7c73704edd395648deb9"},
{file = "flatbuffers-1.12.tar.gz", hash = "sha256:63bb9a722d5e373701913e226135b28a6f6ac200d5cc7b4d919fa38d73b44610"},
]
fonttools = [
{file = "fonttools-4.36.0-py3-none-any.whl", hash = "sha256:cb91ef8d5a435d90aeb3ab814b2548c6b515df5bc13b4c5adaa23778f2f79823"},
{file = "fonttools-4.36.0.zip", hash = "sha256:e637d2fe06bddabbfc488e02ef32d04d561e3c71e9ba11abc7782ea753ceb218"},
]
future = [
{file = "future-0.18.2.tar.gz", hash = "sha256:b1bead90b70cf6ec3f0710ae53a525360fa360d306a86583adc6bf83a4db537d"},
]
gast = [
{file = "gast-0.4.0-py3-none-any.whl", hash = "sha256:b7adcdd5adbebf1adf17378da5ba3f543684dbec47b1cda1f3997e573cd542c4"},
{file = "gast-0.4.0.tar.gz", hash = "sha256:40feb7b8b8434785585ab224d1568b857edb18297e5a3047f1ba012bc83b42c1"},
]
google-auth = [
{file = "google-auth-2.10.0.tar.gz", hash = "sha256:7904dbd44b745c7323fef29565adee2fe7ff48473e2d94443aced40b0404a395"},
{file = "google_auth-2.10.0-py2.py3-none-any.whl", hash = "sha256:1deba4a54f95ef67b4139eaf5c20eaa7047215eec9f6a2344599b8596db8863b"},
]
google-auth-oauthlib = [
{file = "google-auth-oauthlib-0.4.6.tar.gz", hash = "sha256:a90a072f6993f2c327067bf65270046384cda5a8ecb20b94ea9a687f1f233a7a"},
{file = "google_auth_oauthlib-0.4.6-py2.py3-none-any.whl", hash = "sha256:3f2a6e802eebbb6fb736a370fbf3b055edcb6b52878bf2f26330b5e041316c73"},
]
google-pasta = [
{file = "google-pasta-0.2.0.tar.gz", hash = "sha256:c9f2c8dfc8f96d0d5808299920721be30c9eec37f2389f28904f454565c8a16e"},
{file = "google_pasta-0.2.0-py2-none-any.whl", hash = "sha256:4612951da876b1a10fe3960d7226f0c7682cf901e16ac06e473b267a5afa8954"},
{file = "google_pasta-0.2.0-py3-none-any.whl", hash = "sha256:b32482794a366b5366a32c92a9a9201b107821889935a02b3e51f6b432ea84ed"},
]
graphviz = [
{file = "graphviz-0.20.1-py3-none-any.whl", hash = "sha256:587c58a223b51611c0cf461132da386edd896a029524ca61a1462b880bf97977"},
{file = "graphviz-0.20.1.zip", hash = "sha256:8c58f14adaa3b947daf26c19bc1e98c4e0702cdc31cf99153e6f06904d492bf8"},
]
grpcio = [
{file = "grpcio-1.48.0-cp310-cp310-linux_armv7l.whl", hash = "sha256:4a049a032144641ed5d073535c0dc69eb6029187cc729a66946c86dcc8eec3a1"},
{file = "grpcio-1.48.0-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:f8bc76f5cd95f5476e5285fe5d3704a9332586a569fbbccef551b0b6f7a270f9"},
{file = "grpcio-1.48.0-cp310-cp310-manylinux_2_17_aarch64.whl", hash = "sha256:448d397fe88e9fef8170f019b86abdc4d554ae311aaf4dbff1532fde227d3308"},
{file = "grpcio-1.48.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8f9b6b6f7c83869d2316c5d13f953381881a16741275a34ec5ed5762f11b206e"},
{file = "grpcio-1.48.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5bd8541c4b6b43c9024496d30b4a12346325d3a17a1f3c80ad8924caed1e35c3"},
{file = "grpcio-1.48.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:877d33aeba05ae0b9e81761a694914ed33613f655c35f6bbcf4ebbcb984e0167"},
{file = "grpcio-1.48.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:cd01a8201fd8ab2ce496f7e65975da1f1e629eac8eea84ead0fd77e32e4350cd"},
{file = "grpcio-1.48.0-cp310-cp310-win32.whl", hash = "sha256:0388da923dff58ba7f711233e41c2b749b5817b8e0f137a107672d9c15a1009c"},
{file = "grpcio-1.48.0-cp310-cp310-win_amd64.whl", hash = "sha256:8dcffdb8921fd88857ae350fd579277a5f9315351e89ed9094ef28927a46d40d"},
{file = "grpcio-1.48.0-cp36-cp36m-linux_armv7l.whl", hash = "sha256:2138c50331232f56178c2b36dcfa6ad67aad705fe410955f3b2a53d722191b89"},
{file = "grpcio-1.48.0-cp36-cp36m-macosx_10_10_x86_64.whl", hash = "sha256:af2d80f142da2a6af45204a5ca2374e2747af07a99de54a1164111e169a761ff"},
{file = "grpcio-1.48.0-cp36-cp36m-manylinux_2_17_aarch64.whl", hash = "sha256:59284bd4cdf47c147c26d91aca693765318d524328f6ece2a1a0b85a12a362af"},
{file = "grpcio-1.48.0-cp36-cp36m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cc3ebfe356c0c6750379cd194bf2b7e5d1d2f29db1832358f05a73e9290db98c"},
{file = "grpcio-1.48.0-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dc2619a31339e1c53731f54761f1a2cb865d3421f690e00ef3e92f90d2a0c5ae"},
{file = "grpcio-1.48.0-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:7df637405de328a54c1c8c08a3206f974c7a577730f90644af4c3400b7bfde2d"},
{file = "grpcio-1.48.0-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:9e73b95969a579798bfbeb85d376695cce5172357fb52e450467ceb8e7365152"},
{file = "grpcio-1.48.0-cp36-cp36m-win32.whl", hash = "sha256:059e9d58b5aba7fb9eabe3a4d2ac49e1dcbc2b54b0f166f6475e40b7f4435343"},
{file = "grpcio-1.48.0-cp36-cp36m-win_amd64.whl", hash = "sha256:7cebcf645170f0c82ef71769544f9ac4515993a4d367f5900aba2eb4ecd2a32f"},
{file = "grpcio-1.48.0-cp37-cp37m-linux_armv7l.whl", hash = "sha256:8af3a8845df35b838104d6fb1ae7f4969d248cf037fa2794916d31e917346f72"},
{file = "grpcio-1.48.0-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:a1ef40975ec9ced6c17ce7fbec9825823da782fa606f0b92392646ff3886f198"},
{file = "grpcio-1.48.0-cp37-cp37m-manylinux_2_17_aarch64.whl", hash = "sha256:7cccbf6db31f2a78e1909047ff69620f94a4e6e53251858e9502fbbff5714b48"},
{file = "grpcio-1.48.0-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1f3f142579f58def64c0850f0bb0eb1b425ae885f5669dda5b73ade64ad2b753"},
{file = "grpcio-1.48.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:656c6f6f7b815bca3054780b8cdfa1e4e37cd36c887a48558d00c2cf85f31697"},
{file = "grpcio-1.48.0-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:cba4538e8a2ef123ea570e7b1d62162e158963c2471e35d79eb9690c971a10c0"},
{file = "grpcio-1.48.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:9daa67820fafceec6194ed1686c1783816e62d6756ff301ba93e682948836846"},
{file = "grpcio-1.48.0-cp37-cp37m-win32.whl", hash = "sha256:7ec264a7fb413e0c804a7a48a6f7d7212742955a60724c44d793da35a8f30873"},
{file = "grpcio-1.48.0-cp37-cp37m-win_amd64.whl", hash = "sha256:a2b1b33b92359388b8164807313dcbb3317101b038a5d54342982560329d958f"},
{file = "grpcio-1.48.0-cp38-cp38-linux_armv7l.whl", hash = "sha256:7b820696a5ce7b98f459f234698cb323f89b355373789188efa126d7f47a2a92"},
{file = "grpcio-1.48.0-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:e4dfae66ebc165c46c5b7048eb554472ee72fbaab2c2c2da7f9b1621c81e077c"},
{file = "grpcio-1.48.0-cp38-cp38-manylinux_2_17_aarch64.whl", hash = "sha256:f7115038edce33b494e0138b0bd31a2eb6595d45e2eed23be46bc32886feb741"},
{file = "grpcio-1.48.0-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b4e996282238943ca114628255be61980e38b25f73a08ae2ffd02b63eaf70d3a"},
{file = "grpcio-1.48.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:13dad31f5155fa555d393511cc8108c41b1b5b54dc4c24c27d4694ddd7a78fad"},
{file = "grpcio-1.48.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:c84b9d90b2641963de98b35bb7a2a51f78119fe5bd00ef27246ba9f4f0835e36"},
{file = "grpcio-1.48.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:41b65166779d7dafac4c98380ac19f690f1c5fe18083a71d370df87b24dd30ff"},
{file = "grpcio-1.48.0-cp38-cp38-win32.whl", hash = "sha256:b890e5f5fbc21cb994894f73ecb2faaa66697d8debcb228a5adb0622b9bec3b2"},
{file = "grpcio-1.48.0-cp38-cp38-win_amd64.whl", hash = "sha256:5fe3af539d2f50891ed93aed3064ffbcc38bf848aa3f7ed1fbedcce139c57302"},
{file = "grpcio-1.48.0-cp39-cp39-linux_armv7l.whl", hash = "sha256:a4ed57f4e3d91259551e6765782b22d9e8b8178fec43ebf8e1b2c392c4ced37b"},
{file = "grpcio-1.48.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:60843d8184e171886dd7a93d6672e2ef0b08dfd4f88da7421c10b46b6e031ac4"},
{file = "grpcio-1.48.0-cp39-cp39-manylinux_2_17_aarch64.whl", hash = "sha256:0ecba22f25ccde2442be7e7dd7fa746905d628f03312b4a0c9961f0d99771f53"},
{file = "grpcio-1.48.0-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:34f5917f0c49a04633dc12d483c8aee6f6d9f69133b700214d3703f72a72f501"},
{file = "grpcio-1.48.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f4c4ad8ad7e2cf3a272cbc96734d56635e6543939022f17e0c4487f7d2a45bf9"},
{file = "grpcio-1.48.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:111fb2f5f4a069f331ae23106145fd16dd4e1112ca223858a922068614dac6d2"},
{file = "grpcio-1.48.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:beb0573daa49889efcfea0a6e995b4f39d481aa1b94e1257617406ef417b56a6"},
{file = "grpcio-1.48.0-cp39-cp39-win32.whl", hash = "sha256:ce70254a082cb767217b2fdee374cc79199d338d46140753438cd6d67c609b2f"},
{file = "grpcio-1.48.0-cp39-cp39-win_amd64.whl", hash = "sha256:ae3fd135666448058fe277d93c10e0f18345fbcbb015c4642de2fa3db6f0c205"},
{file = "grpcio-1.48.0.tar.gz", hash = "sha256:eaf4bb73819863440727195411ab3b5c304f6663625e66f348e91ebe0a039306"},
]
h5py = [
{file = "h5py-3.7.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d77af42cb751ad6cc44f11bae73075a07429a5cf2094dfde2b1e716e059b3911"},
{file = "h5py-3.7.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:63beb8b7b47d0896c50de6efb9a1eaa81dbe211f3767e7dd7db159cea51ba37a"},
{file = "h5py-3.7.0-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:04e2e1e2fc51b8873e972a08d2f89625ef999b1f2d276199011af57bb9fc7851"},
{file = "h5py-3.7.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f73307c876af49aa869ec5df1818e9bb0bdcfcf8a5ba773cc45a4fba5a286a5c"},
{file = "h5py-3.7.0-cp310-cp310-win_amd64.whl", hash = "sha256:f514b24cacdd983e61f8d371edac8c1b780c279d0acb8485639e97339c866073"},
{file = "h5py-3.7.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:43fed4d13743cf02798a9a03a360a88e589d81285e72b83f47d37bb64ed44881"},
{file = "h5py-3.7.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:c038399ce09a58ff8d89ec3e62f00aa7cb82d14f34e24735b920e2a811a3a426"},
{file = "h5py-3.7.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:03d64fb86bb86b978928bad923b64419a23e836499ec6363e305ad28afd9d287"},
{file = "h5py-3.7.0-cp37-cp37m-win_amd64.whl", hash = "sha256:e5b7820b75f9519499d76cc708e27242ccfdd9dfb511d6deb98701961d0445aa"},
{file = "h5py-3.7.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:a9351d729ea754db36d175098361b920573fdad334125f86ac1dd3a083355e20"},
{file = "h5py-3.7.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:6776d896fb90c5938de8acb925e057e2f9f28755f67ec3edcbc8344832616c38"},
{file = "h5py-3.7.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:0a047fddbe6951bce40e9cde63373c838a978c5e05a011a682db9ba6334b8e85"},
{file = "h5py-3.7.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0798a9c0ff45f17d0192e4d7114d734cac9f8b2b2c76dd1d923c4d0923f27bb6"},
{file = "h5py-3.7.0-cp38-cp38-win_amd64.whl", hash = "sha256:0d8de8cb619fc597da7cf8cdcbf3b7ff8c5f6db836568afc7dc16d21f59b2b49"},
{file = "h5py-3.7.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f084bbe816907dfe59006756f8f2d16d352faff2d107f4ffeb1d8de126fc5dc7"},
{file = "h5py-3.7.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:1fcb11a2dc8eb7ddcae08afd8fae02ba10467753a857fa07a404d700a93f3d53"},
{file = "h5py-3.7.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:ed43e2cc4f511756fd664fb45d6b66c3cbed4e3bd0f70e29c37809b2ae013c44"},
{file = "h5py-3.7.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9e7535df5ee3dc3e5d1f408fdfc0b33b46bc9b34db82743c82cd674d8239b9ad"},
{file = "h5py-3.7.0-cp39-cp39-win_amd64.whl", hash = "sha256:9e2ad2aa000f5b1e73b5dfe22f358ca46bf1a2b6ca394d9659874d7fc251731a"},
{file = "h5py-3.7.0.tar.gz", hash = "sha256:3fcf37884383c5da64846ab510190720027dca0768def34dd8dcb659dbe5cbf3"},
]
idna = [
{file = "idna-3.3-py3-none-any.whl", hash = "sha256:84d9dd047ffa80596e0f246e2eab0b391788b0503584e8945f2368256d2735ff"},
{file = "idna-3.3.tar.gz", hash = "sha256:9d643ff0a55b762d5cdb124b8eaa99c66322e2157b69160bc32796e824360e6d"},
]
imagesize = [
{file = "imagesize-1.4.1-py2.py3-none-any.whl", hash = "sha256:0d8d18d08f840c19d0ee7ca1fd82490fdc3729b7ac93f49870406ddde8ef8d8b"},
{file = "imagesize-1.4.1.tar.gz", hash = "sha256:69150444affb9cb0d5cc5a92b3676f0b2fb7cd9ae39e947a5e11a36b4497cd4a"},
]
importlib-metadata = [
{file = "importlib_metadata-4.12.0-py3-none-any.whl", hash = "sha256:7401a975809ea1fdc658c3aa4f78cc2195a0e019c5cbc4c06122884e9ae80c23"},
{file = "importlib_metadata-4.12.0.tar.gz", hash = "sha256:637245b8bab2b6502fcbc752cc4b7a6f6243bb02b31c5c26156ad103d3d45670"},
]
importlib-resources = [
{file = "importlib_resources-5.9.0-py3-none-any.whl", hash = "sha256:f78a8df21a79bcc30cfd400bdc38f314333de7c0fb619763f6b9dabab8268bb7"},
{file = "importlib_resources-5.9.0.tar.gz", hash = "sha256:5481e97fb45af8dcf2f798952625591c58fe599d0735d86b10f54de086a61681"},
]
iniconfig = [
{file = "iniconfig-1.1.1-py2.py3-none-any.whl", hash = "sha256:011e24c64b7f47f6ebd835bb12a743f2fbe9a26d4cecaa7f53bc4f35ee9da8b3"},
{file = "iniconfig-1.1.1.tar.gz", hash = "sha256:bc3af051d7d14b2ee5ef9969666def0cd1a000e121eaea580d4a313df4b37f32"},
]
ipykernel = [
{file = "ipykernel-6.15.1-py3-none-any.whl", hash = "sha256:d8969c5b23b0e453a23166da5a669c954db399789293fcb03fec5cb25367e43c"},
{file = "ipykernel-6.15.1.tar.gz", hash = "sha256:37acc3254caa8a0dafcddddc8dc863a60ad1b46487b68aee361d9a15bda98112"},
]
ipython = [
{file = "ipython-8.4.0-py3-none-any.whl", hash = "sha256:7ca74052a38fa25fe9bedf52da0be7d3fdd2fb027c3b778ea78dfe8c212937d1"},
{file = "ipython-8.4.0.tar.gz", hash = "sha256:f2db3a10254241d9b447232cec8b424847f338d9d36f9a577a6192c332a46abd"},
]
ipython-genutils = [
{file = "ipython_genutils-0.2.0-py2.py3-none-any.whl", hash = "sha256:72dd37233799e619666c9f639a9da83c34013a73e8bbc79a7a6348d93c61fab8"},
{file = "ipython_genutils-0.2.0.tar.gz", hash = "sha256:eb2e116e75ecef9d4d228fdc66af54269afa26ab4463042e33785b887c628ba8"},
]
ipywidgets = [
{file = "ipywidgets-8.0.1-py3-none-any.whl", hash = "sha256:fc0744df3a964ecfd68a6d2debe547fe89db252b8d7bb3db5740aba72edb0e6c"},
{file = "ipywidgets-8.0.1.tar.gz", hash = "sha256:1a296094203309e834f2781a275214d255ac5d266bbfa602f9f6915e1806614c"},
]
isort = [
{file = "isort-5.10.1-py3-none-any.whl", hash = "sha256:6f62d78e2f89b4500b080fe3a81690850cd254227f27f75c3a0c491a1f351ba7"},
{file = "isort-5.10.1.tar.gz", hash = "sha256:e8443a5e7a020e9d7f97f1d7d9cd17c88bcb3bc7e218bf9cf5095fe550be2951"},
]
jedi = [
{file = "jedi-0.18.1-py2.py3-none-any.whl", hash = "sha256:637c9635fcf47945ceb91cd7f320234a7be540ded6f3e99a50cb6febdfd1ba8d"},
{file = "jedi-0.18.1.tar.gz", hash = "sha256:74137626a64a99c8eb6ae5832d99b3bdd7d29a3850fe2aa80a4126b2a7d949ab"},
]
jeepney = [
{file = "jeepney-0.8.0-py3-none-any.whl", hash = "sha256:c0a454ad016ca575060802ee4d590dd912e35c122fa04e70306de3d076cce755"},
{file = "jeepney-0.8.0.tar.gz", hash = "sha256:5efe48d255973902f6badc3ce55e2aa6c5c3b3bc642059ef3a91247bcfcc5806"},
]
jinja2 = [
{file = "Jinja2-3.1.2-py3-none-any.whl", hash = "sha256:6088930bfe239f0e6710546ab9c19c9ef35e29792895fed6e6e31a023a182a61"},
{file = "Jinja2-3.1.2.tar.gz", hash = "sha256:31351a702a408a9e7595a8fc6150fc3f43bb6bf7e319770cbc0db9df9437e852"},
]
joblib = [
{file = "joblib-1.1.0-py2.py3-none-any.whl", hash = "sha256:f21f109b3c7ff9d95f8387f752d0d9c34a02aa2f7060c2135f465da0e5160ff6"},
{file = "joblib-1.1.0.tar.gz", hash = "sha256:4158fcecd13733f8be669be0683b96ebdbbd38d23559f54dca7205aea1bf1e35"},
]
jsonschema = [
{file = "jsonschema-4.13.0-py3-none-any.whl", hash = "sha256:870a61bb45050b81103faf6a4be00a0a906e06636ffcf0b84f5a2e51faf901ff"},
{file = "jsonschema-4.13.0.tar.gz", hash = "sha256:3776512df4f53f74e6e28fe35717b5b223c1756875486984a31bc9165e7fc920"},
]
jupyter = [
{file = "jupyter-1.0.0-py2.py3-none-any.whl", hash = "sha256:5b290f93b98ffbc21c0c7e749f054b3267782166d72fa5e3ed1ed4eaf34a2b78"},
{file = "jupyter-1.0.0.tar.gz", hash = "sha256:d9dc4b3318f310e34c82951ea5d6683f67bed7def4b259fafbfe4f1beb1d8e5f"},
{file = "jupyter-1.0.0.zip", hash = "sha256:3e1f86076bbb7c8c207829390305a2b1fe836d471ed54be66a3b8c41e7f46cc7"},
]
jupyter-client = [
{file = "jupyter_client-7.3.4-py3-none-any.whl", hash = "sha256:17d74b0d0a7b24f1c8c527b24fcf4607c56bee542ffe8e3418e50b21e514b621"},
{file = "jupyter_client-7.3.4.tar.gz", hash = "sha256:aa9a6c32054b290374f95f73bb0cae91455c58dfb84f65c8591912b8f65e6d56"},
]
jupyter-console = [
{file = "jupyter_console-6.4.4-py3-none-any.whl", hash = "sha256:756df7f4f60c986e7bc0172e4493d3830a7e6e75c08750bbe59c0a5403ad6dee"},
{file = "jupyter_console-6.4.4.tar.gz", hash = "sha256:172f5335e31d600df61613a97b7f0352f2c8250bbd1092ef2d658f77249f89fb"},
]
jupyter-core = [
{file = "jupyter_core-4.11.1-py3-none-any.whl", hash = "sha256:715e22bb6cc7db3718fddfac1f69f1c7e899ca00e42bdfd4bf3705452b9fd84a"},
{file = "jupyter_core-4.11.1.tar.gz", hash = "sha256:2e5f244d44894c4154d06aeae3419dd7f1b0ef4494dc5584929b398c61cfd314"},
]
jupyterlab-pygments = [
{file = "jupyterlab_pygments-0.2.2-py2.py3-none-any.whl", hash = "sha256:2405800db07c9f770863bcf8049a529c3dd4d3e28536638bd7c1c01d2748309f"},
{file = "jupyterlab_pygments-0.2.2.tar.gz", hash = "sha256:7405d7fde60819d905a9fa8ce89e4cd830e318cdad22a0030f7a901da705585d"},
]
jupyterlab-widgets = [
{file = "jupyterlab_widgets-3.0.2-py3-none-any.whl", hash = "sha256:98303a281f4004670cdcea2ef4aecba19c580adc297664c593f967025625c8c5"},
{file = "jupyterlab_widgets-3.0.2.tar.gz", hash = "sha256:47ab54cd165aa0cb3bcef1232d77471580cd2c36bbe2153fc5ba31e26ad87320"},
]
keras = [
{file = "keras-2.9.0-py2.py3-none-any.whl", hash = "sha256:55911256f89cfc9343c9fbe4b61ec45a2d33d89729cbe1ab9dcacf8b07b8b6ab"},
]
keras-preprocessing = [
{file = "Keras_Preprocessing-1.1.2-py2.py3-none-any.whl", hash = "sha256:7b82029b130ff61cc99b55f3bd27427df4838576838c5b2f65940e4fcec99a7b"},
{file = "Keras_Preprocessing-1.1.2.tar.gz", hash = "sha256:add82567c50c8bc648c14195bf544a5ce7c1f76761536956c3d2978970179ef3"},
]
keyring = [
{file = "keyring-23.8.2-py3-none-any.whl", hash = "sha256:10d2a8639663fe2090705a00b8c47c687cacdf97598ea9c11456679fa974473a"},
{file = "keyring-23.8.2.tar.gz", hash = "sha256:0d9973f8891850f1ade5f26aafd06bb16865fbbae3fc56b0defb6a14a2624003"},
]
kiwisolver = [
{file = "kiwisolver-1.4.4-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:2f5e60fabb7343a836360c4f0919b8cd0d6dbf08ad2ca6b9cf90bf0c76a3c4f6"},
{file = "kiwisolver-1.4.4-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:10ee06759482c78bdb864f4109886dff7b8a56529bc1609d4f1112b93fe6423c"},
{file = "kiwisolver-1.4.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c79ebe8f3676a4c6630fd3f777f3cfecf9289666c84e775a67d1d358578dc2e3"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:abbe9fa13da955feb8202e215c4018f4bb57469b1b78c7a4c5c7b93001699938"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:7577c1987baa3adc4b3c62c33bd1118c3ef5c8ddef36f0f2c950ae0b199e100d"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f8ad8285b01b0d4695102546b342b493b3ccc6781fc28c8c6a1bb63e95d22f09"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:8ed58b8acf29798b036d347791141767ccf65eee7f26bde03a71c944449e53de"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a68b62a02953b9841730db7797422f983935aeefceb1679f0fc85cbfbd311c32"},
{file = "kiwisolver-1.4.4-cp310-cp310-win32.whl", hash = "sha256:e92a513161077b53447160b9bd8f522edfbed4bd9759e4c18ab05d7ef7e49408"},
{file = "kiwisolver-1.4.4-cp310-cp310-win_amd64.whl", hash = "sha256:3fe20f63c9ecee44560d0e7f116b3a747a5d7203376abeea292ab3152334d004"},
{file = "kiwisolver-1.4.4-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:62ac9cc684da4cf1778d07a89bf5f81b35834cb96ca523d3a7fb32509380cbf6"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:41dae968a94b1ef1897cb322b39360a0812661dba7c682aa45098eb8e193dbdf"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:02f79693ec433cb4b5f51694e8477ae83b3205768a6fb48ffba60549080e295b"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d0611a0a2a518464c05ddd5a3a1a0e856ccc10e67079bb17f265ad19ab3c7597"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:db5283d90da4174865d520e7366801a93777201e91e79bacbac6e6927cbceede"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:1041feb4cda8708ce73bb4dcb9ce1ccf49d553bf87c3954bdfa46f0c3f77252c"},
{file = "kiwisolver-1.4.4-cp37-cp37m-win32.whl", hash = "sha256:a553dadda40fef6bfa1456dc4be49b113aa92c2a9a9e8711e955618cd69622e3"},
{file = "kiwisolver-1.4.4-cp37-cp37m-win_amd64.whl", hash = "sha256:03baab2d6b4a54ddbb43bba1a3a2d1627e82d205c5cf8f4c924dc49284b87166"},
{file = "kiwisolver-1.4.4-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:841293b17ad704d70c578f1f0013c890e219952169ce8a24ebc063eecf775454"},
{file = "kiwisolver-1.4.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:f4f270de01dd3e129a72efad823da90cc4d6aafb64c410c9033aba70db9f1ff0"},
{file = "kiwisolver-1.4.4-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:f9f39e2f049db33a908319cf46624a569b36983c7c78318e9726a4cb8923b26c"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c97528e64cb9ebeff9701e7938653a9951922f2a38bd847787d4a8e498cc83ae"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1d1573129aa0fd901076e2bfb4275a35f5b7aa60fbfb984499d661ec950320b0"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ad881edc7ccb9d65b0224f4e4d05a1e85cf62d73aab798943df6d48ab0cd79a1"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b428ef021242344340460fa4c9185d0b1f66fbdbfecc6c63eff4b7c29fad429d"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:2e407cb4bd5a13984a6c2c0fe1845e4e41e96f183e5e5cd4d77a857d9693494c"},
{file = "kiwisolver-1.4.4-cp38-cp38-win32.whl", hash = "sha256:75facbe9606748f43428fc91a43edb46c7ff68889b91fa31f53b58894503a191"},
{file = "kiwisolver-1.4.4-cp38-cp38-win_amd64.whl", hash = "sha256:5bce61af018b0cb2055e0e72e7d65290d822d3feee430b7b8203d8a855e78766"},
{file = "kiwisolver-1.4.4-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:8c808594c88a025d4e322d5bb549282c93c8e1ba71b790f539567932722d7bd8"},
{file = "kiwisolver-1.4.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f0a71d85ecdd570ded8ac3d1c0f480842f49a40beb423bb8014539a9f32a5897"},
{file = "kiwisolver-1.4.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:b533558eae785e33e8c148a8d9921692a9fe5aa516efbdff8606e7d87b9d5824"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:efda5fc8cc1c61e4f639b8067d118e742b812c930f708e6667a5ce0d13499e29"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:7c43e1e1206cd421cd92e6b3280d4385d41d7166b3ed577ac20444b6995a445f"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bc8d3bd6c72b2dd9decf16ce70e20abcb3274ba01b4e1c96031e0c4067d1e7cd"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4ea39b0ccc4f5d803e3337dd46bcce60b702be4d86fd0b3d7531ef10fd99a1ac"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:968f44fdbf6dd757d12920d63b566eeb4d5b395fd2d00d29d7ef00a00582aac9"},
{file = "kiwisolver-1.4.4-cp39-cp39-win32.whl", hash = "sha256:da7e547706e69e45d95e116e6939488d62174e033b763ab1496b4c29b76fabea"},
{file = "kiwisolver-1.4.4-cp39-cp39-win_amd64.whl", hash = "sha256:ba59c92039ec0a66103b1d5fe588fa546373587a7d68f5c96f743c3396afc04b"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:91672bacaa030f92fc2f43b620d7b337fd9a5af28b0d6ed3f77afc43c4a64b5a"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:787518a6789009c159453da4d6b683f468ef7a65bbde796bcea803ccf191058d"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:da152d8cdcab0e56e4f45eb08b9aea6455845ec83172092f09b0e077ece2cf7a"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:ecb1fa0db7bf4cff9dac752abb19505a233c7f16684c5826d1f11ebd9472b871"},
{file = "kiwisolver-1.4.4.tar.gz", hash = "sha256:d41997519fcba4a1e46eb4a2fe31bc12f0ff957b2b81bac28db24744f333e955"},
]
libclang = [
{file = "libclang-14.0.6-py2.py3-none-macosx_10_9_x86_64.whl", hash = "sha256:8791cf3c3b087c373a6d61e9199da7a541da922c9ddcfed1122090586b996d6e"},
{file = "libclang-14.0.6-py2.py3-none-macosx_11_0_arm64.whl", hash = "sha256:7b06fc76bd1e67c8b04b5719bf2ac5d6a323b289b245dfa9e468561d99538188"},
{file = "libclang-14.0.6-py2.py3-none-manylinux1_x86_64.whl", hash = "sha256:e429853939423f276a25140b0b702442d7da9a09e001c05e48df888336947614"},
{file = "libclang-14.0.6-py2.py3-none-manylinux2010_x86_64.whl", hash = "sha256:206d2789e4450a37d054e63b70451a6fc1873466397443fa13de2b3d4adb2796"},
{file = "libclang-14.0.6-py2.py3-none-manylinux2014_aarch64.whl", hash = "sha256:e2add1703129b2abe066fb1890afa880870a89fd6ab4ec5d2a7a8dc8d271677e"},
{file = "libclang-14.0.6-py2.py3-none-manylinux2014_armv7l.whl", hash = "sha256:5dd3c6fca1b007d308a4114afa8e4e9d32f32b2572520701d45fcc626ac5cd6c"},
{file = "libclang-14.0.6-py2.py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:cfb0e892ebb5dff6bd498ab5778adb8581f26a00fd8347b3c76c989fe2fd04f7"},
{file = "libclang-14.0.6-py2.py3-none-win_amd64.whl", hash = "sha256:ea03c12675151837660cdd5dce65bd89320896ac3421efef43a36678f113ce95"},
{file = "libclang-14.0.6-py2.py3-none-win_arm64.whl", hash = "sha256:2e4303e04517fcd11173cb2e51a7070eed71e16ef45d4e26a82c5e881cac3d27"},
{file = "libclang-14.0.6.tar.gz", hash = "sha256:9052a8284d8846984f6fa826b1d7460a66d3b23a486d782633b42b6e3b418789"},
]
lightgbm = [
{file = "lightgbm-3.3.2-py3-none-macosx_10_14_x86_64.macosx_10_15_x86_64.macosx_11_0_x86_64.whl", hash = "sha256:2e94bd1b3ab29d173102c9c1d80db2e27ad7e43b8ff5a74c5cb7984b37d19f45"},
{file = "lightgbm-3.3.2-py3-none-manylinux1_x86_64.whl", hash = "sha256:f4cba3b4f29336ad7e801cb32d9b948ea4cc5300dda650b78bcdfe36b3e2c4b2"},
{file = "lightgbm-3.3.2-py3-none-manylinux2014_aarch64.whl", hash = "sha256:8e788c56853316fc5d35db726d81bd002c721038c856853952287f68082e0158"},
{file = "lightgbm-3.3.2-py3-none-win_amd64.whl", hash = "sha256:e4f1529cad416066964f9af0efad208787861e9f2181b7f9ee7fc9bacc082d4f"},
{file = "lightgbm-3.3.2.tar.gz", hash = "sha256:5d25d16e77c844c297ece2044df57651139bc3c8ad8c4108916374267ac68b64"},
]
llvmlite = [
{file = "llvmlite-0.36.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:cc0f9b9644b4ab0e4a5edb17f1531d791630c88858220d3cc688d6edf10da100"},
{file = "llvmlite-0.36.0-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:f7918dbac02b1ebbfd7302ad8e8307d7877ab57d782d5f04b70ff9696b53c21b"},
{file = "llvmlite-0.36.0-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:7768658646c418b9b3beccb7044277a608bc8c62b82a85e73c7e5c065e4157c2"},
{file = "llvmlite-0.36.0-cp36-cp36m-win32.whl", hash = "sha256:05f807209a360d39526d98141b6f281b9c7c771c77a4d1fc22002440642c8de2"},
{file = "llvmlite-0.36.0-cp36-cp36m-win_amd64.whl", hash = "sha256:d1fdd63c371626c25ad834e1c6297eb76cf2f093a40dbb401a87b6476ab4e34e"},
{file = "llvmlite-0.36.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:7c4e7066447305d5095d0b0a9cae7b835d2f0fde143456b3124110eab0856426"},
{file = "llvmlite-0.36.0-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:9dad7e4bb042492914292aea3f4172eca84db731f9478250240955aedba95e08"},
{file = "llvmlite-0.36.0-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:1ce5bc0a638d874a08d4222be0a7e48e5df305d094c2ff8dec525ef32b581551"},
{file = "llvmlite-0.36.0-cp37-cp37m-win32.whl", hash = "sha256:dbedff0f6d417b374253a6bab39aa4b5364f1caab30c06ba8726904776fcf1cb"},
{file = "llvmlite-0.36.0-cp37-cp37m-win_amd64.whl", hash = "sha256:3b17fc4b0dd17bd29d7297d054e2915fad535889907c3f65232ee21f483447c5"},
{file = "llvmlite-0.36.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:b3a77e46e6053e2a86e607e87b97651dda81e619febb914824a927bff4e88737"},
{file = "llvmlite-0.36.0-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:048a7c117641c9be87b90005684e64a6f33ea0897ebab1df8a01214a10d6e79a"},
{file = "llvmlite-0.36.0-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:7db4b0eef93125af1c4092c64a3c73c7dc904101117ef53f8d78a1a499b8d5f4"},
{file = "llvmlite-0.36.0-cp38-cp38-win32.whl", hash = "sha256:50b1828bde514b31431b2bba1aa20b387f5625b81ad6e12fede430a04645e47a"},
{file = "llvmlite-0.36.0-cp38-cp38-win_amd64.whl", hash = "sha256:f608bae781b2d343e15e080c546468c5a6f35f57f0446923ea198dd21f23757e"},
{file = "llvmlite-0.36.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6a3abc8a8889aeb06bf9c4a7e5df5bc7bb1aa0aedd91a599813809abeec80b5a"},
{file = "llvmlite-0.36.0-cp39-cp39-manylinux2010_i686.whl", hash = "sha256:705f0323d931684428bb3451549603299bb5e17dd60fb979d67c3807de0debc1"},
{file = "llvmlite-0.36.0-cp39-cp39-manylinux2010_x86_64.whl", hash = "sha256:5a6548b4899facb182145147185e9166c69826fb424895f227e6b7cf924a8da1"},
{file = "llvmlite-0.36.0-cp39-cp39-win32.whl", hash = "sha256:ff52fb9c2be66b95b0e67d56fce11038397e5be1ea410ee53f5f1175fdbb107a"},
{file = "llvmlite-0.36.0-cp39-cp39-win_amd64.whl", hash = "sha256:1dee416ea49fd338c74ec15c0c013e5273b0961528169af06ff90772614f7f6c"},
{file = "llvmlite-0.36.0.tar.gz", hash = "sha256:765128fdf5f149ed0b889ffbe2b05eb1717f8e20a5c87fa2b4018fbcce0fcfc9"},
]
markdown = [
{file = "Markdown-3.4.1-py3-none-any.whl", hash = "sha256:08fb8465cffd03d10b9dd34a5c3fea908e20391a2a90b88d66362cb05beed186"},
{file = "Markdown-3.4.1.tar.gz", hash = "sha256:3b809086bb6efad416156e00a0da66fe47618a5d6918dd688f53f40c8e4cfeff"},
]
markupsafe = [
{file = "MarkupSafe-2.1.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:86b1f75c4e7c2ac2ccdaec2b9022845dbb81880ca318bb7a0a01fbf7813e3812"},
{file = "MarkupSafe-2.1.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:f121a1420d4e173a5d96e47e9a0c0dcff965afdf1626d28de1460815f7c4ee7a"},
{file = "MarkupSafe-2.1.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a49907dd8420c5685cfa064a1335b6754b74541bbb3706c259c02ed65b644b3e"},
{file = "MarkupSafe-2.1.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:10c1bfff05d95783da83491be968e8fe789263689c02724e0c691933c52994f5"},
{file = "MarkupSafe-2.1.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b7bd98b796e2b6553da7225aeb61f447f80a1ca64f41d83612e6139ca5213aa4"},
{file = "MarkupSafe-2.1.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:b09bf97215625a311f669476f44b8b318b075847b49316d3e28c08e41a7a573f"},
{file = "MarkupSafe-2.1.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:694deca8d702d5db21ec83983ce0bb4b26a578e71fbdbd4fdcd387daa90e4d5e"},
{file = "MarkupSafe-2.1.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:efc1913fd2ca4f334418481c7e595c00aad186563bbc1ec76067848c7ca0a933"},
{file = "MarkupSafe-2.1.1-cp310-cp310-win32.whl", hash = "sha256:4a33dea2b688b3190ee12bd7cfa29d39c9ed176bda40bfa11099a3ce5d3a7ac6"},
{file = "MarkupSafe-2.1.1-cp310-cp310-win_amd64.whl", hash = "sha256:dda30ba7e87fbbb7eab1ec9f58678558fd9a6b8b853530e176eabd064da81417"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:671cd1187ed5e62818414afe79ed29da836dde67166a9fac6d435873c44fdd02"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3799351e2336dc91ea70b034983ee71cf2f9533cdff7c14c90ea126bfd95d65a"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e72591e9ecd94d7feb70c1cbd7be7b3ebea3f548870aa91e2732960fa4d57a37"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6fbf47b5d3728c6aea2abb0589b5d30459e369baa772e0f37a0320185e87c980"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:d5ee4f386140395a2c818d149221149c54849dfcfcb9f1debfe07a8b8bd63f9a"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:bcb3ed405ed3222f9904899563d6fc492ff75cce56cba05e32eff40e6acbeaa3"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:e1c0b87e09fa55a220f058d1d49d3fb8df88fbfab58558f1198e08c1e1de842a"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-win32.whl", hash = "sha256:8dc1c72a69aa7e082593c4a203dcf94ddb74bb5c8a731e4e1eb68d031e8498ff"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-win_amd64.whl", hash = "sha256:97a68e6ada378df82bc9f16b800ab77cbf4b2fada0081794318520138c088e4a"},
{file = "MarkupSafe-2.1.1-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:e8c843bbcda3a2f1e3c2ab25913c80a3c5376cd00c6e8c4a86a89a28c8dc5452"},
{file = "MarkupSafe-2.1.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:0212a68688482dc52b2d45013df70d169f542b7394fc744c02a57374a4207003"},
{file = "MarkupSafe-2.1.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8e576a51ad59e4bfaac456023a78f6b5e6e7651dcd383bcc3e18d06f9b55d6d1"},
{file = "MarkupSafe-2.1.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4b9fe39a2ccc108a4accc2676e77da025ce383c108593d65cc909add5c3bd601"},
{file = "MarkupSafe-2.1.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:96e37a3dc86e80bf81758c152fe66dbf60ed5eca3d26305edf01892257049925"},
{file = "MarkupSafe-2.1.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:6d0072fea50feec76a4c418096652f2c3238eaa014b2f94aeb1d56a66b41403f"},
{file = "MarkupSafe-2.1.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:089cf3dbf0cd6c100f02945abeb18484bd1ee57a079aefd52cffd17fba910b88"},
{file = "MarkupSafe-2.1.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:6a074d34ee7a5ce3effbc526b7083ec9731bb3cbf921bbe1d3005d4d2bdb3a63"},
{file = "MarkupSafe-2.1.1-cp38-cp38-win32.whl", hash = "sha256:421be9fbf0ffe9ffd7a378aafebbf6f4602d564d34be190fc19a193232fd12b1"},
{file = "MarkupSafe-2.1.1-cp38-cp38-win_amd64.whl", hash = "sha256:fc7b548b17d238737688817ab67deebb30e8073c95749d55538ed473130ec0c7"},
{file = "MarkupSafe-2.1.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:e04e26803c9c3851c931eac40c695602c6295b8d432cbe78609649ad9bd2da8a"},
{file = "MarkupSafe-2.1.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b87db4360013327109564f0e591bd2a3b318547bcef31b468a92ee504d07ae4f"},
{file = "MarkupSafe-2.1.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:99a2a507ed3ac881b975a2976d59f38c19386d128e7a9a18b7df6fff1fd4c1d6"},
{file = "MarkupSafe-2.1.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:56442863ed2b06d19c37f94d999035e15ee982988920e12a5b4ba29b62ad1f77"},
{file = "MarkupSafe-2.1.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3ce11ee3f23f79dbd06fb3d63e2f6af7b12db1d46932fe7bd8afa259a5996603"},
{file = "MarkupSafe-2.1.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:33b74d289bd2f5e527beadcaa3f401e0df0a89927c1559c8566c066fa4248ab7"},
{file = "MarkupSafe-2.1.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:43093fb83d8343aac0b1baa75516da6092f58f41200907ef92448ecab8825135"},
{file = "MarkupSafe-2.1.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:8e3dcf21f367459434c18e71b2a9532d96547aef8a871872a5bd69a715c15f96"},
{file = "MarkupSafe-2.1.1-cp39-cp39-win32.whl", hash = "sha256:d4306c36ca495956b6d568d276ac11fdd9c30a36f1b6eb928070dc5360b22e1c"},
{file = "MarkupSafe-2.1.1-cp39-cp39-win_amd64.whl", hash = "sha256:46d00d6cfecdde84d40e572d63735ef81423ad31184100411e6e3388d405e247"},
{file = "MarkupSafe-2.1.1.tar.gz", hash = "sha256:7f91197cc9e48f989d12e4e6fbc46495c446636dfc81b9ccf50bb0ec74b91d4b"},
]
matplotlib = [
{file = "matplotlib-3.5.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:a206a1b762b39398efea838f528b3a6d60cdb26fe9d58b48265787e29cd1d693"},
{file = "matplotlib-3.5.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:cd45a6f3e93a780185f70f05cf2a383daed13c3489233faad83e81720f7ede24"},
{file = "matplotlib-3.5.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:d62880e1f60e5a30a2a8484432bcb3a5056969dc97258d7326ad465feb7ae069"},
{file = "matplotlib-3.5.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9ab29589cef03bc88acfa3a1490359000c18186fc30374d8aa77d33cc4a51a4a"},
{file = "matplotlib-3.5.3-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2886cc009f40e2984c083687251821f305d811d38e3df8ded414265e4583f0c5"},
{file = "matplotlib-3.5.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c995f7d9568f18b5db131ab124c64e51b6820a92d10246d4f2b3f3a66698a15b"},
{file = "matplotlib-3.5.3-cp310-cp310-win32.whl", hash = "sha256:6bb93a0492d68461bd458eba878f52fdc8ac7bdb6c4acdfe43dba684787838c2"},
{file = "matplotlib-3.5.3-cp310-cp310-win_amd64.whl", hash = "sha256:2e6d184ebe291b9e8f7e78bbab7987d269c38ea3e062eace1fe7d898042ef804"},
{file = "matplotlib-3.5.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:6ea6aef5c4338e58d8d376068e28f80a24f54e69f09479d1c90b7172bad9f25b"},
{file = "matplotlib-3.5.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:839d47b8ead7ad9669aaacdbc03f29656dc21f0d41a6fea2d473d856c39c8b1c"},
{file = "matplotlib-3.5.3-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:3b4fa56159dc3c7f9250df88f653f085068bcd32dcd38e479bba58909254af7f"},
{file = "matplotlib-3.5.3-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:94ff86af56a3869a4ae26a9637a849effd7643858a1a04dd5ee50e9ab75069a7"},
{file = "matplotlib-3.5.3-cp37-cp37m-win32.whl", hash = "sha256:35a8ad4dddebd51f94c5d24bec689ec0ec66173bf614374a1244c6241c1595e0"},
{file = "matplotlib-3.5.3-cp37-cp37m-win_amd64.whl", hash = "sha256:43e9d3fa077bf0cc95ded13d331d2156f9973dce17c6f0c8b49ccd57af94dbd9"},
{file = "matplotlib-3.5.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:22227c976ad4dc8c5a5057540421f0d8708c6560744ad2ad638d48e2984e1dbc"},
{file = "matplotlib-3.5.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:bf618a825deb6205f015df6dfe6167a5d9b351203b03fab82043ae1d30f16511"},
{file = "matplotlib-3.5.3-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:9befa5954cdbc085e37d974ff6053da269474177921dd61facdad8023c4aeb51"},
{file = "matplotlib-3.5.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f3840c280ebc87a48488a46f760ea1c0c0c83fcf7abbe2e6baf99d033fd35fd8"},
{file = "matplotlib-3.5.3-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:dacddf5bfcec60e3f26ec5c0ae3d0274853a258b6c3fc5ef2f06a8eb23e042be"},
{file = "matplotlib-3.5.3-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:b428076a55fb1c084c76cb93e68006f27d247169f056412607c5c88828d08f88"},
{file = "matplotlib-3.5.3-cp38-cp38-win32.whl", hash = "sha256:874df7505ba820e0400e7091199decf3ff1fde0583652120c50cd60d5820ca9a"},
{file = "matplotlib-3.5.3-cp38-cp38-win_amd64.whl", hash = "sha256:b28de401d928890187c589036857a270a032961411934bdac4cf12dde3d43094"},
{file = "matplotlib-3.5.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:3211ba82b9f1518d346f6309df137b50c3dc4421b4ed4815d1d7eadc617f45a1"},
{file = "matplotlib-3.5.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6fe807e8a22620b4cd95cfbc795ba310dc80151d43b037257250faf0bfcd82bc"},
{file = "matplotlib-3.5.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:5c096363b206a3caf43773abebdbb5a23ea13faef71d701b21a9c27fdcef72f4"},
{file = "matplotlib-3.5.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0bcdfcb0f976e1bac6721d7d457c17be23cf7501f977b6a38f9d38a3762841f7"},
{file = "matplotlib-3.5.3-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:1e64ac9be9da6bfff0a732e62116484b93b02a0b4d4b19934fb4f8e7ad26ad6a"},
{file = "matplotlib-3.5.3-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:73dd93dc35c85dece610cca8358003bf0760d7986f70b223e2306b4ea6d1406b"},
{file = "matplotlib-3.5.3-cp39-cp39-win32.whl", hash = "sha256:879c7e5fce4939c6aa04581dfe08d57eb6102a71f2e202e3314d5fbc072fd5a0"},
{file = "matplotlib-3.5.3-cp39-cp39-win_amd64.whl", hash = "sha256:ab8d26f07fe64f6f6736d635cce7bfd7f625320490ed5bfc347f2cdb4fae0e56"},
{file = "matplotlib-3.5.3-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:99482b83ebf4eb6d5fc6813d7aacdefdd480f0d9c0b52dcf9f1cc3b2c4b3361a"},
{file = "matplotlib-3.5.3-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:f814504e459c68118bf2246a530ed953ebd18213dc20e3da524174d84ed010b2"},
{file = "matplotlib-3.5.3-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:57f1b4e69f438a99bb64d7f2c340db1b096b41ebaa515cf61ea72624279220ce"},
{file = "matplotlib-3.5.3-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:d2484b350bf3d32cae43f85dcfc89b3ed7bd2bcd781ef351f93eb6fb2cc483f9"},
{file = "matplotlib-3.5.3.tar.gz", hash = "sha256:339cac48b80ddbc8bfd05daae0a3a73414651a8596904c2a881cfd1edb65f26c"},
]
matplotlib-inline = [
{file = "matplotlib-inline-0.1.6.tar.gz", hash = "sha256:f887e5f10ba98e8d2b150ddcf4702c1e5f8b3a20005eb0f74bfdbd360ee6f304"},
{file = "matplotlib_inline-0.1.6-py3-none-any.whl", hash = "sha256:f1f41aab5328aa5aaea9b16d083b128102f8712542f819fe7e6a420ff581b311"},
]
mccabe = [
{file = "mccabe-0.6.1-py2.py3-none-any.whl", hash = "sha256:ab8a6258860da4b6677da4bd2fe5dc2c659cff31b3ee4f7f5d64e79735b80d42"},
{file = "mccabe-0.6.1.tar.gz", hash = "sha256:dd8d182285a0fe56bace7f45b5e7d1a6ebcbf524e8f3bd87eb0f125271b8831f"},
]
mistune = [
{file = "mistune-2.0.4-py2.py3-none-any.whl", hash = "sha256:182cc5ee6f8ed1b807de6b7bb50155df7b66495412836b9a74c8fbdfc75fe36d"},
{file = "mistune-2.0.4.tar.gz", hash = "sha256:9ee0a66053e2267aba772c71e06891fa8f1af6d4b01d5e84e267b4570d4d9808"},
]
mpmath = [
{file = "mpmath-1.2.1-py3-none-any.whl", hash = "sha256:604bc21bd22d2322a177c73bdb573994ef76e62edd595d17e00aff24b0667e5c"},
{file = "mpmath-1.2.1.tar.gz", hash = "sha256:79ffb45cf9f4b101a807595bcb3e72e0396202e0b1d25d689134b48c4216a81a"},
]
mypy = [
{file = "mypy-0.971-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:f2899a3cbd394da157194f913a931edfd4be5f274a88041c9dc2d9cdcb1c315c"},
{file = "mypy-0.971-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:98e02d56ebe93981c41211c05adb630d1d26c14195d04d95e49cd97dbc046dc5"},
{file = "mypy-0.971-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:19830b7dba7d5356d3e26e2427a2ec91c994cd92d983142cbd025ebe81d69cf3"},
{file = "mypy-0.971-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:02ef476f6dcb86e6f502ae39a16b93285fef97e7f1ff22932b657d1ef1f28655"},
{file = "mypy-0.971-cp310-cp310-win_amd64.whl", hash = "sha256:25c5750ba5609a0c7550b73a33deb314ecfb559c350bb050b655505e8aed4103"},
{file = "mypy-0.971-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:d3348e7eb2eea2472db611486846742d5d52d1290576de99d59edeb7cd4a42ca"},
{file = "mypy-0.971-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:3fa7a477b9900be9b7dd4bab30a12759e5abe9586574ceb944bc29cddf8f0417"},
{file = "mypy-0.971-cp36-cp36m-win_amd64.whl", hash = "sha256:2ad53cf9c3adc43cf3bea0a7d01a2f2e86db9fe7596dfecb4496a5dda63cbb09"},
{file = "mypy-0.971-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:855048b6feb6dfe09d3353466004490b1872887150c5bb5caad7838b57328cc8"},
{file = "mypy-0.971-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:23488a14a83bca6e54402c2e6435467a4138785df93ec85aeff64c6170077fb0"},
{file = "mypy-0.971-cp37-cp37m-win_amd64.whl", hash = "sha256:4b21e5b1a70dfb972490035128f305c39bc4bc253f34e96a4adf9127cf943eb2"},
{file = "mypy-0.971-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:9796a2ba7b4b538649caa5cecd398d873f4022ed2333ffde58eaf604c4d2cb27"},
{file = "mypy-0.971-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:5a361d92635ad4ada1b1b2d3630fc2f53f2127d51cf2def9db83cba32e47c856"},
{file = "mypy-0.971-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:b793b899f7cf563b1e7044a5c97361196b938e92f0a4343a5d27966a53d2ec71"},
{file = "mypy-0.971-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:d1ea5d12c8e2d266b5fb8c7a5d2e9c0219fedfeb493b7ed60cd350322384ac27"},
{file = "mypy-0.971-cp38-cp38-win_amd64.whl", hash = "sha256:23c7ff43fff4b0df93a186581885c8512bc50fc4d4910e0f838e35d6bb6b5e58"},
{file = "mypy-0.971-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:1f7656b69974a6933e987ee8ffb951d836272d6c0f81d727f1d0e2696074d9e6"},
{file = "mypy-0.971-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:d2022bfadb7a5c2ef410d6a7c9763188afdb7f3533f22a0a32be10d571ee4bbe"},
{file = "mypy-0.971-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:ef943c72a786b0f8d90fd76e9b39ce81fb7171172daf84bf43eaf937e9f220a9"},
{file = "mypy-0.971-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:d744f72eb39f69312bc6c2abf8ff6656973120e2eb3f3ec4f758ed47e414a4bf"},
{file = "mypy-0.971-cp39-cp39-win_amd64.whl", hash = "sha256:77a514ea15d3007d33a9e2157b0ba9c267496acf12a7f2b9b9f8446337aac5b0"},
{file = "mypy-0.971-py3-none-any.whl", hash = "sha256:0d054ef16b071149917085f51f89555a576e2618d5d9dd70bd6eea6410af3ac9"},
{file = "mypy-0.971.tar.gz", hash = "sha256:40b0f21484238269ae6a57200c807d80debc6459d444c0489a102d7c6a75fa56"},
]
mypy-extensions = [
{file = "mypy_extensions-0.4.3-py2.py3-none-any.whl", hash = "sha256:090fedd75945a69ae91ce1303b5824f428daf5a028d2f6ab8a299250a846f15d"},
{file = "mypy_extensions-0.4.3.tar.gz", hash = "sha256:2d82818f5bb3e369420cb3c4060a7970edba416647068eb4c5343488a6c604a8"},
]
nbclient = [
{file = "nbclient-0.6.6-py3-none-any.whl", hash = "sha256:09bae4ea2df79fa6bc50aeb8278d8b79d2036792824337fa6eee834afae17312"},
{file = "nbclient-0.6.6.tar.gz", hash = "sha256:0df76a7961d99a681b4796c74a1f2553b9f998851acc01896dce064ad19a9027"},
]
nbconvert = [
{file = "nbconvert-7.0.0rc3-py3-none-any.whl", hash = "sha256:6774a0bf293d76fa2e886255812d953b750059330c3d7305ad271c02590f1957"},
{file = "nbconvert-7.0.0rc3.tar.gz", hash = "sha256:efb9aae47dad2eae02dd9e7d2cc8add6b7e8f15c6548c0de3363f6d2f8a39146"},
]
nbformat = [
{file = "nbformat-5.4.0-py3-none-any.whl", hash = "sha256:0d6072aaec95dddc39735c144ee8bbc6589c383fb462e4058abc855348152dad"},
{file = "nbformat-5.4.0.tar.gz", hash = "sha256:44ba5ca6acb80c5d5a500f1e5b83ede8cbe364d5a495c4c8cf60aaf1ba656501"},
]
nbsphinx = [
{file = "nbsphinx-0.8.9-py3-none-any.whl", hash = "sha256:a7d743762249ee6bac3350a91eb3717a6e1c75f239f2c2a85491f9aca5a63be1"},
{file = "nbsphinx-0.8.9.tar.gz", hash = "sha256:4ade86b2a41f8f41efd3ea99dae84c3368fe8ba3f837d50c8815ce9424c5994f"},
]
nest-asyncio = [
{file = "nest_asyncio-1.5.5-py3-none-any.whl", hash = "sha256:b98e3ec1b246135e4642eceffa5a6c23a3ab12c82ff816a92c612d68205813b2"},
{file = "nest_asyncio-1.5.5.tar.gz", hash = "sha256:e442291cd942698be619823a17a86a5759eabe1f8613084790de189fe9e16d65"},
]
networkx = [
{file = "networkx-2.8.5-py3-none-any.whl", hash = "sha256:a762f4b385692d9c3a6f2912d058d76d29a827deaedf9e63ed14d397b8030687"},
{file = "networkx-2.8.5.tar.gz", hash = "sha256:15a7b81a360791c458c55a417418ea136c13378cfdc06a2dcdc12bd2f9cf09c1"},
]
notebook = [
{file = "notebook-6.4.12-py3-none-any.whl", hash = "sha256:8c07a3bb7640e371f8a609bdbb2366a1976c6a2589da8ef917f761a61e3ad8b1"},
{file = "notebook-6.4.12.tar.gz", hash = "sha256:6268c9ec9048cff7a45405c990c29ac9ca40b0bc3ec29263d218c5e01f2b4e86"},
]
numba = [
{file = "numba-0.53.1-cp36-cp36m-macosx_10_14_x86_64.whl", hash = "sha256:b23de6b6837c132087d06b8b92d343edb54b885873b824a037967fbd5272ebb7"},
{file = "numba-0.53.1-cp36-cp36m-manylinux2014_i686.whl", hash = "sha256:6545b9e9b0c112b81de7f88a3c787469a357eeff8211e90b8f45ee243d521cc2"},
{file = "numba-0.53.1-cp36-cp36m-manylinux2014_x86_64.whl", hash = "sha256:8fa5c963a43855050a868106a87cd614f3c3f459951c8fc468aec263ef80d063"},
{file = "numba-0.53.1-cp36-cp36m-win32.whl", hash = "sha256:aaa6ebf56afb0b6752607b9f3bf39e99b0efe3c1fa6849698373925ee6838fd7"},
{file = "numba-0.53.1-cp36-cp36m-win_amd64.whl", hash = "sha256:b08b3df38aab769df79ed948d70f0a54a3cdda49d58af65369235c204ec5d0f3"},
{file = "numba-0.53.1-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:bf5c463b62d013e3f709cc8277adf2f4f4d8cc6757293e29c6db121b77e6b760"},
{file = "numba-0.53.1-cp37-cp37m-manylinux2014_i686.whl", hash = "sha256:74df02e73155f669e60dcff07c4eef4a03dbf5b388594db74142ab40914fe4f5"},
{file = "numba-0.53.1-cp37-cp37m-manylinux2014_x86_64.whl", hash = "sha256:5165709bf62f28667e10b9afe6df0ce1037722adab92d620f59cb8bbb8104641"},
{file = "numba-0.53.1-cp37-cp37m-win32.whl", hash = "sha256:2e96958ed2ca7e6d967b2ce29c8da0ca47117e1de28e7c30b2c8c57386506fa5"},
{file = "numba-0.53.1-cp37-cp37m-win_amd64.whl", hash = "sha256:276f9d1674fe08d95872d81b97267c6b39dd830f05eb992608cbede50fcf48a9"},
{file = "numba-0.53.1-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:4c4c8d102512ae472af52c76ad9522da718c392cb59f4cd6785d711fa5051a2a"},
{file = "numba-0.53.1-cp38-cp38-manylinux2014_i686.whl", hash = "sha256:691adbeac17dbdf6ed7c759e9e33a522351f07d2065fe926b264b6b2c15fd89b"},
{file = "numba-0.53.1-cp38-cp38-manylinux2014_x86_64.whl", hash = "sha256:94aab3e0e9e8754116325ce026e1b29ae72443c706a3104cf7f3368dc3012912"},
{file = "numba-0.53.1-cp38-cp38-win32.whl", hash = "sha256:aabeec89bb3e3162136eea492cea7ee8882ddcda2201f05caecdece192c40896"},
{file = "numba-0.53.1-cp38-cp38-win_amd64.whl", hash = "sha256:1895ebd256819ff22256cd6fe24aa8f7470b18acc73e7917e8e93c9ac7f565dc"},
{file = "numba-0.53.1-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:224d197a46a9e602a16780d87636e199e2cdef528caef084a4d8fd8909c2455c"},
{file = "numba-0.53.1-cp39-cp39-manylinux2014_i686.whl", hash = "sha256:aba7acb247a09d7f12bd17a8e28bbb04e8adef9fc20ca29835d03b7894e1b49f"},
{file = "numba-0.53.1-cp39-cp39-manylinux2014_x86_64.whl", hash = "sha256:bd126f1f49da6fc4b3169cf1d96f1c3b3f84a7badd11fe22da344b923a00e744"},
{file = "numba-0.53.1-cp39-cp39-win32.whl", hash = "sha256:0ef9d1f347b251282ae46e5a5033600aa2d0dfa1ee8c16cb8137b8cd6f79e221"},
{file = "numba-0.53.1-cp39-cp39-win_amd64.whl", hash = "sha256:17146885cbe4e89c9d4abd4fcb8886dee06d4591943dc4343500c36ce2fcfa69"},
{file = "numba-0.53.1.tar.gz", hash = "sha256:9cd4e5216acdc66c4e9dab2dfd22ddb5bef151185c070d4a3cd8e78638aff5b0"},
]
numpy = [
{file = "numpy-1.23.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e603ca1fb47b913942f3e660a15e55a9ebca906857edfea476ae5f0fe9b457d5"},
{file = "numpy-1.23.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:633679a472934b1c20a12ed0c9a6c9eb167fbb4cb89031939bfd03dd9dbc62b8"},
{file = "numpy-1.23.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:17e5226674f6ea79e14e3b91bfbc153fdf3ac13f5cc54ee7bc8fdbe820a32da0"},
{file = "numpy-1.23.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bdc02c0235b261925102b1bd586579b7158e9d0d07ecb61148a1799214a4afd5"},
{file = "numpy-1.23.2-cp310-cp310-win32.whl", hash = "sha256:df28dda02c9328e122661f399f7655cdcbcf22ea42daa3650a26bce08a187450"},
{file = "numpy-1.23.2-cp310-cp310-win_amd64.whl", hash = "sha256:8ebf7e194b89bc66b78475bd3624d92980fca4e5bb86dda08d677d786fefc414"},
{file = "numpy-1.23.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:dc76bca1ca98f4b122114435f83f1fcf3c0fe48e4e6f660e07996abf2f53903c"},
{file = "numpy-1.23.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:ecfdd68d334a6b97472ed032b5b37a30d8217c097acfff15e8452c710e775524"},
{file = "numpy-1.23.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5593f67e66dea4e237f5af998d31a43e447786b2154ba1ad833676c788f37cde"},
{file = "numpy-1.23.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ac987b35df8c2a2eab495ee206658117e9ce867acf3ccb376a19e83070e69418"},
{file = "numpy-1.23.2-cp311-cp311-win32.whl", hash = "sha256:d98addfd3c8728ee8b2c49126f3c44c703e2b005d4a95998e2167af176a9e722"},
{file = "numpy-1.23.2-cp311-cp311-win_amd64.whl", hash = "sha256:8ecb818231afe5f0f568c81f12ce50f2b828ff2b27487520d85eb44c71313b9e"},
{file = "numpy-1.23.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:909c56c4d4341ec8315291a105169d8aae732cfb4c250fbc375a1efb7a844f8f"},
{file = "numpy-1.23.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:8247f01c4721479e482cc2f9f7d973f3f47810cbc8c65e38fd1bbd3141cc9842"},
{file = "numpy-1.23.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b8b97a8a87cadcd3f94659b4ef6ec056261fa1e1c3317f4193ac231d4df70215"},
{file = "numpy-1.23.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bd5b7ccae24e3d8501ee5563e82febc1771e73bd268eef82a1e8d2b4d556ae66"},
{file = "numpy-1.23.2-cp38-cp38-win32.whl", hash = "sha256:9b83d48e464f393d46e8dd8171687394d39bc5abfe2978896b77dc2604e8635d"},
{file = "numpy-1.23.2-cp38-cp38-win_amd64.whl", hash = "sha256:dec198619b7dbd6db58603cd256e092bcadef22a796f778bf87f8592b468441d"},
{file = "numpy-1.23.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:4f41f5bf20d9a521f8cab3a34557cd77b6f205ab2116651f12959714494268b0"},
{file = "numpy-1.23.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:806cc25d5c43e240db709875e947076b2826f47c2c340a5a2f36da5bb10c58d6"},
{file = "numpy-1.23.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8f9d84a24889ebb4c641a9b99e54adb8cab50972f0166a3abc14c3b93163f074"},
{file = "numpy-1.23.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c403c81bb8ffb1c993d0165a11493fd4bf1353d258f6997b3ee288b0a48fce77"},
{file = "numpy-1.23.2-cp39-cp39-win32.whl", hash = "sha256:cf8c6aed12a935abf2e290860af8e77b26a042eb7f2582ff83dc7ed5f963340c"},
{file = "numpy-1.23.2-cp39-cp39-win_amd64.whl", hash = "sha256:5e28cd64624dc2354a349152599e55308eb6ca95a13ce6a7d5679ebff2962913"},
{file = "numpy-1.23.2-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:806970e69106556d1dd200e26647e9bee5e2b3f1814f9da104a943e8d548ca38"},
{file = "numpy-1.23.2-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2bd879d3ca4b6f39b7770829f73278b7c5e248c91d538aab1e506c628353e47f"},
{file = "numpy-1.23.2-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:be6b350dfbc7f708d9d853663772a9310783ea58f6035eec649fb9c4371b5389"},
{file = "numpy-1.23.2.tar.gz", hash = "sha256:b78d00e48261fbbd04aa0d7427cf78d18401ee0abd89c7559bbf422e5b1c7d01"},
]
oauthlib = [
{file = "oauthlib-3.2.0-py3-none-any.whl", hash = "sha256:6db33440354787f9b7f3a6dbd4febf5d0f93758354060e802f6c06cb493022fe"},
{file = "oauthlib-3.2.0.tar.gz", hash = "sha256:23a8208d75b902797ea29fd31fa80a15ed9dc2c6c16fe73f5d346f83f6fa27a2"},
]
opt-einsum = [
{file = "opt_einsum-3.3.0-py3-none-any.whl", hash = "sha256:2455e59e3947d3c275477df7f5205b30635e266fe6dc300e3d9f9646bfcea147"},
{file = "opt_einsum-3.3.0.tar.gz", hash = "sha256:59f6475f77bbc37dcf7cd748519c0ec60722e91e63ca114e68821c0c54a46549"},
]
packaging = [
{file = "packaging-21.3-py3-none-any.whl", hash = "sha256:ef103e05f519cdc783ae24ea4e2e0f508a9c99b2d4969652eed6a2e1ea5bd522"},
{file = "packaging-21.3.tar.gz", hash = "sha256:dd47c42927d89ab911e606518907cc2d3a1f38bbd026385970643f9c5b8ecfeb"},
]
pandas = [
{file = "pandas-1.4.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:d51674ed8e2551ef7773820ef5dab9322be0828629f2cbf8d1fc31a0c4fed640"},
{file = "pandas-1.4.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:16ad23db55efcc93fa878f7837267973b61ea85d244fc5ff0ccbcfa5638706c5"},
{file = "pandas-1.4.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:958a0588149190c22cdebbc0797e01972950c927a11a900fe6c2296f207b1d6f"},
{file = "pandas-1.4.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e48fbb64165cda451c06a0f9e4c7a16b534fcabd32546d531b3c240ce2844112"},
{file = "pandas-1.4.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6f803320c9da732cc79210d7e8cc5c8019aad512589c910c66529eb1b1818230"},
{file = "pandas-1.4.3-cp310-cp310-win_amd64.whl", hash = "sha256:2893e923472a5e090c2d5e8db83e8f907364ec048572084c7d10ef93546be6d1"},
{file = "pandas-1.4.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:24ea75f47bbd5574675dae21d51779a4948715416413b30614c1e8b480909f81"},
{file = "pandas-1.4.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:d5ebc990bd34f4ac3c73a2724c2dcc9ee7bf1ce6cf08e87bb25c6ad33507e318"},
{file = "pandas-1.4.3-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:d6c0106415ff1a10c326c49bc5dd9ea8b9897a6ca0c8688eb9c30ddec49535ef"},
{file = "pandas-1.4.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:78b00429161ccb0da252229bcda8010b445c4bf924e721265bec5a6e96a92e92"},
{file = "pandas-1.4.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6dfbf16b1ea4f4d0ee11084d9c026340514d1d30270eaa82a9f1297b6c8ecbf0"},
{file = "pandas-1.4.3-cp38-cp38-win32.whl", hash = "sha256:48350592665ea3cbcd07efc8c12ff12d89be09cd47231c7925e3b8afada9d50d"},
{file = "pandas-1.4.3-cp38-cp38-win_amd64.whl", hash = "sha256:605d572126eb4ab2eadf5c59d5d69f0608df2bf7bcad5c5880a47a20a0699e3e"},
{file = "pandas-1.4.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:a3924692160e3d847e18702bb048dc38e0e13411d2b503fecb1adf0fcf950ba4"},
{file = "pandas-1.4.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:07238a58d7cbc8a004855ade7b75bbd22c0db4b0ffccc721556bab8a095515f6"},
{file = "pandas-1.4.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:755679c49460bd0d2f837ab99f0a26948e68fa0718b7e42afbabd074d945bf84"},
{file = "pandas-1.4.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:41fc406e374590a3d492325b889a2686b31e7a7780bec83db2512988550dadbf"},
{file = "pandas-1.4.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1d9382f72a4f0e93909feece6fef5500e838ce1c355a581b3d8f259839f2ea76"},
{file = "pandas-1.4.3-cp39-cp39-win32.whl", hash = "sha256:0daf876dba6c622154b2e6741f29e87161f844e64f84801554f879d27ba63c0d"},
{file = "pandas-1.4.3-cp39-cp39-win_amd64.whl", hash = "sha256:721a3dd2f06ef942f83a819c0f3f6a648b2830b191a72bbe9451bcd49c3bd42e"},
{file = "pandas-1.4.3.tar.gz", hash = "sha256:2ff7788468e75917574f080cd4681b27e1a7bf36461fe968b49a87b5a54d007c"},
]
pandoc = [
{file = "pandoc-2.2.tar.gz", hash = "sha256:d063ee252f2761010f16cf3a14912ad9245187c24c56f4b1ad9696e104fe6e1e"},
]
pandocfilters = [
{file = "pandocfilters-1.5.0-py2.py3-none-any.whl", hash = "sha256:33aae3f25fd1a026079f5d27bdd52496f0e0803b3469282162bafdcbdf6ef14f"},
{file = "pandocfilters-1.5.0.tar.gz", hash = "sha256:0b679503337d233b4339a817bfc8c50064e2eff681314376a47cb582305a7a38"},
]
parso = [
{file = "parso-0.8.3-py2.py3-none-any.whl", hash = "sha256:c001d4636cd3aecdaf33cbb40aebb59b094be2a74c556778ef5576c175e19e75"},
{file = "parso-0.8.3.tar.gz", hash = "sha256:8c07be290bb59f03588915921e29e8a50002acaf2cdc5fa0e0114f91709fafa0"},
]
pastel = [
{file = "pastel-0.2.1-py2.py3-none-any.whl", hash = "sha256:4349225fcdf6c2bb34d483e523475de5bb04a5c10ef711263452cb37d7dd4364"},
{file = "pastel-0.2.1.tar.gz", hash = "sha256:e6581ac04e973cac858828c6202c1e1e81fee1dc7de7683f3e1ffe0bfd8a573d"},
]
pathspec = [
{file = "pathspec-0.9.0-py2.py3-none-any.whl", hash = "sha256:7d15c4ddb0b5c802d161efc417ec1a2558ea2653c2e8ad9c19098201dc1c993a"},
{file = "pathspec-0.9.0.tar.gz", hash = "sha256:e564499435a2673d586f6b2130bb5b95f04a3ba06f81b8f895b651a3c76aabb1"},
]
patsy = [
{file = "patsy-0.5.2-py2.py3-none-any.whl", hash = "sha256:cc80955ae8c13a7e7c4051eda7b277c8f909f50bc7d73e124bc38e2ee3d95041"},
{file = "patsy-0.5.2.tar.gz", hash = "sha256:5053de7804676aba62783dbb0f23a2b3d74e35e5bfa238b88b7cbf148a38b69d"},
]
pexpect = [
{file = "pexpect-4.8.0-py2.py3-none-any.whl", hash = "sha256:0b48a55dcb3c05f3329815901ea4fc1537514d6ba867a152b581d69ae3710937"},
{file = "pexpect-4.8.0.tar.gz", hash = "sha256:fc65a43959d153d0114afe13997d439c22823a27cefceb5ff35c2178c6784c0c"},
]
pickleshare = [
{file = "pickleshare-0.7.5-py2.py3-none-any.whl", hash = "sha256:9649af414d74d4df115d5d718f82acb59c9d418196b7b4290ed47a12ce62df56"},
{file = "pickleshare-0.7.5.tar.gz", hash = "sha256:87683d47965c1da65cdacaf31c8441d12b8044cdec9aca500cd78fc2c683afca"},
]
pillow = [
{file = "Pillow-9.2.0-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:a9c9bc489f8ab30906d7a85afac4b4944a572a7432e00698a7239f44a44e6efb"},
{file = "Pillow-9.2.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:510cef4a3f401c246cfd8227b300828715dd055463cdca6176c2e4036df8bd4f"},
{file = "Pillow-9.2.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7888310f6214f19ab2b6df90f3f06afa3df7ef7355fc025e78a3044737fab1f5"},
{file = "Pillow-9.2.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:831e648102c82f152e14c1a0938689dbb22480c548c8d4b8b248b3e50967b88c"},
{file = "Pillow-9.2.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1cc1d2451e8a3b4bfdb9caf745b58e6c7a77d2e469159b0d527a4554d73694d1"},
{file = "Pillow-9.2.0-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:136659638f61a251e8ed3b331fc6ccd124590eeff539de57c5f80ef3a9594e58"},
{file = "Pillow-9.2.0-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:6e8c66f70fb539301e064f6478d7453e820d8a2c631da948a23384865cd95544"},
{file = "Pillow-9.2.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:37ff6b522a26d0538b753f0b4e8e164fdada12db6c6f00f62145d732d8a3152e"},
{file = "Pillow-9.2.0-cp310-cp310-win32.whl", hash = "sha256:c79698d4cd9318d9481d89a77e2d3fcaeff5486be641e60a4b49f3d2ecca4e28"},
{file = "Pillow-9.2.0-cp310-cp310-win_amd64.whl", hash = "sha256:254164c57bab4b459f14c64e93df11eff5ded575192c294a0c49270f22c5d93d"},
{file = "Pillow-9.2.0-cp311-cp311-macosx_10_10_universal2.whl", hash = "sha256:408673ed75594933714482501fe97e055a42996087eeca7e5d06e33218d05aa8"},
{file = "Pillow-9.2.0-cp311-cp311-macosx_11_0_universal2.whl", hash = "sha256:727dd1389bc5cb9827cbd1f9d40d2c2a1a0c9b32dd2261db522d22a604a6eec9"},
{file = "Pillow-9.2.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:50dff9cc21826d2977ef2d2a205504034e3a4563ca6f5db739b0d1026658e004"},
{file = "Pillow-9.2.0-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cb6259196a589123d755380b65127ddc60f4c64b21fc3bb46ce3a6ea663659b0"},
{file = "Pillow-9.2.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7b0554af24df2bf96618dac71ddada02420f946be943b181108cac55a7a2dcd4"},
{file = "Pillow-9.2.0-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:15928f824870535c85dbf949c09d6ae7d3d6ac2d6efec80f3227f73eefba741c"},
{file = "Pillow-9.2.0-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:bdd0de2d64688ecae88dd8935012c4a72681e5df632af903a1dca8c5e7aa871a"},
{file = "Pillow-9.2.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:d5b87da55a08acb586bad5c3aa3b86505f559b84f39035b233d5bf844b0834b1"},
{file = "Pillow-9.2.0-cp311-cp311-win32.whl", hash = "sha256:b6d5e92df2b77665e07ddb2e4dbd6d644b78e4c0d2e9272a852627cdba0d75cf"},
{file = "Pillow-9.2.0-cp311-cp311-win_amd64.whl", hash = "sha256:6bf088c1ce160f50ea40764f825ec9b72ed9da25346216b91361eef8ad1b8f8c"},
{file = "Pillow-9.2.0-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:2c58b24e3a63efd22554c676d81b0e57f80e0a7d3a5874a7e14ce90ec40d3069"},
{file = "Pillow-9.2.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:eef7592281f7c174d3d6cbfbb7ee5984a671fcd77e3fc78e973d492e9bf0eb3f"},
{file = "Pillow-9.2.0-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:dcd7b9c7139dc8258d164b55696ecd16c04607f1cc33ba7af86613881ffe4ac8"},
{file = "Pillow-9.2.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a138441e95562b3c078746a22f8fca8ff1c22c014f856278bdbdd89ca36cff1b"},
{file = "Pillow-9.2.0-cp37-cp37m-manylinux_2_28_aarch64.whl", hash = "sha256:93689632949aff41199090eff5474f3990b6823404e45d66a5d44304e9cdc467"},
{file = "Pillow-9.2.0-cp37-cp37m-manylinux_2_28_x86_64.whl", hash = "sha256:f3fac744f9b540148fa7715a435d2283b71f68bfb6d4aae24482a890aed18b59"},
{file = "Pillow-9.2.0-cp37-cp37m-win32.whl", hash = "sha256:fa768eff5f9f958270b081bb33581b4b569faabf8774726b283edb06617101dc"},
{file = "Pillow-9.2.0-cp37-cp37m-win_amd64.whl", hash = "sha256:69bd1a15d7ba3694631e00df8de65a8cb031911ca11f44929c97fe05eb9b6c1d"},
{file = "Pillow-9.2.0-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:030e3460861488e249731c3e7ab59b07c7853838ff3b8e16aac9561bb345da14"},
{file = "Pillow-9.2.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:74a04183e6e64930b667d321524e3c5361094bb4af9083db5c301db64cd341f3"},
{file = "Pillow-9.2.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2d33a11f601213dcd5718109c09a52c2a1c893e7461f0be2d6febc2879ec2402"},
{file = "Pillow-9.2.0-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1fd6f5e3c0e4697fa7eb45b6e93996299f3feee73a3175fa451f49a74d092b9f"},
{file = "Pillow-9.2.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a647c0d4478b995c5e54615a2e5360ccedd2f85e70ab57fbe817ca613d5e63b8"},
{file = "Pillow-9.2.0-cp38-cp38-manylinux_2_28_aarch64.whl", hash = "sha256:4134d3f1ba5f15027ff5c04296f13328fecd46921424084516bdb1b2548e66ff"},
{file = "Pillow-9.2.0-cp38-cp38-manylinux_2_28_x86_64.whl", hash = "sha256:bc431b065722a5ad1dfb4df354fb9333b7a582a5ee39a90e6ffff688d72f27a1"},
{file = "Pillow-9.2.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:1536ad017a9f789430fb6b8be8bf99d2f214c76502becc196c6f2d9a75b01b76"},
{file = "Pillow-9.2.0-cp38-cp38-win32.whl", hash = "sha256:2ad0d4df0f5ef2247e27fc790d5c9b5a0af8ade9ba340db4a73bb1a4a3e5fb4f"},
{file = "Pillow-9.2.0-cp38-cp38-win_amd64.whl", hash = "sha256:ec52c351b35ca269cb1f8069d610fc45c5bd38c3e91f9ab4cbbf0aebc136d9c8"},
{file = "Pillow-9.2.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:0ed2c4ef2451de908c90436d6e8092e13a43992f1860275b4d8082667fbb2ffc"},
{file = "Pillow-9.2.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:4ad2f835e0ad81d1689f1b7e3fbac7b01bb8777d5a985c8962bedee0cc6d43da"},
{file = "Pillow-9.2.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ea98f633d45f7e815db648fd7ff0f19e328302ac36427343e4432c84432e7ff4"},
{file = "Pillow-9.2.0-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7761afe0126d046974a01e030ae7529ed0ca6a196de3ec6937c11df0df1bc91c"},
{file = "Pillow-9.2.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9a54614049a18a2d6fe156e68e188da02a046a4a93cf24f373bffd977e943421"},
{file = "Pillow-9.2.0-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:5aed7dde98403cd91d86a1115c78d8145c83078e864c1de1064f52e6feb61b20"},
{file = "Pillow-9.2.0-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:13b725463f32df1bfeacbf3dd197fb358ae8ebcd8c5548faa75126ea425ccb60"},
{file = "Pillow-9.2.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:808add66ea764ed97d44dda1ac4f2cfec4c1867d9efb16a33d158be79f32b8a4"},
{file = "Pillow-9.2.0-cp39-cp39-win32.whl", hash = "sha256:337a74fd2f291c607d220c793a8135273c4c2ab001b03e601c36766005f36885"},
{file = "Pillow-9.2.0-cp39-cp39-win_amd64.whl", hash = "sha256:fac2d65901fb0fdf20363fbd345c01958a742f2dc62a8dd4495af66e3ff502a4"},
{file = "Pillow-9.2.0-pp37-pypy37_pp73-macosx_10_10_x86_64.whl", hash = "sha256:ad2277b185ebce47a63f4dc6302e30f05762b688f8dc3de55dbae4651872cdf3"},
{file = "Pillow-9.2.0-pp37-pypy37_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7c7b502bc34f6e32ba022b4a209638f9e097d7a9098104ae420eb8186217ebbb"},
{file = "Pillow-9.2.0-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3d1f14f5f691f55e1b47f824ca4fdcb4b19b4323fe43cc7bb105988cad7496be"},
{file = "Pillow-9.2.0-pp37-pypy37_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:dfe4c1fedfde4e2fbc009d5ad420647f7730d719786388b7de0999bf32c0d9fd"},
{file = "Pillow-9.2.0-pp38-pypy38_pp73-macosx_10_10_x86_64.whl", hash = "sha256:f07f1f00e22b231dd3d9b9208692042e29792d6bd4f6639415d2f23158a80013"},
{file = "Pillow-9.2.0-pp38-pypy38_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1802f34298f5ba11d55e5bb09c31997dc0c6aed919658dfdf0198a2fe75d5490"},
{file = "Pillow-9.2.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:17d4cafe22f050b46d983b71c707162d63d796a1235cdf8b9d7a112e97b15bac"},
{file = "Pillow-9.2.0-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:96b5e6874431df16aee0c1ba237574cb6dff1dcb173798faa6a9d8b399a05d0e"},
{file = "Pillow-9.2.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:0030fdbd926fb85844b8b92e2f9449ba89607231d3dd597a21ae72dc7fe26927"},
{file = "Pillow-9.2.0.tar.gz", hash = "sha256:75e636fd3e0fb872693f23ccb8a5ff2cd578801251f3a4f6854c6a5d437d3c04"},
]
pkginfo = [
{file = "pkginfo-1.8.3-py2.py3-none-any.whl", hash = "sha256:848865108ec99d4901b2f7e84058b6e7660aae8ae10164e015a6dcf5b242a594"},
{file = "pkginfo-1.8.3.tar.gz", hash = "sha256:a84da4318dd86f870a9447a8c98340aa06216bfc6f2b7bdc4b8766984ae1867c"},
]
pkgutil-resolve-name = [
{file = "pkgutil_resolve_name-1.3.10-py3-none-any.whl", hash = "sha256:ca27cc078d25c5ad71a9de0a7a330146c4e014c2462d9af19c6b828280649c5e"},
{file = "pkgutil_resolve_name-1.3.10.tar.gz", hash = "sha256:357d6c9e6a755653cfd78893817c0853af365dd51ec97f3d358a819373bbd174"},
]
platformdirs = [
{file = "platformdirs-2.5.2-py3-none-any.whl", hash = "sha256:027d8e83a2d7de06bbac4e5ef7e023c02b863d7ea5d079477e722bb41ab25788"},
{file = "platformdirs-2.5.2.tar.gz", hash = "sha256:58c8abb07dcb441e6ee4b11d8df0ac856038f944ab98b7be6b27b2a3c7feef19"},
]
pluggy = [
{file = "pluggy-1.0.0-py2.py3-none-any.whl", hash = "sha256:74134bbf457f031a36d68416e1509f34bd5ccc019f0bcc952c7b909d06b37bd3"},
{file = "pluggy-1.0.0.tar.gz", hash = "sha256:4224373bacce55f955a878bf9cfa763c1e360858e330072059e10bad68531159"},
]
plumbum = [
{file = "plumbum-1.7.2-py2.py3-none-any.whl", hash = "sha256:0bbf431e31da988405de2fb36c3226f09c0c9cdf69c8480f8997f4b94b7370a1"},
{file = "plumbum-1.7.2.tar.gz", hash = "sha256:0d1bf908076bbd0484d16412479cb97d6843069ee19f99e267e11dd980040523"},
]
ply = [
{file = "ply-3.11-py2.py3-none-any.whl", hash = "sha256:096f9b8350b65ebd2fd1346b12452efe5b9607f7482813ffca50c22722a807ce"},
{file = "ply-3.11.tar.gz", hash = "sha256:00c7c1aaa88358b9c765b6d3000c6eec0ba42abca5351b095321aef446081da3"},
]
poethepoet = [
{file = "poethepoet-0.16.0-py3-none-any.whl", hash = "sha256:87482ea8bba4e5db4abbd8e6360baee73b2ce0f3d5f5e99e81cdfa39d72d118f"},
{file = "poethepoet-0.16.0.tar.gz", hash = "sha256:6455aec39f198be92dbf210a4416e1635119e967204c092b431c8b10024db1d1"},
]
progressbar2 = [
{file = "progressbar2-4.0.0-py2.py3-none-any.whl", hash = "sha256:2562ba3e554433f08e81fb7b786208b19de135f3ca1c5da1787d9b05558e6247"},
{file = "progressbar2-4.0.0.tar.gz", hash = "sha256:14d3165a1781d053ffaa117daf27cc706128d2ec1d2977fdb05b6bb079888013"},
]
prometheus-client = [
{file = "prometheus_client-0.14.1-py3-none-any.whl", hash = "sha256:522fded625282822a89e2773452f42df14b5a8e84a86433e3f8a189c1d54dc01"},
{file = "prometheus_client-0.14.1.tar.gz", hash = "sha256:5459c427624961076277fdc6dc50540e2bacb98eebde99886e59ec55ed92093a"},
]
prompt-toolkit = [
{file = "prompt_toolkit-3.0.30-py3-none-any.whl", hash = "sha256:d8916d3f62a7b67ab353a952ce4ced6a1d2587dfe9ef8ebc30dd7c386751f289"},
{file = "prompt_toolkit-3.0.30.tar.gz", hash = "sha256:859b283c50bde45f5f97829f77a4674d1c1fcd88539364f1b28a37805cfd89c0"},
]
protobuf = [
{file = "protobuf-3.19.4-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:f51d5a9f137f7a2cec2d326a74b6e3fc79d635d69ffe1b036d39fc7d75430d37"},
{file = "protobuf-3.19.4-cp310-cp310-manylinux2014_aarch64.whl", hash = "sha256:09297b7972da685ce269ec52af761743714996b4381c085205914c41fcab59fb"},
{file = "protobuf-3.19.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:072fbc78d705d3edc7ccac58a62c4c8e0cec856987da7df8aca86e647be4e35c"},
{file = "protobuf-3.19.4-cp310-cp310-win32.whl", hash = "sha256:7bb03bc2873a2842e5ebb4801f5c7ff1bfbdf426f85d0172f7644fcda0671ae0"},
{file = "protobuf-3.19.4-cp310-cp310-win_amd64.whl", hash = "sha256:f358aa33e03b7a84e0d91270a4d4d8f5df6921abe99a377828839e8ed0c04e07"},
{file = "protobuf-3.19.4-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:1c91ef4110fdd2c590effb5dca8fdbdcb3bf563eece99287019c4204f53d81a4"},
{file = "protobuf-3.19.4-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c438268eebb8cf039552897d78f402d734a404f1360592fef55297285f7f953f"},
{file = "protobuf-3.19.4-cp36-cp36m-win32.whl", hash = "sha256:835a9c949dc193953c319603b2961c5c8f4327957fe23d914ca80d982665e8ee"},
{file = "protobuf-3.19.4-cp36-cp36m-win_amd64.whl", hash = "sha256:4276cdec4447bd5015453e41bdc0c0c1234eda08420b7c9a18b8d647add51e4b"},
{file = "protobuf-3.19.4-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:6cbc312be5e71869d9d5ea25147cdf652a6781cf4d906497ca7690b7b9b5df13"},
{file = "protobuf-3.19.4-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:54a1473077f3b616779ce31f477351a45b4fef8c9fd7892d6d87e287a38df368"},
{file = "protobuf-3.19.4-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:435bb78b37fc386f9275a7035fe4fb1364484e38980d0dd91bc834a02c5ec909"},
{file = "protobuf-3.19.4-cp37-cp37m-win32.whl", hash = "sha256:16f519de1313f1b7139ad70772e7db515b1420d208cb16c6d7858ea989fc64a9"},
{file = "protobuf-3.19.4-cp37-cp37m-win_amd64.whl", hash = "sha256:cdc076c03381f5c1d9bb1abdcc5503d9ca8b53cf0a9d31a9f6754ec9e6c8af0f"},
{file = "protobuf-3.19.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:69da7d39e39942bd52848438462674c463e23963a1fdaa84d88df7fbd7e749b2"},
{file = "protobuf-3.19.4-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:48ed3877fa43e22bcacc852ca76d4775741f9709dd9575881a373bd3e85e54b2"},
{file = "protobuf-3.19.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bd95d1dfb9c4f4563e6093a9aa19d9c186bf98fa54da5252531cc0d3a07977e7"},
{file = "protobuf-3.19.4-cp38-cp38-win32.whl", hash = "sha256:b38057450a0c566cbd04890a40edf916db890f2818e8682221611d78dc32ae26"},
{file = "protobuf-3.19.4-cp38-cp38-win_amd64.whl", hash = "sha256:7ca7da9c339ca8890d66958f5462beabd611eca6c958691a8fe6eccbd1eb0c6e"},
{file = "protobuf-3.19.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:36cecbabbda242915529b8ff364f2263cd4de7c46bbe361418b5ed859677ba58"},
{file = "protobuf-3.19.4-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:c1068287025f8ea025103e37d62ffd63fec8e9e636246b89c341aeda8a67c934"},
{file = "protobuf-3.19.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:96bd766831596d6014ca88d86dc8fe0fb2e428c0b02432fd9db3943202bf8c5e"},
{file = "protobuf-3.19.4-cp39-cp39-win32.whl", hash = "sha256:84123274d982b9e248a143dadd1b9815049f4477dc783bf84efe6250eb4b836a"},
{file = "protobuf-3.19.4-cp39-cp39-win_amd64.whl", hash = "sha256:3112b58aac3bac9c8be2b60a9daf6b558ca3f7681c130dcdd788ade7c9ffbdca"},
{file = "protobuf-3.19.4-py2.py3-none-any.whl", hash = "sha256:8961c3a78ebfcd000920c9060a262f082f29838682b1f7201889300c1fbe0616"},
{file = "protobuf-3.19.4.tar.gz", hash = "sha256:9df0c10adf3e83015ced42a9a7bd64e13d06c4cf45c340d2c63020ea04499d0a"},
]
psutil = [
{file = "psutil-5.9.1-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:799759d809c31aab5fe4579e50addf84565e71c1dc9f1c31258f159ff70d3f87"},
{file = "psutil-5.9.1-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:9272167b5f5fbfe16945be3db475b3ce8d792386907e673a209da686176552af"},
{file = "psutil-5.9.1-cp27-cp27m-win32.whl", hash = "sha256:0904727e0b0a038830b019551cf3204dd48ef5c6868adc776e06e93d615fc5fc"},
{file = "psutil-5.9.1-cp27-cp27m-win_amd64.whl", hash = "sha256:e7e10454cb1ab62cc6ce776e1c135a64045a11ec4c6d254d3f7689c16eb3efd2"},
{file = "psutil-5.9.1-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:56960b9e8edcca1456f8c86a196f0c3d8e3e361320071c93378d41445ffd28b0"},
{file = "psutil-5.9.1-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:44d1826150d49ffd62035785a9e2c56afcea66e55b43b8b630d7706276e87f22"},
{file = "psutil-5.9.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:c7be9d7f5b0d206f0bbc3794b8e16fb7dbc53ec9e40bbe8787c6f2d38efcf6c9"},
{file = "psutil-5.9.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:abd9246e4cdd5b554a2ddd97c157e292ac11ef3e7af25ac56b08b455c829dca8"},
{file = "psutil-5.9.1-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:29a442e25fab1f4d05e2655bb1b8ab6887981838d22effa2396d584b740194de"},
{file = "psutil-5.9.1-cp310-cp310-win32.whl", hash = "sha256:20b27771b077dcaa0de1de3ad52d22538fe101f9946d6dc7869e6f694f079329"},
{file = "psutil-5.9.1-cp310-cp310-win_amd64.whl", hash = "sha256:58678bbadae12e0db55186dc58f2888839228ac9f41cc7848853539b70490021"},
{file = "psutil-5.9.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:3a76ad658641172d9c6e593de6fe248ddde825b5866464c3b2ee26c35da9d237"},
{file = "psutil-5.9.1-cp36-cp36m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a6a11e48cb93a5fa606306493f439b4aa7c56cb03fc9ace7f6bfa21aaf07c453"},
{file = "psutil-5.9.1-cp36-cp36m-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:068935df39055bf27a29824b95c801c7a5130f118b806eee663cad28dca97685"},
{file = "psutil-5.9.1-cp36-cp36m-win32.whl", hash = "sha256:0f15a19a05f39a09327345bc279c1ba4a8cfb0172cc0d3c7f7d16c813b2e7d36"},
{file = "psutil-5.9.1-cp36-cp36m-win_amd64.whl", hash = "sha256:db417f0865f90bdc07fa30e1aadc69b6f4cad7f86324b02aa842034efe8d8c4d"},
{file = "psutil-5.9.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:91c7ff2a40c373d0cc9121d54bc5f31c4fa09c346528e6a08d1845bce5771ffc"},
{file = "psutil-5.9.1-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fea896b54f3a4ae6f790ac1d017101252c93f6fe075d0e7571543510f11d2676"},
{file = "psutil-5.9.1-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3054e923204b8e9c23a55b23b6df73a8089ae1d075cb0bf711d3e9da1724ded4"},
{file = "psutil-5.9.1-cp37-cp37m-win32.whl", hash = "sha256:d2d006286fbcb60f0b391741f520862e9b69f4019b4d738a2a45728c7e952f1b"},
{file = "psutil-5.9.1-cp37-cp37m-win_amd64.whl", hash = "sha256:b14ee12da9338f5e5b3a3ef7ca58b3cba30f5b66f7662159762932e6d0b8f680"},
{file = "psutil-5.9.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:19f36c16012ba9cfc742604df189f2f28d2720e23ff7d1e81602dbe066be9fd1"},
{file = "psutil-5.9.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:944c4b4b82dc4a1b805329c980f270f170fdc9945464223f2ec8e57563139cf4"},
{file = "psutil-5.9.1-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4b6750a73a9c4a4e689490ccb862d53c7b976a2a35c4e1846d049dcc3f17d83b"},
{file = "psutil-5.9.1-cp38-cp38-win32.whl", hash = "sha256:a8746bfe4e8f659528c5c7e9af5090c5a7d252f32b2e859c584ef7d8efb1e689"},
{file = "psutil-5.9.1-cp38-cp38-win_amd64.whl", hash = "sha256:79c9108d9aa7fa6fba6e668b61b82facc067a6b81517cab34d07a84aa89f3df0"},
{file = "psutil-5.9.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:28976df6c64ddd6320d281128817f32c29b539a52bdae5e192537bc338a9ec81"},
{file = "psutil-5.9.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b88f75005586131276634027f4219d06e0561292be8bd6bc7f2f00bdabd63c4e"},
{file = "psutil-5.9.1-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:645bd4f7bb5b8633803e0b6746ff1628724668681a434482546887d22c7a9537"},
{file = "psutil-5.9.1-cp39-cp39-win32.whl", hash = "sha256:32c52611756096ae91f5d1499fe6c53b86f4a9ada147ee42db4991ba1520e574"},
{file = "psutil-5.9.1-cp39-cp39-win_amd64.whl", hash = "sha256:f65f9a46d984b8cd9b3750c2bdb419b2996895b005aefa6cbaba9a143b1ce2c5"},
{file = "psutil-5.9.1.tar.gz", hash = "sha256:57f1819b5d9e95cdfb0c881a8a5b7d542ed0b7c522d575706a80bedc848c8954"},
]
ptyprocess = [
{file = "ptyprocess-0.7.0-py2.py3-none-any.whl", hash = "sha256:4b41f3967fce3af57cc7e94b888626c18bf37a083e3651ca8feeb66d492fef35"},
{file = "ptyprocess-0.7.0.tar.gz", hash = "sha256:5c5d0a3b48ceee0b48485e0c26037c0acd7d29765ca3fbb5cb3831d347423220"},
]
pure-eval = [
{file = "pure_eval-0.2.2-py3-none-any.whl", hash = "sha256:01eaab343580944bc56080ebe0a674b39ec44a945e6d09ba7db3cb8cec289350"},
{file = "pure_eval-0.2.2.tar.gz", hash = "sha256:2b45320af6dfaa1750f543d714b6d1c520a1688dec6fd24d339063ce0aaa9ac3"},
]
py = [
{file = "py-1.11.0-py2.py3-none-any.whl", hash = "sha256:607c53218732647dff4acdfcd50cb62615cedf612e72d1724fb1a0cc6405b378"},
{file = "py-1.11.0.tar.gz", hash = "sha256:51c75c4126074b472f746a24399ad32f6053d1b34b68d2fa41e558e6f4a98719"},
]
pyasn1 = [
{file = "pyasn1-0.4.8-py2.4.egg", hash = "sha256:fec3e9d8e36808a28efb59b489e4528c10ad0f480e57dcc32b4de5c9d8c9fdf3"},
{file = "pyasn1-0.4.8-py2.5.egg", hash = "sha256:0458773cfe65b153891ac249bcf1b5f8f320b7c2ce462151f8fa74de8934becf"},
{file = "pyasn1-0.4.8-py2.6.egg", hash = "sha256:5c9414dcfede6e441f7e8f81b43b34e834731003427e5b09e4e00e3172a10f00"},
{file = "pyasn1-0.4.8-py2.7.egg", hash = "sha256:6e7545f1a61025a4e58bb336952c5061697da694db1cae97b116e9c46abcf7c8"},
{file = "pyasn1-0.4.8-py2.py3-none-any.whl", hash = "sha256:39c7e2ec30515947ff4e87fb6f456dfc6e84857d34be479c9d4a4ba4bf46aa5d"},
{file = "pyasn1-0.4.8-py3.1.egg", hash = "sha256:78fa6da68ed2727915c4767bb386ab32cdba863caa7dbe473eaae45f9959da86"},
{file = "pyasn1-0.4.8-py3.2.egg", hash = "sha256:08c3c53b75eaa48d71cf8c710312316392ed40899cb34710d092e96745a358b7"},
{file = "pyasn1-0.4.8-py3.3.egg", hash = "sha256:03840c999ba71680a131cfaee6fab142e1ed9bbd9c693e285cc6aca0d555e576"},
{file = "pyasn1-0.4.8-py3.4.egg", hash = "sha256:7ab8a544af125fb704feadb008c99a88805126fb525280b2270bb25cc1d78a12"},
{file = "pyasn1-0.4.8-py3.5.egg", hash = "sha256:e89bf84b5437b532b0803ba5c9a5e054d21fec423a89952a74f87fa2c9b7bce2"},
{file = "pyasn1-0.4.8-py3.6.egg", hash = "sha256:014c0e9976956a08139dc0712ae195324a75e142284d5f87f1a87ee1b068a359"},
{file = "pyasn1-0.4.8-py3.7.egg", hash = "sha256:99fcc3c8d804d1bc6d9a099921e39d827026409a58f2a720dcdb89374ea0c776"},
{file = "pyasn1-0.4.8.tar.gz", hash = "sha256:aef77c9fb94a3ac588e87841208bdec464471d9871bd5050a287cc9a475cd0ba"},
]
pyasn1-modules = [
{file = "pyasn1-modules-0.2.8.tar.gz", hash = "sha256:905f84c712230b2c592c19470d3ca8d552de726050d1d1716282a1f6146be65e"},
{file = "pyasn1_modules-0.2.8-py2.4.egg", hash = "sha256:0fe1b68d1e486a1ed5473f1302bd991c1611d319bba158e98b106ff86e1d7199"},
{file = "pyasn1_modules-0.2.8-py2.5.egg", hash = "sha256:fe0644d9ab041506b62782e92b06b8c68cca799e1a9636ec398675459e031405"},
{file = "pyasn1_modules-0.2.8-py2.6.egg", hash = "sha256:a99324196732f53093a84c4369c996713eb8c89d360a496b599fb1a9c47fc3eb"},
{file = "pyasn1_modules-0.2.8-py2.7.egg", hash = "sha256:0845a5582f6a02bb3e1bde9ecfc4bfcae6ec3210dd270522fee602365430c3f8"},
{file = "pyasn1_modules-0.2.8-py2.py3-none-any.whl", hash = "sha256:a50b808ffeb97cb3601dd25981f6b016cbb3d31fbf57a8b8a87428e6158d0c74"},
{file = "pyasn1_modules-0.2.8-py3.1.egg", hash = "sha256:f39edd8c4ecaa4556e989147ebf219227e2cd2e8a43c7e7fcb1f1c18c5fd6a3d"},
{file = "pyasn1_modules-0.2.8-py3.2.egg", hash = "sha256:b80486a6c77252ea3a3e9b1e360bc9cf28eaac41263d173c032581ad2f20fe45"},
{file = "pyasn1_modules-0.2.8-py3.3.egg", hash = "sha256:65cebbaffc913f4fe9e4808735c95ea22d7a7775646ab690518c056784bc21b4"},
{file = "pyasn1_modules-0.2.8-py3.4.egg", hash = "sha256:15b7c67fabc7fc240d87fb9aabf999cf82311a6d6fb2c70d00d3d0604878c811"},
{file = "pyasn1_modules-0.2.8-py3.5.egg", hash = "sha256:426edb7a5e8879f1ec54a1864f16b882c2837bfd06eee62f2c982315ee2473ed"},
{file = "pyasn1_modules-0.2.8-py3.6.egg", hash = "sha256:cbac4bc38d117f2a49aeedec4407d23e8866ea4ac27ff2cf7fb3e5b570df19e0"},
{file = "pyasn1_modules-0.2.8-py3.7.egg", hash = "sha256:c29a5e5cc7a3f05926aff34e097e84f8589cd790ce0ed41b67aed6857b26aafd"},
]
pycodestyle = [
{file = "pycodestyle-2.8.0-py2.py3-none-any.whl", hash = "sha256:720f8b39dde8b293825e7ff02c475f3077124006db4f440dcbc9a20b76548a20"},
{file = "pycodestyle-2.8.0.tar.gz", hash = "sha256:eddd5847ef438ea1c7870ca7eb78a9d47ce0cdb4851a5523949f2601d0cbbe7f"},
]
pycparser = [
{file = "pycparser-2.21-py2.py3-none-any.whl", hash = "sha256:8ee45429555515e1f6b185e78100aea234072576aa43ab53aefcae078162fca9"},
{file = "pycparser-2.21.tar.gz", hash = "sha256:e644fdec12f7872f86c58ff790da456218b10f863970249516d60a5eaca77206"},
]
pydata-sphinx-theme = [
{file = "pydata_sphinx_theme-0.9.0-py3-none-any.whl", hash = "sha256:b22b442a6d6437e5eaf0a1f057169ffcb31eaa9f10be7d5481a125e735c71c12"},
{file = "pydata_sphinx_theme-0.9.0.tar.gz", hash = "sha256:03598a86915b596f4bf80bef79a4d33276a83e670bf360def699dbb9f99dc57a"},
]
pydot = [
{file = "pydot-1.4.2-py2.py3-none-any.whl", hash = "sha256:66c98190c65b8d2e2382a441b4c0edfdb4f4c025ef9cb9874de478fb0793a451"},
{file = "pydot-1.4.2.tar.gz", hash = "sha256:248081a39bcb56784deb018977e428605c1c758f10897a339fce1dd728ff007d"},
]
pydotplus = [
{file = "pydotplus-2.0.2.tar.gz", hash = "sha256:91e85e9ee9b85d2391ead7d635e3d9c7f5f44fd60a60e59b13e2403fa66505c4"},
]
pyflakes = [
{file = "pyflakes-2.4.0-py2.py3-none-any.whl", hash = "sha256:3bb3a3f256f4b7968c9c788781e4ff07dce46bdf12339dcda61053375426ee2e"},
{file = "pyflakes-2.4.0.tar.gz", hash = "sha256:05a85c2872edf37a4ed30b0cce2f6093e1d0581f8c19d7393122da7e25b2b24c"},
]
pygam = [
{file = "pygam-0.8.0-py2.py3-none-any.whl", hash = "sha256:198bd478700520b7c399cc4bcbc011e46850969c32fb09ef0b7a4bbb14e842a5"},
{file = "pygam-0.8.0.tar.gz", hash = "sha256:5cae01aea8b2fede72a6da0aba1490213af54b3476745666af26bbe700479166"},
]
pygments = [
{file = "Pygments-2.13.0-py3-none-any.whl", hash = "sha256:f643f331ab57ba3c9d89212ee4a2dabc6e94f117cf4eefde99a0574720d14c42"},
{file = "Pygments-2.13.0.tar.gz", hash = "sha256:56a8508ae95f98e2b9bdf93a6be5ae3f7d8af858b43e02c5a2ff083726be40c1"},
]
pygraphviz = [
{file = "pygraphviz-1.10.zip", hash = "sha256:457e093a888128903251a266a8cc16b4ba93f3f6334b3ebfed92c7471a74d867"},
]
pyparsing = [
{file = "pyparsing-3.0.9-py3-none-any.whl", hash = "sha256:5026bae9a10eeaefb61dab2f09052b9f4307d44aee4eda64b309723d8d206bbc"},
{file = "pyparsing-3.0.9.tar.gz", hash = "sha256:2b020ecf7d21b687f219b71ecad3631f644a47f01403fa1d1036b0c6416d70fb"},
]
pyro-api = [
{file = "pyro-api-0.1.2.tar.gz", hash = "sha256:a1b900d9580aa1c2fab3b123ab7ff33413744da7c5f440bd4aadc4d40d14d920"},
{file = "pyro_api-0.1.2-py3-none-any.whl", hash = "sha256:10e0e42e9e4401ce464dab79c870e50dfb4f413d326fa777f3582928ef9caf8f"},
]
pyro-ppl = [
{file = "pyro-ppl-1.8.1.tar.gz", hash = "sha256:d7c049eb2e7485a612b4dd99c24c309cc860c7cbc6b1973387034f5436d1c8d6"},
{file = "pyro_ppl-1.8.1-py3-none-any.whl", hash = "sha256:ca01ab4565eb9a1af4a60dbc481da5cb6f5fe5a72efa19e83638e03683efbca6"},
]
pyrsistent = [
{file = "pyrsistent-0.18.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:df46c854f490f81210870e509818b729db4488e1f30f2a1ce1698b2295a878d1"},
{file = "pyrsistent-0.18.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d45866ececf4a5fff8742c25722da6d4c9e180daa7b405dc0a2a2790d668c26"},
{file = "pyrsistent-0.18.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4ed6784ceac462a7d6fcb7e9b663e93b9a6fb373b7f43594f9ff68875788e01e"},
{file = "pyrsistent-0.18.1-cp310-cp310-win32.whl", hash = "sha256:e4f3149fd5eb9b285d6bfb54d2e5173f6a116fe19172686797c056672689daf6"},
{file = "pyrsistent-0.18.1-cp310-cp310-win_amd64.whl", hash = "sha256:636ce2dc235046ccd3d8c56a7ad54e99d5c1cd0ef07d9ae847306c91d11b5fec"},
{file = "pyrsistent-0.18.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:e92a52c166426efbe0d1ec1332ee9119b6d32fc1f0bbfd55d5c1088070e7fc1b"},
{file = "pyrsistent-0.18.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d7a096646eab884bf8bed965bad63ea327e0d0c38989fc83c5ea7b8a87037bfc"},
{file = "pyrsistent-0.18.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cdfd2c361b8a8e5d9499b9082b501c452ade8bbf42aef97ea04854f4a3f43b22"},
{file = "pyrsistent-0.18.1-cp37-cp37m-win32.whl", hash = "sha256:7ec335fc998faa4febe75cc5268a9eac0478b3f681602c1f27befaf2a1abe1d8"},
{file = "pyrsistent-0.18.1-cp37-cp37m-win_amd64.whl", hash = "sha256:6455fc599df93d1f60e1c5c4fe471499f08d190d57eca040c0ea182301321286"},
{file = "pyrsistent-0.18.1-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:fd8da6d0124efa2f67d86fa70c851022f87c98e205f0594e1fae044e7119a5a6"},
{file = "pyrsistent-0.18.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7bfe2388663fd18bd8ce7db2c91c7400bf3e1a9e8bd7d63bf7e77d39051b85ec"},
{file = "pyrsistent-0.18.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0e3e1fcc45199df76053026a51cc59ab2ea3fc7c094c6627e93b7b44cdae2c8c"},
{file = "pyrsistent-0.18.1-cp38-cp38-win32.whl", hash = "sha256:b568f35ad53a7b07ed9b1b2bae09eb15cdd671a5ba5d2c66caee40dbf91c68ca"},
{file = "pyrsistent-0.18.1-cp38-cp38-win_amd64.whl", hash = "sha256:d1b96547410f76078eaf66d282ddca2e4baae8964364abb4f4dcdde855cd123a"},
{file = "pyrsistent-0.18.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:f87cc2863ef33c709e237d4b5f4502a62a00fab450c9e020892e8e2ede5847f5"},
{file = "pyrsistent-0.18.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6bc66318fb7ee012071b2792024564973ecc80e9522842eb4e17743604b5e045"},
{file = "pyrsistent-0.18.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:914474c9f1d93080338ace89cb2acee74f4f666fb0424896fcfb8d86058bf17c"},
{file = "pyrsistent-0.18.1-cp39-cp39-win32.whl", hash = "sha256:1b34eedd6812bf4d33814fca1b66005805d3640ce53140ab8bbb1e2651b0d9bc"},
{file = "pyrsistent-0.18.1-cp39-cp39-win_amd64.whl", hash = "sha256:e24a828f57e0c337c8d8bb9f6b12f09dfdf0273da25fda9e314f0b684b415a07"},
{file = "pyrsistent-0.18.1.tar.gz", hash = "sha256:d4d61f8b993a7255ba714df3aca52700f8125289f84f704cf80916517c46eb96"},
]
pytest = [
{file = "pytest-7.1.2-py3-none-any.whl", hash = "sha256:13d0e3ccfc2b6e26be000cb6568c832ba67ba32e719443bfe725814d3c42433c"},
{file = "pytest-7.1.2.tar.gz", hash = "sha256:a06a0425453864a270bc45e71f783330a7428defb4230fb5e6a731fde06ecd45"},
]
python-dateutil = [
{file = "python-dateutil-2.8.2.tar.gz", hash = "sha256:0123cacc1627ae19ddf3c27a5de5bd67ee4586fbdd6440d9748f8abb483d3e86"},
{file = "python_dateutil-2.8.2-py2.py3-none-any.whl", hash = "sha256:961d03dc3453ebbc59dbdea9e4e11c5651520a876d0f4db161e8674aae935da9"},
]
python-utils = [
{file = "python-utils-3.3.3.tar.gz", hash = "sha256:3b1c8b706e40e91280eec5fa72ea730880a166cee99afa5555b863d55664478d"},
{file = "python_utils-3.3.3-py2.py3-none-any.whl", hash = "sha256:5cb9cf295018202fb4d6abdd694a33a7f08bc0ce1bf1eab8cce80ab2cde35c07"},
]
pytz = [
{file = "pytz-2022.2.1-py2.py3-none-any.whl", hash = "sha256:220f481bdafa09c3955dfbdddb7b57780e9a94f5127e35456a48589b9e0c0197"},
{file = "pytz-2022.2.1.tar.gz", hash = "sha256:cea221417204f2d1a2aa03ddae3e867921971d0d76f14d87abb4414415bbdcf5"},
]
pytz-deprecation-shim = [
{file = "pytz_deprecation_shim-0.1.0.post0-py2.py3-none-any.whl", hash = "sha256:8314c9692a636c8eb3bda879b9f119e350e93223ae83e70e80c31675a0fdc1a6"},
{file = "pytz_deprecation_shim-0.1.0.post0.tar.gz", hash = "sha256:af097bae1b616dde5c5744441e2ddc69e74dfdcb0c263129610d85b87445a59d"},
]
pywin32 = [
{file = "pywin32-304-cp310-cp310-win32.whl", hash = "sha256:3c7bacf5e24298c86314f03fa20e16558a4e4138fc34615d7de4070c23e65af3"},
{file = "pywin32-304-cp310-cp310-win_amd64.whl", hash = "sha256:4f32145913a2447736dad62495199a8e280a77a0ca662daa2332acf849f0be48"},
{file = "pywin32-304-cp310-cp310-win_arm64.whl", hash = "sha256:d3ee45adff48e0551d1aa60d2ec066fec006083b791f5c3527c40cd8aefac71f"},
{file = "pywin32-304-cp311-cp311-win32.whl", hash = "sha256:30c53d6ce44c12a316a06c153ea74152d3b1342610f1b99d40ba2795e5af0269"},
{file = "pywin32-304-cp311-cp311-win_amd64.whl", hash = "sha256:7ffa0c0fa4ae4077e8b8aa73800540ef8c24530057768c3ac57c609f99a14fd4"},
{file = "pywin32-304-cp311-cp311-win_arm64.whl", hash = "sha256:cbbe34dad39bdbaa2889a424d28752f1b4971939b14b1bb48cbf0182a3bcfc43"},
{file = "pywin32-304-cp36-cp36m-win32.whl", hash = "sha256:be253e7b14bc601718f014d2832e4c18a5b023cbe72db826da63df76b77507a1"},
{file = "pywin32-304-cp36-cp36m-win_amd64.whl", hash = "sha256:de9827c23321dcf43d2f288f09f3b6d772fee11e809015bdae9e69fe13213988"},
{file = "pywin32-304-cp37-cp37m-win32.whl", hash = "sha256:f64c0377cf01b61bd5e76c25e1480ca8ab3b73f0c4add50538d332afdf8f69c5"},
{file = "pywin32-304-cp37-cp37m-win_amd64.whl", hash = "sha256:bb2ea2aa81e96eee6a6b79d87e1d1648d3f8b87f9a64499e0b92b30d141e76df"},
{file = "pywin32-304-cp38-cp38-win32.whl", hash = "sha256:94037b5259701988954931333aafd39cf897e990852115656b014ce72e052e96"},
{file = "pywin32-304-cp38-cp38-win_amd64.whl", hash = "sha256:ead865a2e179b30fb717831f73cf4373401fc62fbc3455a0889a7ddac848f83e"},
{file = "pywin32-304-cp39-cp39-win32.whl", hash = "sha256:25746d841201fd9f96b648a248f731c1dec851c9a08b8e33da8b56148e4c65cc"},
{file = "pywin32-304-cp39-cp39-win_amd64.whl", hash = "sha256:d24a3382f013b21aa24a5cfbfad5a2cd9926610c0affde3e8ab5b3d7dbcf4ac9"},
]
pywin32-ctypes = [
{file = "pywin32-ctypes-0.2.0.tar.gz", hash = "sha256:24ffc3b341d457d48e8922352130cf2644024a4ff09762a2261fd34c36ee5942"},
{file = "pywin32_ctypes-0.2.0-py2.py3-none-any.whl", hash = "sha256:9dc2d991b3479cc2df15930958b674a48a227d5361d413827a4cfd0b5876fc98"},
]
pywinpty = [
{file = "pywinpty-2.0.7-cp310-none-win_amd64.whl", hash = "sha256:d56361ed2bd3395347882a7a4e6246359e745a233e89c91786ab3d9421323c17"},
{file = "pywinpty-2.0.7-cp37-none-win_amd64.whl", hash = "sha256:2d62ede3ed10feb0901b3b4667201766a741b6a2c69f27be623ba9fe9348447b"},
{file = "pywinpty-2.0.7-cp38-none-win_amd64.whl", hash = "sha256:c3b7e6a2f0e5f86e0dc5cb5e4fec7de19adacc6900232e4a48a2ecf04bae447f"},
{file = "pywinpty-2.0.7-cp39-none-win_amd64.whl", hash = "sha256:80a6713a586401c2a19efd2969ffd019eb85f18442611a3880e3d618887d2f84"},
{file = "pywinpty-2.0.7.tar.gz", hash = "sha256:f52b2e51c46dac40708ede1d42577f3ddb9d7cf8acaa36c8e27b3d3b975f4c95"},
]
pyzmq = [
{file = "pyzmq-23.2.1-cp310-cp310-macosx_10_15_universal2.whl", hash = "sha256:a3fd44b5046d247e7f0f1660bcafe7b5fb0db55d0934c05dd57dda9e1f823ce7"},
{file = "pyzmq-23.2.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:2141e6798d5981be04c08996d27962086a1aa3ea536fe9cf7e89817fd4523f86"},
{file = "pyzmq-23.2.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9a39ddb0431a68954bd318b923230fa5b649c9c62b0e8340388820c5f1b15bd2"},
{file = "pyzmq-23.2.1-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e06747014a5ad1b28cebf5bc1ddcdaccfb44e9b441d35e6feb1286c8a72e54be"},
{file = "pyzmq-23.2.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7e0113d70b095339e99bb522fe7294f5ae6a7f3b2b8f52f659469a74b5cc7661"},
{file = "pyzmq-23.2.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:71b32a1e827bdcbf73750e60370d3b07685816ff3d8695f450f0f8c3226503f8"},
{file = "pyzmq-23.2.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:55568a020ad2cae9ae36da6058e7ca332a56df968f601cbdb7cf6efb2a77579a"},
{file = "pyzmq-23.2.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:8c02a0cd39dc01659b3d6cb70bb3a41aebd9885fd78239acdd8d9c91351c4568"},
{file = "pyzmq-23.2.1-cp310-cp310-win32.whl", hash = "sha256:e1fe30bcd5aea5948c42685fad910cd285eacb2518ea4dc6c170d6b535bee95d"},
{file = "pyzmq-23.2.1-cp310-cp310-win_amd64.whl", hash = "sha256:650389bbfca73955b262b2230423d89992f38ec48033307ae80e700eaa2fbb63"},
{file = "pyzmq-23.2.1-cp311-cp311-macosx_10_15_universal2.whl", hash = "sha256:e753eee6d3b93c5354e8ba0a1d62956ee49355f0a36e00570823ef64e66183f5"},
{file = "pyzmq-23.2.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:f07016e3cf088dbfc6e7c5a7b3f540db5c23b0190d539e4fd3e2b5e6beffa4b5"},
{file = "pyzmq-23.2.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4805af9614b0b41b7e57d17673459facf85604dac502a5a9244f6e8c9a4de658"},
{file = "pyzmq-23.2.1-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:39dd252b683816935702825e5bf775df16090619ced9bb4ba68c2d0b6f0c9b18"},
{file = "pyzmq-23.2.1-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:84678153432241bcdca2210cf4ff83560b200556867aea913ffbb960f5d5f340"},
{file = "pyzmq-23.2.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:90d88f9d9a2ae6cfb1dc4ea2d1710cdf6456bc1b9a06dd1bb485c5d298f2517e"},
{file = "pyzmq-23.2.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:794871988c34727c7f79bdfe2546e6854ae1fa2e1feb382784f23a9c6c63ecb3"},
{file = "pyzmq-23.2.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:c56b1a62a1fb87565343c57b6743fd5da6e138b8c6562361d7d9b5ce4acf399a"},
{file = "pyzmq-23.2.1-cp311-cp311-win32.whl", hash = "sha256:c3ebf1668664d20c8f7d468955f18379b7d1f7bc8946b13243d050fa3888c7ff"},
{file = "pyzmq-23.2.1-cp311-cp311-win_amd64.whl", hash = "sha256:ec9803aca9491fd6f0d853d2a6147f19f8deaaa23b1b713d05c5d09e56ea7142"},
{file = "pyzmq-23.2.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:385609812eafd9970c3752c51f2f6c4f224807e3e441bcfd8c8273877d00c8a8"},
{file = "pyzmq-23.2.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b861db65f6b8906c8d6db51dde2448f266f0c66bf28db2c37aea50f58a849859"},
{file = "pyzmq-23.2.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:6b1e79bba24f6df1712e3188d5c32c480d8eda03e8ecff44dc8ecb0805fa62f3"},
{file = "pyzmq-23.2.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:8dc66f109a245653b19df0f44a5af7a3f14cb8ad6c780ead506158a057bd36ce"},
{file = "pyzmq-23.2.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:b815991c7d024bf461f358ad871f2be1135576274caed5749c4828859e40354e"},
{file = "pyzmq-23.2.1-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:29b74774a0bfd3c4d98ac853f0bdca55bd9ec89d5b0def5486407cca54472ef8"},
{file = "pyzmq-23.2.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:4bb798bef181648827019001f6be43e1c48b34b477763b37a8d27d8c06d197b8"},
{file = "pyzmq-23.2.1-cp36-cp36m-win32.whl", hash = "sha256:565bd5ab81f6964fc4067ccf2e00877ad0fa917308975694bbb54378389215f8"},
{file = "pyzmq-23.2.1-cp36-cp36m-win_amd64.whl", hash = "sha256:1f368a82b29f80071781b20663c0fc0c8f6b13273f9f5abe1526af939534f90f"},
{file = "pyzmq-23.2.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:c9cfaf530e6a7ff65f0afe275e99f983f68b54dfb23ea401f0bc297a632766b6"},
{file = "pyzmq-23.2.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5c558b50402fca1acc94329c5d8f12aa429738904a5cfb32b9ed3c61235221bb"},
{file = "pyzmq-23.2.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:20bafc4095eab00f41a510579363a3f5e1f5c69d7ee10f1d88895c4df0259183"},
{file = "pyzmq-23.2.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:f619fd38fc2641abfb53cca719c165182500600b82c695cc548a0f05f764be05"},
{file = "pyzmq-23.2.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:044447ae4b2016a6b8697571fd633f799f860b19b76c4a2fd9b1140d52ee6745"},
{file = "pyzmq-23.2.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:49d30ba7074f469e8167917abf9eb854c6503ae10153034a6d4df33618f1db5f"},
{file = "pyzmq-23.2.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:48400b96788cdaca647021bf19a9cd668384f46e4d9c55cf045bdd17f65299c8"},
{file = "pyzmq-23.2.1-cp37-cp37m-win32.whl", hash = "sha256:8a68f57b7a3f7b6b52ada79876be1efb97c8c0952423436e84d70cc139f16f0d"},
{file = "pyzmq-23.2.1-cp37-cp37m-win_amd64.whl", hash = "sha256:9e5bf6e7239fc9687239de7a283aa8b801ab85371116045b33ae20132a1325d6"},
{file = "pyzmq-23.2.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:ffc6b1623d0f9affb351db4ca61f432dca3628a5ee015f9bf2bfbe9c6836881c"},
{file = "pyzmq-23.2.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:4d6f110c56f7d5b4d64dde3a382ae61b6d48174e30742859d8e971b18b6c9e5c"},
{file = "pyzmq-23.2.1-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:9269fbfe3a4eb2009199120861c4571ef1655fdf6951c3e7f233567c94e8c602"},
{file = "pyzmq-23.2.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:12e62ff0d5223ec09b597ab6d73858b9f64a51221399f3cb08aa495e1dff7935"},
{file = "pyzmq-23.2.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:6fd5d0d50cbcf4bc376861529a907bed026a4cbe8c22a500ff8243231ef02433"},
{file = "pyzmq-23.2.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:9d0ab2936085c85a1fc6f9fd8f89d5235ae99b051e90ec5baa5e73ad44346e1f"},
{file = "pyzmq-23.2.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:022cf5ea7bcaa8a06a03c2706e0ae66904b6138b2155577cd34c64bc7cc637ab"},
{file = "pyzmq-23.2.1-cp38-cp38-win32.whl", hash = "sha256:28dbdb90b2f6b131f8f10e6081012e4e25234213433420e67e0c1162de537113"},
{file = "pyzmq-23.2.1-cp38-cp38-win_amd64.whl", hash = "sha256:10d1910ec381b851aeb024a042a13db178cb1edf125e76a4e9d2548ad103aadb"},
{file = "pyzmq-23.2.1-cp39-cp39-macosx_10_15_universal2.whl", hash = "sha256:99a5a77a10863493a1ee8dece02578c6b32025fb3afff91b40476bc489e81648"},
{file = "pyzmq-23.2.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:aecd6ceaccc4b594e0092d6513ef3f1c0fa678dd89f86bb8ff1a47014b8fca35"},
{file = "pyzmq-23.2.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:415ff62ac525d9add1e3550430a09b9928d2d24a20cc4ce809e67caac41219ab"},
{file = "pyzmq-23.2.1-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:67975a9e1237b9ccc78f457bef17691bbdd2055a9d26e81ee914ba376846d0ce"},
{file = "pyzmq-23.2.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:38e106b64bad744fe469dc3dd864f2764d66399178c1bf39d45294cc7980f14f"},
{file = "pyzmq-23.2.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:8c842109d31a9281d678f668629241c405928afbebd913c48a5a8e7aee61f63d"},
{file = "pyzmq-23.2.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:fefdf9b685fda4141b95ebec975946076a5e0723ff70b037032b2085c5317684"},
{file = "pyzmq-23.2.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:79a87831b47a9f6161ad23fa5e89d5469dc585abc49f90b9b07fea8905ae1234"},
{file = "pyzmq-23.2.1-cp39-cp39-win32.whl", hash = "sha256:342ca3077f47ec2ee41b9825142b614e03e026347167cbc72a59b618c4f6106c"},
{file = "pyzmq-23.2.1-cp39-cp39-win_amd64.whl", hash = "sha256:5e05492be125dce279721d6b54fd1b956546ecc4bcdfcf8e7b4c413bc0874c10"},
{file = "pyzmq-23.2.1-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:07ed8aaf7ffe150af873269690cc654ffeca7491f62aae0f3821baa181f8d5fe"},
{file = "pyzmq-23.2.1-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:ad28ddb40db8e450d7d4bf8a1d765d3f87b63b10e7e9a825a3c130c6371a8c03"},
{file = "pyzmq-23.2.1-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:2f67b63f53c6994d601404fd1a329e6d940ac3dd1d92946a93b2b9c70df67b9f"},
{file = "pyzmq-23.2.1-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c890309296f53f9aa32ffcfc51d805705e1982bffd27c9692a8f1e1b8de279f4"},
{file = "pyzmq-23.2.1-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:624fd38071a817644acdae075b92a23ea0bdd126a58148288e8284d23ec361ce"},
{file = "pyzmq-23.2.1-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:a114992a193577cb62233abf8cb2832970f9975805a64740e325d2f895e7f85a"},
{file = "pyzmq-23.2.1-pp38-pypy38_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:c780acddd2934c6831ff832ecbf78a45a7b62d4eb216480f863854a8b7d54fa7"},
{file = "pyzmq-23.2.1-pp38-pypy38_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:d904f6595acfaaf99a1a61881fea068500c40374d263e5e073aa4005e5f9c28a"},
{file = "pyzmq-23.2.1-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:929d548b74c0f82f7f95b54e4a43f9e4ce2523cfb8a54d3f7141e45652304b2a"},
{file = "pyzmq-23.2.1-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:f392cbea531b7142d1958c0d4a0c9c8d760dc451e5848d8dd3387804d3e3e62c"},
{file = "pyzmq-23.2.1-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:a0f09d85c45f58aa8e715b42f8b26beba68b3b63a8f7049113478aca26efbc30"},
{file = "pyzmq-23.2.1-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:23e708fbfdf4ee3107422b69ca65da1b9f056b431fc0888096a8c1d6cd908e8f"},
{file = "pyzmq-23.2.1-pp39-pypy39_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:35e635343ff367f697d00fa1484262bb68e36bc74c9b80737eac5a1e04c4e1b1"},
{file = "pyzmq-23.2.1-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:efb9e38b2a590282704269585de7eb33bf43dc294cad092e1b172e23d4c217e5"},
{file = "pyzmq-23.2.1-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:407f909c4e8fde62fbdad9ebd448319792258cc0550c2815567a4d9d8d9e6d18"},
{file = "pyzmq-23.2.1.tar.gz", hash = "sha256:2b381aa867ece7d0a82f30a0c7f3d4387b7cf2e0697e33efaa5bed6c5784abcd"},
]
qtconsole = [
{file = "qtconsole-5.3.1-py3-none-any.whl", hash = "sha256:d364592d7ede3257f1e17fcdbfd339c26e2cc638ca4fa4ee56a724e26ea13c81"},
{file = "qtconsole-5.3.1.tar.gz", hash = "sha256:b73723fac43938b684dcb237a88510dc7721c43a726cea8ade179a2927c0a2f3"},
]
qtpy = [
{file = "QtPy-2.2.0-py3-none-any.whl", hash = "sha256:d283cfba378b0dbe36a55b68aea8ee2f86cd6ccf06c023af25bbe705ffbb29e5"},
{file = "QtPy-2.2.0.tar.gz", hash = "sha256:d85f1b121f24a41ad26c55c446e66abdb7c528839f8c4f11f156ec4541903914"},
]
readme-renderer = [
{file = "readme_renderer-36.0-py3-none-any.whl", hash = "sha256:2c37e472ca96755caba6cc58bcbf673a5574bc033385a2ac91d85dfef2799876"},
{file = "readme_renderer-36.0.tar.gz", hash = "sha256:f71aeef9a588fcbed1f4cc001ba611370e94a0cd27c75b1140537618ec78f0a2"},
]
requests = [
{file = "requests-2.28.1-py3-none-any.whl", hash = "sha256:8fefa2a1a1365bf5520aac41836fbee479da67864514bdb821f31ce07ce65349"},
{file = "requests-2.28.1.tar.gz", hash = "sha256:7c5599b102feddaa661c826c56ab4fee28bfd17f5abca1ebbe3e7f19d7c97983"},
]
requests-oauthlib = [
{file = "requests-oauthlib-1.3.1.tar.gz", hash = "sha256:75beac4a47881eeb94d5ea5d6ad31ef88856affe2332b9aafb52c6452ccf0d7a"},
{file = "requests_oauthlib-1.3.1-py2.py3-none-any.whl", hash = "sha256:2577c501a2fb8d05a304c09d090d6e47c306fef15809d102b327cf8364bddab5"},
]
requests-toolbelt = [
{file = "requests-toolbelt-0.9.1.tar.gz", hash = "sha256:968089d4584ad4ad7c171454f0a5c6dac23971e9472521ea3b6d49d610aa6fc0"},
{file = "requests_toolbelt-0.9.1-py2.py3-none-any.whl", hash = "sha256:380606e1d10dc85c3bd47bf5a6095f815ec007be7a8b69c878507068df059e6f"},
]
rfc3986 = [
{file = "rfc3986-2.0.0-py2.py3-none-any.whl", hash = "sha256:50b1502b60e289cb37883f3dfd34532b8873c7de9f49bb546641ce9cbd256ebd"},
{file = "rfc3986-2.0.0.tar.gz", hash = "sha256:97aacf9dbd4bfd829baad6e6309fa6573aaf1be3f6fa735c8ab05e46cecb261c"},
]
rich = [
{file = "rich-12.5.1-py3-none-any.whl", hash = "sha256:2eb4e6894cde1e017976d2975ac210ef515d7548bc595ba20e195fb9628acdeb"},
{file = "rich-12.5.1.tar.gz", hash = "sha256:63a5c5ce3673d3d5fbbf23cd87e11ab84b6b451436f1b7f19ec54b6bc36ed7ca"},
]
rpy2 = [
{file = "rpy2-3.5.3-cp310-cp310-macosx_10_15_x86_64.whl", hash = "sha256:9b553e92db1b6ffa51ea13f7b1b509816fdec2fc833ebe0b402ee81c8a56e491"},
{file = "rpy2-3.5.3-cp37-cp37m-macosx_10_15_x86_64.whl", hash = "sha256:9fb8f0914458c212dd2dd231ba0b7e849ca2b473f53da48553bc7e90976a9717"},
{file = "rpy2-3.5.3-cp38-cp38-macosx_10_15_x86_64.whl", hash = "sha256:b9c8275d7720c312c387d61a38afe22ef3165053ec707a5cb3a06b7ace5bb10c"},
{file = "rpy2-3.5.3-cp39-cp39-macosx_10_15_x86_64.whl", hash = "sha256:e1fda1489dd297fb8f44d33e6042e56a4517109b5606b2ac59bed9a2b7e29850"},
{file = "rpy2-3.5.3.tar.gz", hash = "sha256:53a092d48b44f46428fb30cb3155664d6d2f7af08ebc4c45df98df4c45a42ccb"},
]
rsa = [
{file = "rsa-4.9-py3-none-any.whl", hash = "sha256:90260d9058e514786967344d0ef75fa8727eed8a7d2e43ce9f4bcf1b536174f7"},
{file = "rsa-4.9.tar.gz", hash = "sha256:e38464a49c6c85d7f1351b0126661487a7e0a14a50f1675ec50eb34d4f20ef21"},
]
scikit-learn = [
{file = "scikit-learn-1.0.2.tar.gz", hash = "sha256:b5870959a5484b614f26d31ca4c17524b1b0317522199dc985c3b4256e030767"},
{file = "scikit_learn-1.0.2-cp310-cp310-macosx_10_13_x86_64.whl", hash = "sha256:da3c84694ff693b5b3194d8752ccf935a665b8b5edc33a283122f4273ca3e687"},
{file = "scikit_learn-1.0.2-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:75307d9ea39236cad7eea87143155eea24d48f93f3a2f9389c817f7019f00705"},
{file = "scikit_learn-1.0.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f14517e174bd7332f1cca2c959e704696a5e0ba246eb8763e6c24876d8710049"},
{file = "scikit_learn-1.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d9aac97e57c196206179f674f09bc6bffcd0284e2ba95b7fe0b402ac3f986023"},
{file = "scikit_learn-1.0.2-cp310-cp310-win_amd64.whl", hash = "sha256:d93d4c28370aea8a7cbf6015e8a669cd5d69f856cc2aa44e7a590fb805bb5583"},
{file = "scikit_learn-1.0.2-cp37-cp37m-macosx_10_13_x86_64.whl", hash = "sha256:85260fb430b795d806251dd3bb05e6f48cdc777ac31f2bcf2bc8bbed3270a8f5"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:a053a6a527c87c5c4fa7bf1ab2556fa16d8345cf99b6c5a19030a4a7cd8fd2c0"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:245c9b5a67445f6f044411e16a93a554edc1efdcce94d3fc0bc6a4b9ac30b752"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:158faf30684c92a78e12da19c73feff9641a928a8024b4fa5ec11d583f3d8a87"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:08ef968f6b72033c16c479c966bf37ccd49b06ea91b765e1cc27afefe723920b"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:16455ace947d8d9e5391435c2977178d0ff03a261571e67f627c8fee0f9d431a"},
{file = "scikit_learn-1.0.2-cp37-cp37m-win32.whl", hash = "sha256:2f3b453e0b149898577e301d27e098dfe1a36943f7bb0ad704d1e548efc3b448"},
{file = "scikit_learn-1.0.2-cp37-cp37m-win_amd64.whl", hash = "sha256:46f431ec59dead665e1370314dbebc99ead05e1c0a9df42f22d6a0e00044820f"},
{file = "scikit_learn-1.0.2-cp38-cp38-macosx_10_13_x86_64.whl", hash = "sha256:ff3fa8ea0e09e38677762afc6e14cad77b5e125b0ea70c9bba1992f02c93b028"},
{file = "scikit_learn-1.0.2-cp38-cp38-macosx_12_0_arm64.whl", hash = "sha256:9369b030e155f8188743eb4893ac17a27f81d28a884af460870c7c072f114243"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:7d6b2475f1c23a698b48515217eb26b45a6598c7b1840ba23b3c5acece658dbb"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:285db0352e635b9e3392b0b426bc48c3b485512d3b4ac3c7a44ec2a2ba061e66"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5cb33fe1dc6f73dc19e67b264dbb5dde2a0539b986435fdd78ed978c14654830"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b1391d1a6e2268485a63c3073111fe3ba6ec5145fc957481cfd0652be571226d"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bc3744dabc56b50bec73624aeca02e0def06b03cb287de26836e730659c5d29c"},
{file = "scikit_learn-1.0.2-cp38-cp38-win32.whl", hash = "sha256:a999c9f02ff9570c783069f1074f06fe7386ec65b84c983db5aeb8144356a355"},
{file = "scikit_learn-1.0.2-cp38-cp38-win_amd64.whl", hash = "sha256:7626a34eabbf370a638f32d1a3ad50526844ba58d63e3ab81ba91e2a7c6d037e"},
{file = "scikit_learn-1.0.2-cp39-cp39-macosx_10_13_x86_64.whl", hash = "sha256:a90b60048f9ffdd962d2ad2fb16367a87ac34d76e02550968719eb7b5716fd10"},
{file = "scikit_learn-1.0.2-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:7a93c1292799620df90348800d5ac06f3794c1316ca247525fa31169f6d25855"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:eabceab574f471de0b0eb3f2ecf2eee9f10b3106570481d007ed1c84ebf6d6a1"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:55f2f3a8414e14fbee03782f9fe16cca0f141d639d2b1c1a36779fa069e1db57"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:80095a1e4b93bd33261ef03b9bc86d6db649f988ea4dbcf7110d0cded8d7213d"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fa38a1b9b38ae1fad2863eff5e0d69608567453fdfc850c992e6e47eb764e846"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ff746a69ff2ef25f62b36338c615dd15954ddc3ab8e73530237dd73235e76d62"},
{file = "scikit_learn-1.0.2-cp39-cp39-win32.whl", hash = "sha256:e174242caecb11e4abf169342641778f68e1bfaba80cd18acd6bc84286b9a534"},
{file = "scikit_learn-1.0.2-cp39-cp39-win_amd64.whl", hash = "sha256:b54a62c6e318ddbfa7d22c383466d38d2ee770ebdb5ddb668d56a099f6eaf75f"},
]
scipy = [
{file = "scipy-1.9.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:0424d1bbbfa51d5ddaa16d067fd593863c9f2fb7c6840c32f8a08a8832f8e7a4"},
{file = "scipy-1.9.0-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:8f2232c9d9119ec356240255a715a289b3a33be828c3e4abac11fd052ce15b1e"},
{file = "scipy-1.9.0-cp310-cp310-macosx_12_0_universal2.macosx_10_9_x86_64.whl", hash = "sha256:e2004d2a3c397b26ca78e67c9d320153a1a9b71ae713ad33f4a3a3ab3d79cc65"},
{file = "scipy-1.9.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:45f0d6c0d6e55582d3b8f5c58ad4ca4259a02affb190f89f06c8cc02e21bba81"},
{file = "scipy-1.9.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:79dd7876614fc2869bf5d311ef33962d2066ea888bc66c80fd4fa80f8772e5a9"},
{file = "scipy-1.9.0-cp310-cp310-win_amd64.whl", hash = "sha256:10417935486b320d98536d732a58362e3d37e84add98c251e070c59a6bfe0863"},
{file = "scipy-1.9.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:adb6c438c6ef550e2bb83968e772b9690cb421f2c6073f9c2cb6af15ee538bc9"},
{file = "scipy-1.9.0-cp38-cp38-macosx_12_0_arm64.whl", hash = "sha256:8d541db2d441ef87afb60c4a2addb00c3af281633602a4967e733ef4b7050504"},
{file = "scipy-1.9.0-cp38-cp38-macosx_12_0_universal2.macosx_10_9_x86_64.whl", hash = "sha256:97a1f1e51ea30782d7baa8d0c52f72c3f9f05cb609cf1b990664231c5102bccd"},
{file = "scipy-1.9.0-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:16207622570af10f9e6a2cdc7da7a9660678852477adbcd056b6d1057a036fef"},
{file = "scipy-1.9.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bb687d245b6963673c639f318eea7e875d1ba147a67925586abed3d6f39bb7d8"},
{file = "scipy-1.9.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:73b704c5eea9be811919cae4caacf3180dd9212d9aed08477c1d2ba14900a9de"},
{file = "scipy-1.9.0-cp38-cp38-win32.whl", hash = "sha256:12005d30894e4fe7b247f7233ba0801a341f887b62e2eb99034dd6f2a8a33ad6"},
{file = "scipy-1.9.0-cp38-cp38-win_amd64.whl", hash = "sha256:fc58c3fcb8a724b703ffbc126afdca5a8353d4d5945d5c92db85617e165299e7"},
{file = "scipy-1.9.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:01c2015e132774feefe059d5354055fec6b751d7a7d70ad2cf5ce314e7426e2a"},
{file = "scipy-1.9.0-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:f7c3c578ff556333f3890c2df6c056955d53537bb176698359088108af73a58f"},
{file = "scipy-1.9.0-cp39-cp39-macosx_12_0_universal2.macosx_10_9_x86_64.whl", hash = "sha256:e2ac088ea4aa61115b96b47f5f3d94b3fa29554340b6629cd2bfe6b0521ee33b"},
{file = "scipy-1.9.0-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:5d1b9cf3771fd921f7213b4b886ab2606010343bb36259b544a816044576d69e"},
{file = "scipy-1.9.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d3a326673ac5afa9ef5613a61626b9ec15c8f7222b4ecd1ce0fd8fcba7b83c59"},
{file = "scipy-1.9.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:693b3fe2e7736ce0dbc72b4d933798eb6ca8ce51b8b934e3f547cc06f48b2afb"},
{file = "scipy-1.9.0-cp39-cp39-win32.whl", hash = "sha256:7bad16b91918bf3288089a78a4157e04892ea6475fb7a1d9bcdf32c30c8a3dba"},
{file = "scipy-1.9.0-cp39-cp39-win_amd64.whl", hash = "sha256:bd490f77f35800d5620f4d9af669e372d9a88db1f76ef219e1609cc4ecdd1a24"},
{file = "scipy-1.9.0.tar.gz", hash = "sha256:c0dfd7d2429452e7e94904c6a3af63cbaa3cf51b348bd9d35b42db7e9ad42791"},
]
seaborn = [
{file = "seaborn-0.11.2-py3-none-any.whl", hash = "sha256:85a6baa9b55f81a0623abddc4a26b334653ff4c6b18c418361de19dbba0ef283"},
{file = "seaborn-0.11.2.tar.gz", hash = "sha256:cf45e9286d40826864be0e3c066f98536982baf701a7caa386511792d61ff4f6"},
]
secretstorage = [
{file = "SecretStorage-3.3.3-py3-none-any.whl", hash = "sha256:f356e6628222568e3af06f2eba8df495efa13b3b63081dafd4f7d9a7b7bc9f99"},
{file = "SecretStorage-3.3.3.tar.gz", hash = "sha256:2403533ef369eca6d2ba81718576c5e0f564d5cca1b58f73a8b23e7d4eeebd77"},
]
send2trash = [
{file = "Send2Trash-1.8.0-py3-none-any.whl", hash = "sha256:f20eaadfdb517eaca5ce077640cb261c7d2698385a6a0f072a4a5447fd49fa08"},
{file = "Send2Trash-1.8.0.tar.gz", hash = "sha256:d2c24762fd3759860a0aff155e45871447ea58d2be6bdd39b5c8f966a0c99c2d"},
]
setuptools-scm = [
{file = "setuptools_scm-6.4.2-py3-none-any.whl", hash = "sha256:acea13255093849de7ccb11af9e1fb8bde7067783450cee9ef7a93139bddf6d4"},
{file = "setuptools_scm-6.4.2.tar.gz", hash = "sha256:6833ac65c6ed9711a4d5d2266f8024cfa07c533a0e55f4c12f6eff280a5a9e30"},
]
shap = [
{file = "shap-0.39.0-cp36-cp36m-win_amd64.whl", hash = "sha256:bf9af9b089ef95cb1ac0df80a43f8144aa9095d10f282cb5c19643ff88a6a79d"},
{file = "shap-0.39.0-cp37-cp37m-win_amd64.whl", hash = "sha256:b44f9fbb7349f5406b98b4ec24c672f8fe932606bb7574a8aae2238410c55289"},
{file = "shap-0.39.0-cp38-cp38-win_amd64.whl", hash = "sha256:c0d51b44c15eae1c12e51ed498f898cfc5e12d6be7e0d4f733ce6453f6ec85a4"},
{file = "shap-0.39.0.tar.gz", hash = "sha256:0196a6c12cc98f8b48ce9c5968550902432b80290da6fa7be8655441a1c6251a"},
]
six = [
{file = "six-1.16.0-py2.py3-none-any.whl", hash = "sha256:8abb2f1d86890a2dfb989f9a77cfcfd3e47c2a354b01111771326f8aa26e0254"},
{file = "six-1.16.0.tar.gz", hash = "sha256:1e61c37477a1626458e36f7b1d82aa5c9b094fa4802892072e49de9c60c4c926"},
]
slicer = [
{file = "slicer-0.0.7-py3-none-any.whl", hash = "sha256:0b94faa5251c0f23782c03f7b7eedda91d80144059645f452c4bc80fab875976"},
{file = "slicer-0.0.7.tar.gz", hash = "sha256:f5d5f7b45f98d155b9c0ba6554fa9770c6b26d5793a3e77a1030fb56910ebeec"},
]
snowballstemmer = [
{file = "snowballstemmer-2.2.0-py2.py3-none-any.whl", hash = "sha256:c8e1716e83cc398ae16824e5572ae04e0d9fc2c6b985fb0f900f5f0c96ecba1a"},
{file = "snowballstemmer-2.2.0.tar.gz", hash = "sha256:09b16deb8547d3412ad7b590689584cd0fe25ec8db3be37788be3810cbf19cb1"},
]
soupsieve = [
{file = "soupsieve-2.3.2.post1-py3-none-any.whl", hash = "sha256:3b2503d3c7084a42b1ebd08116e5f81aadfaea95863628c80a3b774a11b7c759"},
{file = "soupsieve-2.3.2.post1.tar.gz", hash = "sha256:fc53893b3da2c33de295667a0e19f078c14bf86544af307354de5fcf12a3f30d"},
]
sparse = [
{file = "sparse-0.13.0-py2.py3-none-any.whl", hash = "sha256:95ed0b649a0663b1488756ad4cf242b0a9bb2c9a25bc752a7c6ca9fbe8258966"},
{file = "sparse-0.13.0.tar.gz", hash = "sha256:685dc994aa770ee1b23f2d5392819c8429f27958771f8dceb2c4fb80210d5915"},
]
sphinx = [
{file = "Sphinx-5.1.1-py3-none-any.whl", hash = "sha256:309a8da80cb6da9f4713438e5b55861877d5d7976b69d87e336733637ea12693"},
{file = "Sphinx-5.1.1.tar.gz", hash = "sha256:ba3224a4e206e1fbdecf98a4fae4992ef9b24b85ebf7b584bb340156eaf08d89"},
]
sphinx-multiversion = []
sphinx-rtd-theme = [
{file = "sphinx_rtd_theme-1.0.0-py2.py3-none-any.whl", hash = "sha256:4d35a56f4508cfee4c4fb604373ede6feae2a306731d533f409ef5c3496fdbd8"},
{file = "sphinx_rtd_theme-1.0.0.tar.gz", hash = "sha256:eec6d497e4c2195fa0e8b2016b337532b8a699a68bcb22a512870e16925c6a5c"},
]
sphinxcontrib-applehelp = [
{file = "sphinxcontrib-applehelp-1.0.2.tar.gz", hash = "sha256:a072735ec80e7675e3f432fcae8610ecf509c5f1869d17e2eecff44389cdbc58"},
{file = "sphinxcontrib_applehelp-1.0.2-py2.py3-none-any.whl", hash = "sha256:806111e5e962be97c29ec4c1e7fe277bfd19e9652fb1a4392105b43e01af885a"},
]
sphinxcontrib-devhelp = [
{file = "sphinxcontrib-devhelp-1.0.2.tar.gz", hash = "sha256:ff7f1afa7b9642e7060379360a67e9c41e8f3121f2ce9164266f61b9f4b338e4"},
{file = "sphinxcontrib_devhelp-1.0.2-py2.py3-none-any.whl", hash = "sha256:8165223f9a335cc1af7ffe1ed31d2871f325254c0423bc0c4c7cd1c1e4734a2e"},
]
sphinxcontrib-googleanalytics = []
sphinxcontrib-htmlhelp = [
{file = "sphinxcontrib-htmlhelp-2.0.0.tar.gz", hash = "sha256:f5f8bb2d0d629f398bf47d0d69c07bc13b65f75a81ad9e2f71a63d4b7a2f6db2"},
{file = "sphinxcontrib_htmlhelp-2.0.0-py2.py3-none-any.whl", hash = "sha256:d412243dfb797ae3ec2b59eca0e52dac12e75a241bf0e4eb861e450d06c6ed07"},
]
sphinxcontrib-jsmath = [
{file = "sphinxcontrib-jsmath-1.0.1.tar.gz", hash = "sha256:a9925e4a4587247ed2191a22df5f6970656cb8ca2bd6284309578f2153e0c4b8"},
{file = "sphinxcontrib_jsmath-1.0.1-py2.py3-none-any.whl", hash = "sha256:2ec2eaebfb78f3f2078e73666b1415417a116cc848b72e5172e596c871103178"},
]
sphinxcontrib-qthelp = [
{file = "sphinxcontrib-qthelp-1.0.3.tar.gz", hash = "sha256:4c33767ee058b70dba89a6fc5c1892c0d57a54be67ddd3e7875a18d14cba5a72"},
{file = "sphinxcontrib_qthelp-1.0.3-py2.py3-none-any.whl", hash = "sha256:bd9fc24bcb748a8d51fd4ecaade681350aa63009a347a8c14e637895444dfab6"},
]
sphinxcontrib-serializinghtml = [
{file = "sphinxcontrib-serializinghtml-1.1.5.tar.gz", hash = "sha256:aa5f6de5dfdf809ef505c4895e51ef5c9eac17d0f287933eb49ec495280b6952"},
{file = "sphinxcontrib_serializinghtml-1.1.5-py2.py3-none-any.whl", hash = "sha256:352a9a00ae864471d3a7ead8d7d79f5fc0b57e8b3f95e9867eb9eb28999b92fd"},
]
stack-data = [
{file = "stack_data-0.4.0-py3-none-any.whl", hash = "sha256:b94fed36d725cfabc6d09ed5886913e35eed9009766a1af1d5941b9da3a94aaa"},
{file = "stack_data-0.4.0.tar.gz", hash = "sha256:a90ae7e260f7d15aefeceb46f0a028d4ccb9eb8856475c53e341945342d41ea7"},
]
statsmodels = [
{file = "statsmodels-0.13.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:3e7ca5b7e678c0bb7a24f5c735d58ac104a50eb61b17c484cce0e221a095560f"},
{file = "statsmodels-0.13.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:066a75d5585378b2df972f81a90b9a3da5e567b7d4833300c1597438c1a35e29"},
{file = "statsmodels-0.13.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f15f38dfc9c5c091662cb619e12322047368c67aef449c7554d9b324a15f7a94"},
{file = "statsmodels-0.13.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5c4ccc6b4744613367e8a233bd952c8a838db8f528f9fe033bda25aa13fc7d08"},
{file = "statsmodels-0.13.2-cp310-cp310-win_amd64.whl", hash = "sha256:855b1cc2a91ab140b9bcf304b1731705805ce73223bf500b988804968554c0ed"},
{file = "statsmodels-0.13.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:b69c9af7606325095f7c40c581957bad9f28775653d41537c1ec4cd1b185ff5b"},
{file = "statsmodels-0.13.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ab31bac0f72b83bca1f217a12ec6f309a56485a50c4a705fbdd63112213d4da4"},
{file = "statsmodels-0.13.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d680b910b57fc0aa87472662cdfe09aae0e21db4bdf19ccd6420fd4dffda892"},
{file = "statsmodels-0.13.2-cp37-cp37m-win32.whl", hash = "sha256:9e9a3f661d372431850d55157d049e079493c97fc06f550d23d8c8c70805cc48"},
{file = "statsmodels-0.13.2-cp37-cp37m-win_amd64.whl", hash = "sha256:c9f6326870c095ef688f072cd476b932aff0906d60193eaa08e93ec23b29ca83"},
{file = "statsmodels-0.13.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:5bc050f25f1ba1221efef9ea01b751c60935ad787fcd4259f4ece986f2da9141"},
{file = "statsmodels-0.13.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:426b1c8ea3918d3d27dbfa38f2bee36cabf41d32163e2cbb3adfb0178b24626a"},
{file = "statsmodels-0.13.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:45b80fac4a63308b1e93fa9dc27a8598930fd5dfd77c850ca077bb850254c6d7"},
{file = "statsmodels-0.13.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:78ee69ec0e0f79f627245c65f8a495b8581c2ea19084aac63941815feb15dcf3"},
{file = "statsmodels-0.13.2-cp38-cp38-win32.whl", hash = "sha256:20483cc30e11aa072b30d307bb80470f86a23ae8fffa51439ca54509d7aa9b05"},
{file = "statsmodels-0.13.2-cp38-cp38-win_amd64.whl", hash = "sha256:bf43051a92231ccb9de95e4b6d22d3b15e499ee5ee9bff0a20e6b6ad293e34cb"},
{file = "statsmodels-0.13.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6bf0dfed5f5edb59b5922b295392cd276463b10a5e730f7e57ee4ff2d8e9a87e"},
{file = "statsmodels-0.13.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:a403b559c5586dab7ac0fc9e754c737b017c96cce0ddd66ff9094764cdaf293d"},
{file = "statsmodels-0.13.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9f23554dd025ea354ce072ba32bfaa840d2b856372e5734290e181d27a1f9e0c"},
{file = "statsmodels-0.13.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:815f4df713e3eb6f40ae175c71f2a70d32f9219b5b4d23d4e0faab1171ba93ba"},
{file = "statsmodels-0.13.2-cp39-cp39-win32.whl", hash = "sha256:461c82ab2265fa8457b96afc23ef3ca19f42eb070436e0241b57e58a38863901"},
{file = "statsmodels-0.13.2-cp39-cp39-win_amd64.whl", hash = "sha256:39daab5a8a9332c8ea83d6464d065080c9ba65f236daf6a64aa18f64ef776fad"},
{file = "statsmodels-0.13.2.tar.gz", hash = "sha256:77dc292c9939c036a476f1770f9d08976b05437daa229928da73231147cde7d4"},
]
sympy = [
{file = "sympy-1.10.1-py3-none-any.whl", hash = "sha256:df75d738930f6fe9ebe7034e59d56698f29e85f443f743e51e47df0caccc2130"},
{file = "sympy-1.10.1.tar.gz", hash = "sha256:5939eeffdf9e152172601463626c022a2c27e75cf6278de8d401d50c9d58787b"},
]
tensorboard = [
{file = "tensorboard-2.9.1-py3-none-any.whl", hash = "sha256:baa727f791776f9e5841d347127720ceed4bbd59c36b40604b95fb2ae6029276"},
]
tensorboard-data-server = [
{file = "tensorboard_data_server-0.6.1-py3-none-any.whl", hash = "sha256:809fe9887682d35c1f7d1f54f0f40f98bb1f771b14265b453ca051e2ce58fca7"},
{file = "tensorboard_data_server-0.6.1-py3-none-macosx_10_9_x86_64.whl", hash = "sha256:fa8cef9be4fcae2f2363c88176638baf2da19c5ec90addb49b1cde05c95c88ee"},
{file = "tensorboard_data_server-0.6.1-py3-none-manylinux2010_x86_64.whl", hash = "sha256:d8237580755e58eff68d1f3abefb5b1e39ae5c8b127cc40920f9c4fb33f4b98a"},
]
tensorboard-plugin-wit = [
{file = "tensorboard_plugin_wit-1.8.1-py3-none-any.whl", hash = "sha256:ff26bdd583d155aa951ee3b152b3d0cffae8005dc697f72b44a8e8c2a77a8cbe"},
]
tensorflow = [
{file = "tensorflow-2.9.1-cp310-cp310-macosx_10_14_x86_64.whl", hash = "sha256:2c77edeb4a9d542032ddac93f732e3dd3d1c4d15c8795e4e43166ed46503582d"},
{file = "tensorflow-2.9.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:add2f4e55fd7209a5cb01d815364adc9827184d74a83c2f1616c72be2e85c8b7"},
{file = "tensorflow-2.9.1-cp310-cp310-win_amd64.whl", hash = "sha256:8be7d7f6091ac5af09c19eb178bbbd51346794684851a7bd57826d2d1cfa220e"},
{file = "tensorflow-2.9.1-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:b1f7674220b19ab5d39d9313f9c60278cef56052614600cba192700c658e502c"},
{file = "tensorflow-2.9.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:488a7a964298b92278cca593b7f687abd1c8f51982a08dc6ded5efd9457bf477"},
{file = "tensorflow-2.9.1-cp37-cp37m-win_amd64.whl", hash = "sha256:abbdcfa7c14974f9bafa8f51214c037639a8991eef3dbcafc506e608a673c54c"},
{file = "tensorflow-2.9.1-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:df048d216f90c9fcbda22bdd03420b0e0e6e7ee98b8b91f221afef46b7664635"},
{file = "tensorflow-2.9.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:adc1b80ec32337ba1720fadbf64242aae06a7ee1c7aea6a1a0cffe1968c89b25"},
{file = "tensorflow-2.9.1-cp38-cp38-win_amd64.whl", hash = "sha256:a27bffb91f0aef6ba2a30f6507932d9c6801af56cfee8442767f9f3f69bb37a6"},
{file = "tensorflow-2.9.1-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:81ad045d39cf4360d5e6183cdda0990661302cca502917a12e54177989b340b9"},
{file = "tensorflow-2.9.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:212a4d97fb6dedff257703d7a2ce50b0b8ec23d955938083363aa7bc526918cc"},
{file = "tensorflow-2.9.1-cp39-cp39-win_amd64.whl", hash = "sha256:1c2c6557f6bd7e0d3f1063e27346e4f1c4353db2b7751ac7451033b335fa4b7c"},
]
tensorflow-estimator = [
{file = "tensorflow_estimator-2.9.0-py2.py3-none-any.whl", hash = "sha256:e9762bb302f51bc1eb2f35d19f0190a6a2d809d754d5def788c4328fe3746744"},
]
tensorflow-io-gcs-filesystem = [
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp310-cp310-macosx_10_14_x86_64.whl", hash = "sha256:4222a9d0c0ddeca2fd2bfd70f5ed149346f5ba12ffe65d817d8e18393341d8e2"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:5457eeef1f0f5f294225808b2290a251a2e4639ec66db9d32aa4ae62e807d7e8"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4c71cebb26ce10e6e48dc46e6fc0acef5329b01f75a5e76c7defb77175bf97f7"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp310-cp310-win_amd64.whl", hash = "sha256:1c165595c7a67668b44c7ffb9746ffb351c630940d9cca7f2b31f8adf7a36b94"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:c20e1f95b904f43ac86fdb251f222be2c3e7026e9ddbde2a3b6a456f26a83944"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:1cccdc12ec304a7ab3e6f85919ba5a77c2bf751b3d0f9e62196ee7df11a8136a"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:94645cac4449dd2ccc40327c23d0256cf4e96597e5a55116a91076e9dc96023e"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp37-cp37m-win_amd64.whl", hash = "sha256:ce0d7eaaebfcb5fdcff161af0e8a4b94d5dc346299111c08373d66058011a16d"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:e9569dadd79b2d4b28dbe5be47c378a884414a85c89eaeae6115bcba4f3cbb96"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:84a463e228cde296fc63672902a2eceac9fec5f8ae7605e9f18824db591e7f5c"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:531214e48ef64a96f565550b283e75cf0119abff14048a11a25453b47ec5b61c"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp38-cp38-win_amd64.whl", hash = "sha256:44b28c9c6a9e25774a53ec2e85ed4d0b5c4db3a7d3a4011ade94fa9ee636393c"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:09f9df13737e2b4d92b73653509281d77732ef9a90a1ebef824511ce5431eb0a"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:c71438e6459f52462b95f98ab17b20cd1a269a1efe837e4df426a0b79359f3b7"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bd285595afe03740553710ccdbd1397d69a8e48d758c731c0de1f1c5a71a9fe5"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp39-cp39-win_amd64.whl", hash = "sha256:2940b4ab6848ef5ec34dc3c140b5ae9eba0da13453da839c30ebe3461a6eb51d"},
]
termcolor = [
{file = "termcolor-1.1.0.tar.gz", hash = "sha256:1d6d69ce66211143803fbc56652b41d73b4a400a2891d7bf7a1cdf4c02de613b"},
]
terminado = [
{file = "terminado-0.15.0-py3-none-any.whl", hash = "sha256:0d5f126fbfdb5887b25ae7d9d07b0d716b1cc0ccaacc71c1f3c14d228e065197"},
{file = "terminado-0.15.0.tar.gz", hash = "sha256:ab4eeedccfcc1e6134bfee86106af90852c69d602884ea3a1e8ca6d4486e9bfe"},
]
threadpoolctl = [
{file = "threadpoolctl-3.1.0-py3-none-any.whl", hash = "sha256:8b99adda265feb6773280df41eece7b2e6561b772d21ffd52e372f999024907b"},
{file = "threadpoolctl-3.1.0.tar.gz", hash = "sha256:a335baacfaa4400ae1f0d8e3a58d6674d2f8828e3716bb2802c44955ad391380"},
]
tinycss2 = [
{file = "tinycss2-1.1.1-py3-none-any.whl", hash = "sha256:fe794ceaadfe3cf3e686b22155d0da5780dd0e273471a51846d0a02bc204fec8"},
{file = "tinycss2-1.1.1.tar.gz", hash = "sha256:b2e44dd8883c360c35dd0d1b5aad0b610e5156c2cb3b33434634e539ead9d8bf"},
]
tokenize-rt = [
{file = "tokenize_rt-4.2.1-py2.py3-none-any.whl", hash = "sha256:08a27fa032a81cf45e8858d0ac706004fcd523e8463415ddf1442be38e204ea8"},
{file = "tokenize_rt-4.2.1.tar.gz", hash = "sha256:0d4f69026fed520f8a1e0103aa36c406ef4661417f20ca643f913e33531b3b94"},
]
tomli = [
{file = "tomli-2.0.1-py3-none-any.whl", hash = "sha256:939de3e7a6161af0c887ef91b7d41a53e7c5a1ca976325f429cb46ea9bc30ecc"},
{file = "tomli-2.0.1.tar.gz", hash = "sha256:de526c12914f0c550d15924c62d72abc48d6fe7364aa87328337a31007fe8a4f"},
]
torch = [
{file = "torch-1.12.1-cp310-cp310-manylinux1_x86_64.whl", hash = "sha256:9c038662db894a23e49e385df13d47b2a777ffd56d9bcd5b832593fab0a7e286"},
{file = "torch-1.12.1-cp310-cp310-manylinux2014_aarch64.whl", hash = "sha256:4e1b9c14cf13fd2ab8d769529050629a0e68a6fc5cb8e84b4a3cc1dd8c4fe541"},
{file = "torch-1.12.1-cp310-cp310-win_amd64.whl", hash = "sha256:e9c8f4a311ac29fc7e8e955cfb7733deb5dbe1bdaabf5d4af2765695824b7e0d"},
{file = "torch-1.12.1-cp310-none-macosx_10_9_x86_64.whl", hash = "sha256:976c3f997cea38ee91a0dd3c3a42322785414748d1761ef926b789dfa97c6134"},
{file = "torch-1.12.1-cp310-none-macosx_11_0_arm64.whl", hash = "sha256:68104e4715a55c4bb29a85c6a8d57d820e0757da363be1ba680fa8cc5be17b52"},
{file = "torch-1.12.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:743784ccea0dc8f2a3fe6a536bec8c4763bd82c1352f314937cb4008d4805de1"},
{file = "torch-1.12.1-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:b5dbcca369800ce99ba7ae6dee3466607a66958afca3b740690d88168752abcf"},
{file = "torch-1.12.1-cp37-cp37m-win_amd64.whl", hash = "sha256:f3b52a634e62821e747e872084ab32fbcb01b7fa7dbb7471b6218279f02a178a"},
{file = "torch-1.12.1-cp37-none-macosx_10_9_x86_64.whl", hash = "sha256:8a34a2fbbaa07c921e1b203f59d3d6e00ed379f2b384445773bd14e328a5b6c8"},
{file = "torch-1.12.1-cp37-none-macosx_11_0_arm64.whl", hash = "sha256:42f639501928caabb9d1d55ddd17f07cd694de146686c24489ab8c615c2871f2"},
{file = "torch-1.12.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:0b44601ec56f7dd44ad8afc00846051162ef9c26a8579dda0a02194327f2d55e"},
{file = "torch-1.12.1-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:cd26d8c5640c3a28c526d41ccdca14cf1cbca0d0f2e14e8263a7ac17194ab1d2"},
{file = "torch-1.12.1-cp38-cp38-win_amd64.whl", hash = "sha256:42e115dab26f60c29e298559dbec88444175528b729ae994ec4c65d56fe267dd"},
{file = "torch-1.12.1-cp38-none-macosx_10_9_x86_64.whl", hash = "sha256:a8320ba9ad87e80ca5a6a016e46ada4d1ba0c54626e135d99b2129a4541c509d"},
{file = "torch-1.12.1-cp38-none-macosx_11_0_arm64.whl", hash = "sha256:03e31c37711db2cd201e02de5826de875529e45a55631d317aadce2f1ed45aa8"},
{file = "torch-1.12.1-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:9b356aea223772cd754edb4d9ecf2a025909b8615a7668ac7d5130f86e7ec421"},
{file = "torch-1.12.1-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:6cf6f54b43c0c30335428195589bd00e764a6d27f3b9ba637aaa8c11aaf93073"},
{file = "torch-1.12.1-cp39-cp39-win_amd64.whl", hash = "sha256:f00c721f489089dc6364a01fd84906348fe02243d0af737f944fddb36003400d"},
{file = "torch-1.12.1-cp39-none-macosx_10_9_x86_64.whl", hash = "sha256:bfec2843daa654f04fda23ba823af03e7b6f7650a873cdb726752d0e3718dada"},
{file = "torch-1.12.1-cp39-none-macosx_11_0_arm64.whl", hash = "sha256:69fe2cae7c39ccadd65a123793d30e0db881f1c1927945519c5c17323131437e"},
]
tornado = [
{file = "tornado-6.2-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:20f638fd8cc85f3cbae3c732326e96addff0a15e22d80f049e00121651e82e72"},
{file = "tornado-6.2-cp37-abi3-macosx_10_9_x86_64.whl", hash = "sha256:87dcafae3e884462f90c90ecc200defe5e580a7fbbb4365eda7c7c1eb809ebc9"},
{file = "tornado-6.2-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ba09ef14ca9893954244fd872798b4ccb2367c165946ce2dd7376aebdde8e3ac"},
{file = "tornado-6.2-cp37-abi3-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b8150f721c101abdef99073bf66d3903e292d851bee51910839831caba341a75"},
{file = "tornado-6.2-cp37-abi3-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d3a2f5999215a3a06a4fc218026cd84c61b8b2b40ac5296a6db1f1451ef04c1e"},
{file = "tornado-6.2-cp37-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:5f8c52d219d4995388119af7ccaa0bcec289535747620116a58d830e7c25d8a8"},
{file = "tornado-6.2-cp37-abi3-musllinux_1_1_i686.whl", hash = "sha256:6fdfabffd8dfcb6cf887428849d30cf19a3ea34c2c248461e1f7d718ad30b66b"},
{file = "tornado-6.2-cp37-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:1d54d13ab8414ed44de07efecb97d4ef7c39f7438cf5e976ccd356bebb1b5fca"},
{file = "tornado-6.2-cp37-abi3-win32.whl", hash = "sha256:5c87076709343557ef8032934ce5f637dbb552efa7b21d08e89ae7619ed0eb23"},
{file = "tornado-6.2-cp37-abi3-win_amd64.whl", hash = "sha256:e5f923aa6a47e133d1cf87d60700889d7eae68988704e20c75fb2d65677a8e4b"},
{file = "tornado-6.2.tar.gz", hash = "sha256:9b630419bde84ec666bfd7ea0a4cb2a8a651c2d5cccdbdd1972a0c859dfc3c13"},
]
tqdm = [
{file = "tqdm-4.64.0-py2.py3-none-any.whl", hash = "sha256:74a2cdefe14d11442cedf3ba4e21a3b84ff9a2dbdc6cfae2c34addb2a14a5ea6"},
{file = "tqdm-4.64.0.tar.gz", hash = "sha256:40be55d30e200777a307a7585aee69e4eabb46b4ec6a4b4a5f2d9f11e7d5408d"},
]
traitlets = [
{file = "traitlets-5.3.0-py3-none-any.whl", hash = "sha256:65fa18961659635933100db8ca120ef6220555286949774b9cfc106f941d1c7a"},
{file = "traitlets-5.3.0.tar.gz", hash = "sha256:0bb9f1f9f017aa8ec187d8b1b2a7a6626a2a1d877116baba52a129bfa124f8e2"},
]
twine = [
{file = "twine-4.0.1-py3-none-any.whl", hash = "sha256:42026c18e394eac3e06693ee52010baa5313e4811d5a11050e7d48436cf41b9e"},
{file = "twine-4.0.1.tar.gz", hash = "sha256:96b1cf12f7ae611a4a40b6ae8e9570215daff0611828f5fe1f37a16255ab24a0"},
]
typing-extensions = [
{file = "typing_extensions-4.3.0-py3-none-any.whl", hash = "sha256:25642c956049920a5aa49edcdd6ab1e06d7e5d467fc00e0506c44ac86fbfca02"},
{file = "typing_extensions-4.3.0.tar.gz", hash = "sha256:e6d2677a32f47fc7eb2795db1dd15c1f34eff616bcaf2cfb5e997f854fa1c4a6"},
]
tzdata = [
{file = "tzdata-2022.2-py2.py3-none-any.whl", hash = "sha256:c3119520447d68ef3eb8187a55a4f44fa455f30eb1b4238fa5691ba094f2b05b"},
{file = "tzdata-2022.2.tar.gz", hash = "sha256:21f4f0d7241572efa7f7a4fdabb052e61b55dc48274e6842697ccdf5253e5451"},
]
tzlocal = [
{file = "tzlocal-4.2-py3-none-any.whl", hash = "sha256:89885494684c929d9191c57aa27502afc87a579be5cdd3225c77c463ea043745"},
{file = "tzlocal-4.2.tar.gz", hash = "sha256:ee5842fa3a795f023514ac2d801c4a81d1743bbe642e3940143326b3a00addd7"},
]
urllib3 = [
{file = "urllib3-1.26.11-py2.py3-none-any.whl", hash = "sha256:c33ccba33c819596124764c23a97d25f32b28433ba0dedeb77d873a38722c9bc"},
{file = "urllib3-1.26.11.tar.gz", hash = "sha256:ea6e8fb210b19d950fab93b60c9009226c63a28808bc8386e05301e25883ac0a"},
]
wcwidth = [
{file = "wcwidth-0.2.5-py2.py3-none-any.whl", hash = "sha256:beb4802a9cebb9144e99086eff703a642a13d6a0052920003a230f3294bbe784"},
{file = "wcwidth-0.2.5.tar.gz", hash = "sha256:c4d647b99872929fdb7bdcaa4fbe7f01413ed3d98077df798530e5b04f116c83"},
]
webencodings = [
{file = "webencodings-0.5.1-py2.py3-none-any.whl", hash = "sha256:a0af1213f3c2226497a97e2b3aa01a7e4bee4f403f95be16fc9acd2947514a78"},
{file = "webencodings-0.5.1.tar.gz", hash = "sha256:b36a1c245f2d304965eb4e0a82848379241dc04b865afcc4aab16748587e1923"},
]
werkzeug = [
{file = "Werkzeug-2.2.2-py3-none-any.whl", hash = "sha256:f979ab81f58d7318e064e99c4506445d60135ac5cd2e177a2de0089bfd4c9bd5"},
{file = "Werkzeug-2.2.2.tar.gz", hash = "sha256:7ea2d48322cc7c0f8b3a215ed73eabd7b5d75d0b50e31ab006286ccff9e00b8f"},
]
widgetsnbextension = [
{file = "widgetsnbextension-4.0.2-py3-none-any.whl", hash = "sha256:966bd61443926b6adcc0abef9f499c48bdeda181c333b0f49842d7385d440579"},
{file = "widgetsnbextension-4.0.2.tar.gz", hash = "sha256:07f0e8582f920b24316cef16490f1aeb498f2c875d48980540e5c5dbf0ff5e2d"},
]
wrapt = [
{file = "wrapt-1.14.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:1b376b3f4896e7930f1f772ac4b064ac12598d1c38d04907e696cc4d794b43d3"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:903500616422a40a98a5a3c4ff4ed9d0066f3b4c951fa286018ecdf0750194ef"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:5a9a0d155deafd9448baff28c08e150d9b24ff010e899311ddd63c45c2445e28"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:ddaea91abf8b0d13443f6dac52e89051a5063c7d014710dcb4d4abb2ff811a59"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:36f582d0c6bc99d5f39cd3ac2a9062e57f3cf606ade29a0a0d6b323462f4dd87"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:7ef58fb89674095bfc57c4069e95d7a31cfdc0939e2a579882ac7d55aadfd2a1"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:e2f83e18fe2f4c9e7db597e988f72712c0c3676d337d8b101f6758107c42425b"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:ee2b1b1769f6707a8a445162ea16dddf74285c3964f605877a20e38545c3c462"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:833b58d5d0b7e5b9832869f039203389ac7cbf01765639c7309fd50ef619e0b1"},
{file = "wrapt-1.14.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:80bb5c256f1415f747011dc3604b59bc1f91c6e7150bd7db03b19170ee06b320"},
{file = "wrapt-1.14.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:07f7a7d0f388028b2df1d916e94bbb40624c59b48ecc6cbc232546706fac74c2"},
{file = "wrapt-1.14.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:02b41b633c6261feff8ddd8d11c711df6842aba629fdd3da10249a53211a72c4"},
{file = "wrapt-1.14.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2fe803deacd09a233e4762a1adcea5db5d31e6be577a43352936179d14d90069"},
{file = "wrapt-1.14.1-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:257fd78c513e0fb5cdbe058c27a0624c9884e735bbd131935fd49e9fe719d310"},
{file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:4fcc4649dc762cddacd193e6b55bc02edca674067f5f98166d7713b193932b7f"},
{file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:11871514607b15cfeb87c547a49bca19fde402f32e2b1c24a632506c0a756656"},
{file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:8ad85f7f4e20964db4daadcab70b47ab05c7c1cf2a7c1e51087bfaa83831854c"},
{file = "wrapt-1.14.1-cp310-cp310-win32.whl", hash = "sha256:a9a52172be0b5aae932bef82a79ec0a0ce87288c7d132946d645eba03f0ad8a8"},
{file = "wrapt-1.14.1-cp310-cp310-win_amd64.whl", hash = "sha256:6d323e1554b3d22cfc03cd3243b5bb815a51f5249fdcbb86fda4bf62bab9e164"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:43ca3bbbe97af00f49efb06e352eae40434ca9d915906f77def219b88e85d907"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:6b1a564e6cb69922c7fe3a678b9f9a3c54e72b469875aa8018f18b4d1dd1adf3"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux2010_i686.whl", hash = "sha256:00b6d4ea20a906c0ca56d84f93065b398ab74b927a7a3dbd470f6fc503f95dc3"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux2010_x86_64.whl", hash = "sha256:a85d2b46be66a71bedde836d9e41859879cc54a2a04fad1191eb50c2066f6e9d"},
{file = "wrapt-1.14.1-cp35-cp35m-win32.whl", hash = "sha256:dbcda74c67263139358f4d188ae5faae95c30929281bc6866d00573783c422b7"},
{file = "wrapt-1.14.1-cp35-cp35m-win_amd64.whl", hash = "sha256:b21bb4c09ffabfa0e85e3a6b623e19b80e7acd709b9f91452b8297ace2a8ab00"},
{file = "wrapt-1.14.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:9e0fd32e0148dd5dea6af5fee42beb949098564cc23211a88d799e434255a1f4"},
{file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9736af4641846491aedb3c3f56b9bc5568d92b0692303b5a305301a95dfd38b1"},
{file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5b02d65b9ccf0ef6c34cba6cf5bf2aab1bb2f49c6090bafeecc9cd81ad4ea1c1"},
{file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:21ac0156c4b089b330b7666db40feee30a5d52634cc4560e1905d6529a3897ff"},
{file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:9f3e6f9e05148ff90002b884fbc2a86bd303ae847e472f44ecc06c2cd2fcdb2d"},
{file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:6e743de5e9c3d1b7185870f480587b75b1cb604832e380d64f9504a0535912d1"},
{file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:d79d7d5dc8a32b7093e81e97dad755127ff77bcc899e845f41bf71747af0c569"},
{file = "wrapt-1.14.1-cp36-cp36m-win32.whl", hash = "sha256:81b19725065dcb43df02b37e03278c011a09e49757287dca60c5aecdd5a0b8ed"},
{file = "wrapt-1.14.1-cp36-cp36m-win_amd64.whl", hash = "sha256:b014c23646a467558be7da3d6b9fa409b2c567d2110599b7cf9a0c5992b3b471"},
{file = "wrapt-1.14.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:88bd7b6bd70a5b6803c1abf6bca012f7ed963e58c68d76ee20b9d751c74a3248"},
{file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b5901a312f4d14c59918c221323068fad0540e34324925c8475263841dbdfe68"},
{file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d77c85fedff92cf788face9bfa3ebaa364448ebb1d765302e9af11bf449ca36d"},
{file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8d649d616e5c6a678b26d15ece345354f7c2286acd6db868e65fcc5ff7c24a77"},
{file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:7d2872609603cb35ca513d7404a94d6d608fc13211563571117046c9d2bcc3d7"},
{file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:ee6acae74a2b91865910eef5e7de37dc6895ad96fa23603d1d27ea69df545015"},
{file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:2b39d38039a1fdad98c87279b48bc5dce2c0ca0d73483b12cb72aa9609278e8a"},
{file = "wrapt-1.14.1-cp37-cp37m-win32.whl", hash = "sha256:60db23fa423575eeb65ea430cee741acb7c26a1365d103f7b0f6ec412b893853"},
{file = "wrapt-1.14.1-cp37-cp37m-win_amd64.whl", hash = "sha256:709fe01086a55cf79d20f741f39325018f4df051ef39fe921b1ebe780a66184c"},
{file = "wrapt-1.14.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:8c0ce1e99116d5ab21355d8ebe53d9460366704ea38ae4d9f6933188f327b456"},
{file = "wrapt-1.14.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:e3fb1677c720409d5f671e39bac6c9e0e422584e5f518bfd50aa4cbbea02433f"},
{file = "wrapt-1.14.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:642c2e7a804fcf18c222e1060df25fc210b9c58db7c91416fb055897fc27e8cc"},
{file = "wrapt-1.14.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7b7c050ae976e286906dd3f26009e117eb000fb2cf3533398c5ad9ccc86867b1"},
{file = "wrapt-1.14.1-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ef3f72c9666bba2bab70d2a8b79f2c6d2c1a42a7f7e2b0ec83bb2f9e383950af"},
{file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:01c205616a89d09827986bc4e859bcabd64f5a0662a7fe95e0d359424e0e071b"},
{file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:5a0f54ce2c092aaf439813735584b9537cad479575a09892b8352fea5e988dc0"},
{file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:2cf71233a0ed05ccdabe209c606fe0bac7379fdcf687f39b944420d2a09fdb57"},
{file = "wrapt-1.14.1-cp38-cp38-win32.whl", hash = "sha256:aa31fdcc33fef9eb2552cbcbfee7773d5a6792c137b359e82879c101e98584c5"},
{file = "wrapt-1.14.1-cp38-cp38-win_amd64.whl", hash = "sha256:d1967f46ea8f2db647c786e78d8cc7e4313dbd1b0aca360592d8027b8508e24d"},
{file = "wrapt-1.14.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:3232822c7d98d23895ccc443bbdf57c7412c5a65996c30442ebe6ed3df335383"},
{file = "wrapt-1.14.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:988635d122aaf2bdcef9e795435662bcd65b02f4f4c1ae37fbee7401c440b3a7"},
{file = "wrapt-1.14.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9cca3c2cdadb362116235fdbd411735de4328c61425b0aa9f872fd76d02c4e86"},
{file = "wrapt-1.14.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d52a25136894c63de15a35bc0bdc5adb4b0e173b9c0d07a2be9d3ca64a332735"},
{file = "wrapt-1.14.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:40e7bc81c9e2b2734ea4bc1aceb8a8f0ceaac7c5299bc5d69e37c44d9081d43b"},
{file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:b9b7a708dd92306328117d8c4b62e2194d00c365f18eff11a9b53c6f923b01e3"},
{file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:6a9a25751acb379b466ff6be78a315e2b439d4c94c1e99cb7266d40a537995d3"},
{file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:34aa51c45f28ba7f12accd624225e2b1e5a3a45206aa191f6f9aac931d9d56fe"},
{file = "wrapt-1.14.1-cp39-cp39-win32.whl", hash = "sha256:dee0ce50c6a2dd9056c20db781e9c1cfd33e77d2d569f5d1d9321c641bb903d5"},
{file = "wrapt-1.14.1-cp39-cp39-win_amd64.whl", hash = "sha256:dee60e1de1898bde3b238f18340eec6148986da0455d8ba7848d50470a7a32fb"},
{file = "wrapt-1.14.1.tar.gz", hash = "sha256:380a85cf89e0e69b7cfbe2ea9f765f004ff419f34194018a6827ac0e3edfed4d"},
]
xgboost = [
{file = "xgboost-1.6.1-py3-none-macosx_10_15_x86_64.macosx_11_0_x86_64.macosx_12_0_x86_64.whl", hash = "sha256:2b3d4ee105f8434873b40edc511330b8276bf3a8d9d42fb0319973079df30b07"},
{file = "xgboost-1.6.1-py3-none-macosx_12_0_arm64.whl", hash = "sha256:bd3e59a5490e010004106d8ea1d07aa8e048be51a0974fca6b4f00988f087ab8"},
{file = "xgboost-1.6.1-py3-none-manylinux2014_aarch64.whl", hash = "sha256:bbf16af8bf72e8761fcf69fdb5798bd5add6ecb48049198551b13c1d7abeabb5"},
{file = "xgboost-1.6.1-py3-none-manylinux2014_x86_64.whl", hash = "sha256:6207c77f611b54d9f056edede819ead03f0235615675f88030ff9fe10d359551"},
{file = "xgboost-1.6.1-py3-none-win_amd64.whl", hash = "sha256:3adcb7e4ccf774d5e0128c01e5c381303c3799910ab0f2e996160fe3cd23b7fc"},
{file = "xgboost-1.6.1.tar.gz", hash = "sha256:24072028656f3428e7b8aabf77340ece057f273e41f7f85d67ccaefb7454bb18"},
]
zipp = [
{file = "zipp-3.8.1-py3-none-any.whl", hash = "sha256:47c40d7fe183a6f21403a199b3e4192cca5774656965b0a4988ad2f8feb5f009"},
{file = "zipp-3.8.1.tar.gz", hash = "sha256:05b45f1ee8f807d0cc928485ca40a07cb491cf092ff587c0df9cb1fd154848d2"},
]
[[package]]
name = "absl-py"
version = "1.2.0"
description = "Abseil Python Common Libraries, see https://github.com/abseil/abseil-py."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "alabaster"
version = "0.7.12"
description = "A configurable sidebar-enabled Sphinx theme"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "appnope"
version = "0.1.3"
description = "Disable App Nap on macOS >= 10.9"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "argon2-cffi"
version = "21.3.0"
description = "The secure Argon2 password hashing algorithm."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
argon2-cffi-bindings = "*"
[package.extras]
dev = ["pre-commit", "cogapp", "tomli", "coverage[toml] (>=5.0.2)", "hypothesis", "pytest", "sphinx", "sphinx-notfound-page", "furo"]
docs = ["sphinx", "sphinx-notfound-page", "furo"]
tests = ["coverage[toml] (>=5.0.2)", "hypothesis", "pytest"]
[[package]]
name = "argon2-cffi-bindings"
version = "21.2.0"
description = "Low-level CFFI bindings for Argon2"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cffi = ">=1.0.1"
[package.extras]
dev = ["pytest", "cogapp", "pre-commit", "wheel"]
tests = ["pytest"]
[[package]]
name = "asttokens"
version = "2.0.8"
description = "Annotate AST trees with source code positions"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
six = "*"
[package.extras]
test = ["pytest", "astroid (<=2.5.3)"]
[[package]]
name = "astunparse"
version = "1.6.3"
description = "An AST unparser for Python"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
six = ">=1.6.1,<2.0"
[[package]]
name = "atomicwrites"
version = "1.4.1"
description = "Atomic file writes."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "attrs"
version = "22.1.0"
description = "Classes Without Boilerplate"
category = "dev"
optional = false
python-versions = ">=3.5"
[package.extras]
dev = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "mypy (>=0.900,!=0.940)", "pytest-mypy-plugins", "zope.interface", "furo", "sphinx", "sphinx-notfound-page", "pre-commit", "cloudpickle"]
docs = ["furo", "sphinx", "zope.interface", "sphinx-notfound-page"]
tests = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "mypy (>=0.900,!=0.940)", "pytest-mypy-plugins", "zope.interface", "cloudpickle"]
tests_no_zope = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "mypy (>=0.900,!=0.940)", "pytest-mypy-plugins", "cloudpickle"]
[[package]]
name = "babel"
version = "2.10.3"
description = "Internationalization utilities"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
pytz = ">=2015.7"
[[package]]
name = "backcall"
version = "0.2.0"
description = "Specifications for callback functions passed in to an API"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "backports.zoneinfo"
version = "0.2.1"
description = "Backport of the standard library zoneinfo module"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
tzdata = ["tzdata"]
[[package]]
name = "beautifulsoup4"
version = "4.11.1"
description = "Screen-scraping library"
category = "dev"
optional = false
python-versions = ">=3.6.0"
[package.dependencies]
soupsieve = ">1.2"
[package.extras]
html5lib = ["html5lib"]
lxml = ["lxml"]
[[package]]
name = "black"
version = "22.6.0"
description = "The uncompromising code formatter."
category = "dev"
optional = false
python-versions = ">=3.6.2"
[package.dependencies]
click = ">=8.0.0"
ipython = {version = ">=7.8.0", optional = true, markers = "extra == \"jupyter\""}
mypy-extensions = ">=0.4.3"
pathspec = ">=0.9.0"
platformdirs = ">=2"
tokenize-rt = {version = ">=3.2.0", optional = true, markers = "extra == \"jupyter\""}
tomli = {version = ">=1.1.0", markers = "python_full_version < \"3.11.0a7\""}
typing-extensions = {version = ">=3.10.0.0", markers = "python_version < \"3.10\""}
[package.extras]
colorama = ["colorama (>=0.4.3)"]
d = ["aiohttp (>=3.7.4)"]
jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"]
uvloop = ["uvloop (>=0.15.2)"]
[[package]]
name = "bleach"
version = "5.0.1"
description = "An easy safelist-based HTML-sanitizing tool."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
six = ">=1.9.0"
webencodings = "*"
[package.extras]
css = ["tinycss2 (>=1.1.0,<1.2)"]
dev = ["build (==0.8.0)", "flake8 (==4.0.1)", "hashin (==0.17.0)", "pip-tools (==6.6.2)", "pytest (==7.1.2)", "Sphinx (==4.3.2)", "tox (==3.25.0)", "twine (==4.0.1)", "wheel (==0.37.1)", "black (==22.3.0)", "mypy (==0.961)"]
[[package]]
name = "cachetools"
version = "5.2.0"
description = "Extensible memoizing collections and decorators"
category = "dev"
optional = false
python-versions = "~=3.7"
[[package]]
name = "causalml"
version = "0.12.3"
description = "Python Package for Uplift Modeling and Causal Inference with Machine Learning Algorithms"
category = "main"
optional = true
python-versions = ">=3.6"
[package.dependencies]
Cython = ">=0.28.0"
dill = "*"
graphviz = "*"
lightgbm = "*"
matplotlib = "*"
numpy = ">=1.18.5"
packaging = "*"
pandas = ">=0.24.1"
pydotplus = "*"
pygam = "*"
pyro-ppl = "*"
scikit-learn = ">=0.22.0"
scipy = ">=1.4.1"
seaborn = "*"
shap = "*"
statsmodels = ">=0.9.0"
torch = "*"
tqdm = "*"
xgboost = "*"
[package.extras]
tf = ["tensorflow (>=2.4.0)"]
[[package]]
name = "certifi"
version = "2022.6.15"
description = "Python package for providing Mozilla's CA Bundle."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "cffi"
version = "1.15.1"
description = "Foreign Function Interface for Python calling C code."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
pycparser = "*"
[[package]]
name = "charset-normalizer"
version = "2.1.0"
description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet."
category = "dev"
optional = false
python-versions = ">=3.6.0"
[package.extras]
unicode_backport = ["unicodedata2"]
[[package]]
name = "click"
version = "8.1.3"
description = "Composable command line interface toolkit"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
colorama = {version = "*", markers = "platform_system == \"Windows\""}
[[package]]
name = "cloudpickle"
version = "2.1.0"
description = "Extended pickling support for Python objects"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "colorama"
version = "0.4.5"
description = "Cross-platform colored terminal text."
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "commonmark"
version = "0.9.1"
description = "Python parser for the CommonMark Markdown spec"
category = "dev"
optional = false
python-versions = "*"
[package.extras]
test = ["hypothesis (==3.55.3)", "flake8 (==3.7.8)"]
[[package]]
name = "coverage"
version = "6.4.4"
description = "Code coverage measurement for Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
tomli = {version = "*", optional = true, markers = "python_full_version <= \"3.11.0a6\" and extra == \"toml\""}
[package.extras]
toml = ["tomli"]
[[package]]
name = "cryptography"
version = "37.0.4"
description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cffi = ">=1.12"
[package.extras]
docs = ["sphinx (>=1.6.5,!=1.8.0,!=3.1.0,!=3.1.1)", "sphinx-rtd-theme"]
docstest = ["pyenchant (>=1.6.11)", "twine (>=1.12.0)", "sphinxcontrib-spelling (>=4.0.1)"]
pep8test = ["black", "flake8", "flake8-import-order", "pep8-naming"]
sdist = ["setuptools_rust (>=0.11.4)"]
ssh = ["bcrypt (>=3.1.5)"]
test = ["pytest (>=6.2.0)", "pytest-benchmark", "pytest-cov", "pytest-subtests", "pytest-xdist", "pretend", "iso8601", "pytz", "hypothesis (>=1.11.4,!=3.79.2)"]
[[package]]
name = "cycler"
version = "0.11.0"
description = "Composable style cycles"
category = "main"
optional = true
python-versions = ">=3.6"
[[package]]
name = "cython"
version = "0.29.32"
description = "The Cython compiler for writing C extensions for the Python language."
category = "main"
optional = true
python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "debugpy"
version = "1.6.3"
description = "An implementation of the Debug Adapter Protocol for Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "decorator"
version = "5.1.1"
description = "Decorators for Humans"
category = "dev"
optional = false
python-versions = ">=3.5"
[[package]]
name = "defusedxml"
version = "0.7.1"
description = "XML bomb protection for Python stdlib modules"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "dill"
version = "0.3.5.1"
description = "serialize all of python"
category = "main"
optional = true
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*, !=3.6.*"
[package.extras]
graph = ["objgraph (>=1.7.2)"]
[[package]]
name = "docutils"
version = "0.17.1"
description = "Docutils -- Python Documentation Utilities"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "econml"
version = "0.12.0"
description = "This package contains several methods for calculating Conditional Average Treatment Effects"
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
dowhy = "*"
joblib = ">=0.13.0"
lightgbm = "*"
numba = "!=0.42.1"
numpy = "*"
pandas = "*"
scikit-learn = ">0.22.0"
scipy = ">1.4.0"
shap = ">=0.38.1,<0.40.0"
sparse = "*"
statsmodels = ">=0.10"
[package.extras]
all = ["azure-cli", "keras (<2.4)", "tensorflow (>1.10,<2.3)", "matplotlib"]
automl = ["azure-cli"]
plt = ["graphviz", "matplotlib"]
tf = ["keras (<2.4)", "tensorflow (>1.10,<2.3)"]
[[package]]
name = "entrypoints"
version = "0.4"
description = "Discover and load entry points from installed packages."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "executing"
version = "0.10.0"
description = "Get the currently executing AST node of a frame, and other information"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "fastjsonschema"
version = "2.16.1"
description = "Fastest Python implementation of JSON schema"
category = "dev"
optional = false
python-versions = "*"
[package.extras]
devel = ["colorama", "jsonschema", "json-spec", "pylint", "pytest", "pytest-benchmark", "pytest-cache", "validictory"]
[[package]]
name = "flake8"
version = "4.0.1"
description = "the modular source code checker: pep8 pyflakes and co"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
mccabe = ">=0.6.0,<0.7.0"
pycodestyle = ">=2.8.0,<2.9.0"
pyflakes = ">=2.4.0,<2.5.0"
[[package]]
name = "flaky"
version = "3.7.0"
description = "Plugin for nose or pytest that automatically reruns flaky tests."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "flatbuffers"
version = "1.12"
description = "The FlatBuffers serialization format for Python"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "fonttools"
version = "4.36.0"
description = "Tools to manipulate font files"
category = "main"
optional = true
python-versions = ">=3.7"
[package.extras]
all = ["fs (>=2.2.0,<3)", "lxml (>=4.0,<5)", "zopfli (>=0.1.4)", "lz4 (>=1.7.4.2)", "matplotlib", "sympy", "skia-pathops (>=0.5.0)", "uharfbuzz (>=0.23.0)", "brotlicffi (>=0.8.0)", "scipy", "brotli (>=1.0.1)", "munkres", "unicodedata2 (>=14.0.0)", "xattr"]
graphite = ["lz4 (>=1.7.4.2)"]
interpolatable = ["scipy", "munkres"]
lxml = ["lxml (>=4.0,<5)"]
pathops = ["skia-pathops (>=0.5.0)"]
plot = ["matplotlib"]
repacker = ["uharfbuzz (>=0.23.0)"]
symfont = ["sympy"]
type1 = ["xattr"]
ufo = ["fs (>=2.2.0,<3)"]
unicode = ["unicodedata2 (>=14.0.0)"]
woff = ["zopfli (>=0.1.4)", "brotlicffi (>=0.8.0)", "brotli (>=1.0.1)"]
[[package]]
name = "future"
version = "0.18.2"
description = "Clean single-source support for Python 3 and 2"
category = "main"
optional = true
python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "gast"
version = "0.4.0"
description = "Python AST that abstracts the underlying Python version"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "google-auth"
version = "2.10.0"
description = "Google Authentication Library"
category = "dev"
optional = false
python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*"
[package.dependencies]
cachetools = ">=2.0.0,<6.0"
pyasn1-modules = ">=0.2.1"
rsa = {version = ">=3.1.4,<5", markers = "python_version >= \"3.6\""}
six = ">=1.9.0"
[package.extras]
reauth = ["pyu2f (>=0.1.5)"]
pyopenssl = ["pyopenssl (>=20.0.0)"]
enterprise_cert = ["pyopenssl (==22.0.0)", "cryptography (==36.0.2)"]
aiohttp = ["aiohttp (>=3.6.2,<4.0.0dev)", "requests (>=2.20.0,<3.0.0dev)"]
[[package]]
name = "google-auth-oauthlib"
version = "0.4.6"
description = "Google Authentication Library"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
google-auth = ">=1.0.0"
requests-oauthlib = ">=0.7.0"
[package.extras]
tool = ["click (>=6.0.0)"]
[[package]]
name = "google-pasta"
version = "0.2.0"
description = "pasta is an AST-based Python refactoring library"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
six = "*"
[[package]]
name = "graphviz"
version = "0.20.1"
description = "Simple Python interface for Graphviz"
category = "main"
optional = true
python-versions = ">=3.7"
[package.extras]
dev = ["tox (>=3)", "flake8", "pep8-naming", "wheel", "twine"]
docs = ["sphinx (>=5)", "sphinx-autodoc-typehints", "sphinx-rtd-theme"]
test = ["pytest (>=7)", "pytest-mock (>=3)", "mock (>=4)", "pytest-cov", "coverage"]
[[package]]
name = "grpcio"
version = "1.48.0"
description = "HTTP/2-based RPC framework"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
six = ">=1.5.2"
[package.extras]
protobuf = ["grpcio-tools (>=1.48.0)"]
[[package]]
name = "h5py"
version = "3.7.0"
description = "Read and write HDF5 files from Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
numpy = ">=1.14.5"
[[package]]
name = "idna"
version = "3.3"
description = "Internationalized Domain Names in Applications (IDNA)"
category = "dev"
optional = false
python-versions = ">=3.5"
[[package]]
name = "imagesize"
version = "1.4.1"
description = "Getting image size from png/jpeg/jpeg2000/gif file"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "importlib-metadata"
version = "4.12.0"
description = "Read metadata from Python packages"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
zipp = ">=0.5"
[package.extras]
docs = ["sphinx", "jaraco.packaging (>=9)", "rst.linker (>=1.9)"]
perf = ["ipython"]
testing = ["pytest (>=6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest-cov", "pytest-enabler (>=1.3)", "packaging", "pyfakefs", "flufl.flake8", "pytest-perf (>=0.9.2)", "pytest-black (>=0.3.7)", "pytest-mypy (>=0.9.1)", "importlib-resources (>=1.3)"]
[[package]]
name = "importlib-resources"
version = "5.9.0"
description = "Read resources from Python packages"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
zipp = {version = ">=3.1.0", markers = "python_version < \"3.10\""}
[package.extras]
docs = ["sphinx", "jaraco.packaging (>=9)", "rst.linker (>=1.9)", "jaraco.tidelift (>=1.4)"]
testing = ["pytest (>=6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-black (>=0.3.7)", "pytest-mypy (>=0.9.1)"]
[[package]]
name = "iniconfig"
version = "1.1.1"
description = "iniconfig: brain-dead simple config-ini parsing"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "ipykernel"
version = "6.15.1"
description = "IPython Kernel for Jupyter"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
appnope = {version = "*", markers = "platform_system == \"Darwin\""}
debugpy = ">=1.0"
ipython = ">=7.23.1"
jupyter-client = ">=6.1.12"
matplotlib-inline = ">=0.1"
nest-asyncio = "*"
packaging = "*"
psutil = "*"
pyzmq = ">=17"
tornado = ">=6.1"
traitlets = ">=5.1.0"
[package.extras]
test = ["flaky", "ipyparallel", "pre-commit", "pytest-cov", "pytest-timeout", "pytest (>=6.0)"]
[[package]]
name = "ipython"
version = "8.4.0"
description = "IPython: Productive Interactive Computing"
category = "dev"
optional = false
python-versions = ">=3.8"
[package.dependencies]
appnope = {version = "*", markers = "sys_platform == \"darwin\""}
backcall = "*"
colorama = {version = "*", markers = "sys_platform == \"win32\""}
decorator = "*"
jedi = ">=0.16"
matplotlib-inline = "*"
pexpect = {version = ">4.3", markers = "sys_platform != \"win32\""}
pickleshare = "*"
prompt-toolkit = ">=2.0.0,<3.0.0 || >3.0.0,<3.0.1 || >3.0.1,<3.1.0"
pygments = ">=2.4.0"
stack-data = "*"
traitlets = ">=5"
[package.extras]
all = ["black", "Sphinx (>=1.3)", "ipykernel", "nbconvert", "nbformat", "ipywidgets", "notebook", "ipyparallel", "qtconsole", "pytest (<7.1)", "pytest-asyncio", "testpath", "curio", "matplotlib (!=3.2.0)", "numpy (>=1.19)", "pandas", "trio"]
black = ["black"]
doc = ["Sphinx (>=1.3)"]
kernel = ["ipykernel"]
nbconvert = ["nbconvert"]
nbformat = ["nbformat"]
notebook = ["ipywidgets", "notebook"]
parallel = ["ipyparallel"]
qtconsole = ["qtconsole"]
test = ["pytest (<7.1)", "pytest-asyncio", "testpath"]
test_extra = ["pytest (<7.1)", "pytest-asyncio", "testpath", "curio", "matplotlib (!=3.2.0)", "nbformat", "numpy (>=1.19)", "pandas", "trio"]
[[package]]
name = "ipython-genutils"
version = "0.2.0"
description = "Vestigial utilities from IPython"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "ipywidgets"
version = "8.0.1"
description = "Jupyter interactive widgets"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
ipykernel = ">=4.5.1"
ipython = ">=6.1.0"
jupyterlab-widgets = ">=3.0,<4.0"
traitlets = ">=4.3.1"
widgetsnbextension = ">=4.0,<5.0"
[package.extras]
test = ["jsonschema", "pytest (>=3.6.0)", "pytest-cov", "pytz"]
[[package]]
name = "isort"
version = "5.10.1"
description = "A Python utility / library to sort Python imports."
category = "dev"
optional = false
python-versions = ">=3.6.1,<4.0"
[package.extras]
pipfile_deprecated_finder = ["pipreqs", "requirementslib"]
requirements_deprecated_finder = ["pipreqs", "pip-api"]
colors = ["colorama (>=0.4.3,<0.5.0)"]
plugins = ["setuptools"]
[[package]]
name = "jedi"
version = "0.18.1"
description = "An autocompletion tool for Python that can be used for text editors."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
parso = ">=0.8.0,<0.9.0"
[package.extras]
qa = ["flake8 (==3.8.3)", "mypy (==0.782)"]
testing = ["Django (<3.1)", "colorama", "docopt", "pytest (<7.0.0)"]
[[package]]
name = "jeepney"
version = "0.8.0"
description = "Low-level, pure Python DBus protocol wrapper."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
trio = ["async-generator", "trio"]
test = ["async-timeout", "trio", "testpath", "pytest-asyncio (>=0.17)", "pytest-trio", "pytest"]
[[package]]
name = "jinja2"
version = "3.1.2"
description = "A very fast and expressive template engine."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
MarkupSafe = ">=2.0"
[package.extras]
i18n = ["Babel (>=2.7)"]
[[package]]
name = "joblib"
version = "1.1.0"
description = "Lightweight pipelining with Python functions"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "jsonschema"
version = "4.13.0"
description = "An implementation of JSON Schema validation for Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
attrs = ">=17.4.0"
importlib-resources = {version = ">=1.4.0", markers = "python_version < \"3.9\""}
pkgutil-resolve-name = {version = ">=1.3.10", markers = "python_version < \"3.9\""}
pyrsistent = ">=0.14.0,<0.17.0 || >0.17.0,<0.17.1 || >0.17.1,<0.17.2 || >0.17.2"
[package.extras]
format-nongpl = ["webcolors (>=1.11)", "uri-template", "rfc3986-validator (>0.1.0)", "rfc3339-validator", "jsonpointer (>1.13)", "isoduration", "idna", "fqdn"]
format = ["webcolors (>=1.11)", "uri-template", "rfc3987", "rfc3339-validator", "jsonpointer (>1.13)", "isoduration", "idna", "fqdn"]
[[package]]
name = "jupyter"
version = "1.0.0"
description = "Jupyter metapackage. Install all the Jupyter components in one go."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
ipykernel = "*"
ipywidgets = "*"
jupyter-console = "*"
nbconvert = "*"
notebook = "*"
qtconsole = "*"
[[package]]
name = "jupyter-client"
version = "7.3.4"
description = "Jupyter protocol implementation and client libraries"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
entrypoints = "*"
jupyter-core = ">=4.9.2"
nest-asyncio = ">=1.5.4"
python-dateutil = ">=2.8.2"
pyzmq = ">=23.0"
tornado = ">=6.0"
traitlets = "*"
[package.extras]
doc = ["ipykernel", "myst-parser", "sphinx-rtd-theme", "sphinx (>=1.3.6)", "sphinxcontrib-github-alt"]
test = ["codecov", "coverage", "ipykernel (>=6.5)", "ipython", "mypy", "pre-commit", "pytest", "pytest-asyncio (>=0.18)", "pytest-cov", "pytest-timeout"]
[[package]]
name = "jupyter-console"
version = "6.4.4"
description = "Jupyter terminal console"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
ipykernel = "*"
ipython = "*"
jupyter-client = ">=7.0.0"
prompt-toolkit = ">=2.0.0,<3.0.0 || >3.0.0,<3.0.1 || >3.0.1,<3.1.0"
pygments = "*"
[package.extras]
test = ["pexpect"]
[[package]]
name = "jupyter-core"
version = "4.11.1"
description = "Jupyter core package. A base package on which Jupyter projects rely."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
pywin32 = {version = ">=1.0", markers = "sys_platform == \"win32\" and platform_python_implementation != \"PyPy\""}
traitlets = "*"
[package.extras]
test = ["ipykernel", "pre-commit", "pytest", "pytest-cov", "pytest-timeout"]
[[package]]
name = "jupyterlab-pygments"
version = "0.2.2"
description = "Pygments theme using JupyterLab CSS variables"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "jupyterlab-widgets"
version = "3.0.2"
description = "Jupyter interactive widgets for JupyterLab"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "keras"
version = "2.9.0"
description = "Deep learning for humans."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "keras-preprocessing"
version = "1.1.2"
description = "Easy data preprocessing and data augmentation for deep learning models"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
numpy = ">=1.9.1"
six = ">=1.9.0"
[package.extras]
tests = ["pytest-cov", "pytest-xdist", "pytest", "keras", "tensorflow", "pillow", "pandas"]
pep8 = ["flake8"]
image = ["Pillow (>=5.2.0)", "scipy (>=0.14)"]
[[package]]
name = "keyring"
version = "23.8.2"
description = "Store and access your passwords safely."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
importlib-metadata = {version = ">=3.6", markers = "python_version < \"3.10\""}
jeepney = {version = ">=0.4.2", markers = "sys_platform == \"linux\""}
pywin32-ctypes = {version = "<0.1.0 || >0.1.0,<0.1.1 || >0.1.1", markers = "sys_platform == \"win32\""}
SecretStorage = {version = ">=3.2", markers = "sys_platform == \"linux\""}
[package.extras]
testing = ["pytest-mypy (>=0.9.1)", "pytest-black (>=0.3.7)", "pytest-enabler (>=1.3)", "pytest-cov", "flake8 (<5)", "pytest-flake8", "pytest-checkdocs (>=2.4)", "pytest (>=6)"]
docs = ["jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "jaraco.packaging (>=9)", "sphinx"]
[[package]]
name = "kiwisolver"
version = "1.4.4"
description = "A fast implementation of the Cassowary constraint solver"
category = "main"
optional = true
python-versions = ">=3.7"
[[package]]
name = "libclang"
version = "14.0.6"
description = "Clang Python Bindings, mirrored from the official LLVM repo: https://github.com/llvm/llvm-project/tree/main/clang/bindings/python, to make the installation process easier."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "lightgbm"
version = "3.3.2"
description = "LightGBM Python Package"
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
numpy = "*"
scikit-learn = "!=0.22.0"
scipy = "*"
[package.extras]
dask = ["dask[array] (>=2.0.0)", "dask[dataframe] (>=2.0.0)", "dask[distributed] (>=2.0.0)", "pandas"]
[[package]]
name = "llvmlite"
version = "0.36.0"
description = "lightweight wrapper around basic LLVM functionality"
category = "main"
optional = false
python-versions = ">=3.6,<3.10"
[[package]]
name = "markdown"
version = "3.4.1"
description = "Python implementation of Markdown."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
importlib-metadata = {version = ">=4.4", markers = "python_version < \"3.10\""}
[package.extras]
testing = ["coverage", "pyyaml"]
[[package]]
name = "markupsafe"
version = "2.1.1"
description = "Safely add untrusted strings to HTML/XML markup."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "matplotlib"
version = "3.5.3"
description = "Python plotting package"
category = "main"
optional = true
python-versions = ">=3.7"
[package.dependencies]
cycler = ">=0.10"
fonttools = ">=4.22.0"
kiwisolver = ">=1.0.1"
numpy = ">=1.17"
packaging = ">=20.0"
pillow = ">=6.2.0"
pyparsing = ">=2.2.1"
python-dateutil = ">=2.7"
setuptools_scm = ">=4,<7"
[[package]]
name = "matplotlib-inline"
version = "0.1.6"
description = "Inline Matplotlib backend for Jupyter"
category = "dev"
optional = false
python-versions = ">=3.5"
[package.dependencies]
traitlets = "*"
[[package]]
name = "mccabe"
version = "0.6.1"
description = "McCabe checker, plugin for flake8"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "mistune"
version = "2.0.4"
description = "A sane Markdown parser with useful plugins and renderers"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "mpmath"
version = "1.2.1"
description = "Python library for arbitrary-precision floating-point arithmetic"
category = "main"
optional = false
python-versions = "*"
[package.extras]
tests = ["pytest (>=4.6)"]
develop = ["wheel", "codecov", "pytest-cov", "pycodestyle", "pytest (>=4.6)"]
[[package]]
name = "mypy"
version = "0.971"
description = "Optional static typing for Python"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
mypy-extensions = ">=0.4.3"
tomli = {version = ">=1.1.0", markers = "python_version < \"3.11\""}
typing-extensions = ">=3.10"
[package.extras]
dmypy = ["psutil (>=4.0)"]
python2 = ["typed-ast (>=1.4.0,<2)"]
reports = ["lxml"]
[[package]]
name = "mypy-extensions"
version = "0.4.3"
description = "Experimental type system extensions for programs checked with the mypy typechecker."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "nbclient"
version = "0.6.6"
description = "A client library for executing notebooks. Formerly nbconvert's ExecutePreprocessor."
category = "dev"
optional = false
python-versions = ">=3.7.0"
[package.dependencies]
jupyter-client = ">=6.1.5"
nbformat = ">=5.0"
nest-asyncio = "*"
traitlets = ">=5.2.2"
[package.extras]
sphinx = ["autodoc-traits", "mock", "moto", "myst-parser", "Sphinx (>=1.7)", "sphinx-book-theme"]
test = ["black", "check-manifest", "flake8", "ipykernel", "ipython (<8.0.0)", "ipywidgets (<8.0.0)", "mypy", "pip (>=18.1)", "pre-commit", "pytest (>=4.1)", "pytest-asyncio", "pytest-cov (>=2.6.1)", "setuptools (>=60.0)", "testpath", "twine (>=1.11.0)", "xmltodict"]
[[package]]
name = "nbconvert"
version = "7.0.0rc3"
description = "Converting Jupyter Notebooks"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
beautifulsoup4 = "*"
bleach = "*"
defusedxml = "*"
importlib-metadata = {version = ">=3.6", markers = "python_version < \"3.10\""}
jinja2 = ">=3.0"
jupyter-core = ">=4.7"
jupyterlab-pygments = "*"
markupsafe = ">=2.0"
mistune = ">=2.0.2,<3"
nbclient = ">=0.5.0"
nbformat = ">=5.1"
packaging = "*"
pandocfilters = ">=1.4.1"
pygments = ">=2.4.1"
tinycss2 = "*"
traitlets = ">=5.0"
[package.extras]
all = ["ipykernel", "ipython", "ipywidgets (>=7)", "nbsphinx (>=0.2.12)", "pre-commit", "pyppeteer (>=1,<1.1)", "pytest", "pytest-cov", "pytest-dependency", "sphinx-rtd-theme", "sphinx (>=1.5.1)", "tornado (>=6.1)"]
docs = ["ipython", "nbsphinx (>=0.2.12)", "sphinx-rtd-theme", "sphinx (>=1.5.1)"]
serve = ["tornado (>=6.1)"]
test = ["ipykernel", "ipywidgets (>=7)", "pre-commit", "pyppeteer (>=1,<1.1)", "pytest", "pytest-cov", "pytest-dependency"]
webpdf = ["pyppeteer (>=1,<1.1)"]
[[package]]
name = "nbformat"
version = "5.4.0"
description = "The Jupyter Notebook format"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
fastjsonschema = "*"
jsonschema = ">=2.6"
jupyter-core = "*"
traitlets = ">=5.1"
[package.extras]
test = ["check-manifest", "testpath", "pytest", "pre-commit"]
[[package]]
name = "nbsphinx"
version = "0.8.9"
description = "Jupyter Notebook Tools for Sphinx"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
docutils = "*"
jinja2 = "*"
nbconvert = "!=5.4"
nbformat = "*"
sphinx = ">=1.8"
traitlets = ">=5"
[[package]]
name = "nest-asyncio"
version = "1.5.5"
description = "Patch asyncio to allow nested event loops"
category = "dev"
optional = false
python-versions = ">=3.5"
[[package]]
name = "networkx"
version = "2.8.5"
description = "Python package for creating and manipulating graphs and networks"
category = "main"
optional = false
python-versions = ">=3.8"
[package.extras]
default = ["numpy (>=1.19)", "scipy (>=1.8)", "matplotlib (>=3.4)", "pandas (>=1.3)"]
developer = ["pre-commit (>=2.19)", "mypy (>=0.960)"]
doc = ["sphinx (>=5)", "pydata-sphinx-theme (>=0.9)", "sphinx-gallery (>=0.10)", "numpydoc (>=1.4)", "pillow (>=9.1)", "nb2plots (>=0.6)", "texext (>=0.6.6)"]
extra = ["lxml (>=4.6)", "pygraphviz (>=1.9)", "pydot (>=1.4.2)", "sympy (>=1.10)"]
test = ["pytest (>=7.1)", "pytest-cov (>=3.0)", "codecov (>=2.1)"]
[[package]]
name = "notebook"
version = "6.4.12"
description = "A web-based notebook environment for interactive computing"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
argon2-cffi = "*"
ipykernel = "*"
ipython-genutils = "*"
jinja2 = "*"
jupyter-client = ">=5.3.4"
jupyter-core = ">=4.6.1"
nbconvert = ">=5"
nbformat = "*"
nest-asyncio = ">=1.5"
prometheus-client = "*"
pyzmq = ">=17"
Send2Trash = ">=1.8.0"
terminado = ">=0.8.3"
tornado = ">=6.1"
traitlets = ">=4.2.1"
[package.extras]
docs = ["sphinx", "nbsphinx", "sphinxcontrib-github-alt", "sphinx-rtd-theme", "myst-parser"]
json-logging = ["json-logging"]
test = ["pytest", "coverage", "requests", "testpath", "nbval", "selenium", "pytest-cov", "requests-unixsocket"]
[[package]]
name = "numba"
version = "0.53.1"
description = "compiling Python code using LLVM"
category = "main"
optional = false
python-versions = ">=3.6,<3.10"
[package.dependencies]
llvmlite = ">=0.36.0rc1,<0.37"
numpy = ">=1.15"
[[package]]
name = "numpy"
version = "1.23.2"
description = "NumPy is the fundamental package for array computing with Python."
category = "main"
optional = false
python-versions = ">=3.8"
[[package]]
name = "oauthlib"
version = "3.2.0"
description = "A generic, spec-compliant, thorough implementation of the OAuth request-signing logic"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
rsa = ["cryptography (>=3.0.0)"]
signals = ["blinker (>=1.4.0)"]
signedtoken = ["cryptography (>=3.0.0)", "pyjwt (>=2.0.0,<3)"]
[[package]]
name = "opt-einsum"
version = "3.3.0"
description = "Optimizing numpys einsum function"
category = "main"
optional = false
python-versions = ">=3.5"
[package.dependencies]
numpy = ">=1.7"
[package.extras]
docs = ["sphinx (==1.2.3)", "sphinxcontrib-napoleon", "sphinx-rtd-theme", "numpydoc"]
tests = ["pytest", "pytest-cov", "pytest-pep8"]
[[package]]
name = "packaging"
version = "21.3"
description = "Core utilities for Python packages"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
pyparsing = ">=2.0.2,<3.0.5 || >3.0.5"
[[package]]
name = "pandas"
version = "1.4.3"
description = "Powerful data structures for data analysis, time series, and statistics"
category = "main"
optional = false
python-versions = ">=3.8"
[package.dependencies]
numpy = [
{version = ">=1.18.5", markers = "platform_machine != \"aarch64\" and platform_machine != \"arm64\" and python_version < \"3.10\""},
{version = ">=1.19.2", markers = "platform_machine == \"aarch64\" and python_version < \"3.10\""},
{version = ">=1.20.0", markers = "platform_machine == \"arm64\" and python_version < \"3.10\""},
]
python-dateutil = ">=2.8.1"
pytz = ">=2020.1"
[package.extras]
test = ["hypothesis (>=5.5.3)", "pytest (>=6.0)", "pytest-xdist (>=1.31)"]
[[package]]
name = "pandoc"
version = "2.2"
description = "Pandoc Documents for Python"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
plumbum = "*"
ply = "*"
[[package]]
name = "pandocfilters"
version = "1.5.0"
description = "Utilities for writing pandoc filters in python"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "parso"
version = "0.8.3"
description = "A Python Parser"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
qa = ["flake8 (==3.8.3)", "mypy (==0.782)"]
testing = ["docopt", "pytest (<6.0.0)"]
[[package]]
name = "pastel"
version = "0.2.1"
description = "Bring colors to your terminal."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pathspec"
version = "0.9.0"
description = "Utility library for gitignore style pattern matching of file paths."
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7"
[[package]]
name = "patsy"
version = "0.5.2"
description = "A Python package for describing statistical models and for building design matrices."
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
numpy = ">=1.4"
six = "*"
[package.extras]
test = ["scipy", "pytest-cov", "pytest"]
[[package]]
name = "pexpect"
version = "4.8.0"
description = "Pexpect allows easy control of interactive console applications."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
ptyprocess = ">=0.5"
[[package]]
name = "pickleshare"
version = "0.7.5"
description = "Tiny 'shelve'-like database with concurrency support"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pillow"
version = "9.2.0"
description = "Python Imaging Library (Fork)"
category = "main"
optional = true
python-versions = ">=3.7"
[package.extras]
docs = ["furo", "olefile", "sphinx (>=2.4)", "sphinx-copybutton", "sphinx-issues (>=3.0.1)", "sphinx-removed-in", "sphinxext-opengraph"]
tests = ["check-manifest", "coverage", "defusedxml", "markdown2", "olefile", "packaging", "pyroma", "pytest", "pytest-cov", "pytest-timeout"]
[[package]]
name = "pkginfo"
version = "1.8.3"
description = "Query metadatdata from sdists / bdists / installed packages."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*"
[package.extras]
testing = ["nose", "coverage"]
[[package]]
name = "pkgutil-resolve-name"
version = "1.3.10"
description = "Resolve a name to an object."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "platformdirs"
version = "2.5.2"
description = "A small Python module for determining appropriate platform-specific dirs, e.g. a \"user data dir\"."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
docs = ["furo (>=2021.7.5b38)", "proselint (>=0.10.2)", "sphinx-autodoc-typehints (>=1.12)", "sphinx (>=4)"]
test = ["appdirs (==1.4.4)", "pytest-cov (>=2.7)", "pytest-mock (>=3.6)", "pytest (>=6)"]
[[package]]
name = "pluggy"
version = "1.0.0"
description = "plugin and hook calling mechanisms for python"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
testing = ["pytest-benchmark", "pytest"]
dev = ["tox", "pre-commit"]
[[package]]
name = "plumbum"
version = "1.7.2"
description = "Plumbum: shell combinators library"
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7"
[package.dependencies]
pywin32 = {version = "*", markers = "platform_system == \"Windows\" and platform_python_implementation != \"PyPy\""}
[package.extras]
dev = ["paramiko", "psutil", "pytest", "pytest-cov", "pytest-mock", "pytest-timeout"]
docs = ["Sphinx (>=3.0.0)", "sphinx-rtd-theme (>=0.5.0)"]
ssh = ["paramiko"]
[[package]]
name = "ply"
version = "3.11"
description = "Python Lex & Yacc"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "poethepoet"
version = "0.16.0"
description = "A task runner that works well with poetry."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
pastel = ">=0.2.1,<0.3.0"
tomli = ">=1.2.2"
[package.extras]
poetry_plugin = ["poetry (>=1.0,<2.0)"]
[[package]]
name = "progressbar2"
version = "4.0.0"
description = "A Python Progressbar library to provide visual (yet text based) progress to long running operations."
category = "main"
optional = true
python-versions = ">=3.7.0"
[package.dependencies]
python-utils = ">=3.0.0"
[package.extras]
docs = ["sphinx (>=1.8.5)"]
tests = ["flake8 (>=3.7.7)", "pytest (>=4.6.9)", "pytest-cov (>=2.6.1)", "pytest-mypy", "freezegun (>=0.3.11)", "sphinx (>=1.8.5)"]
[[package]]
name = "prometheus-client"
version = "0.14.1"
description = "Python client for the Prometheus monitoring system."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
twisted = ["twisted"]
[[package]]
name = "prompt-toolkit"
version = "3.0.30"
description = "Library for building powerful interactive command lines in Python"
category = "dev"
optional = false
python-versions = ">=3.6.2"
[package.dependencies]
wcwidth = "*"
[[package]]
name = "protobuf"
version = "3.19.4"
description = "Protocol Buffers"
category = "dev"
optional = false
python-versions = ">=3.5"
[[package]]
name = "psutil"
version = "5.9.1"
description = "Cross-platform lib for process and system monitoring in Python."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.extras]
test = ["ipaddress", "mock", "enum34", "pywin32", "wmi"]
[[package]]
name = "ptyprocess"
version = "0.7.0"
description = "Run a subprocess in a pseudo terminal"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pure-eval"
version = "0.2.2"
description = "Safely evaluate AST nodes without side effects"
category = "dev"
optional = false
python-versions = "*"
[package.extras]
tests = ["pytest"]
[[package]]
name = "py"
version = "1.11.0"
description = "library with cross-python path, ini-parsing, io, code, log facilities"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "pyasn1"
version = "0.4.8"
description = "ASN.1 types and codecs"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pyasn1-modules"
version = "0.2.8"
description = "A collection of ASN.1-based protocols modules."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
pyasn1 = ">=0.4.6,<0.5.0"
[[package]]
name = "pycodestyle"
version = "2.8.0"
description = "Python style guide checker"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "pycparser"
version = "2.21"
description = "C parser in Python"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pydata-sphinx-theme"
version = "0.9.0"
description = "Bootstrap-based Sphinx theme from the PyData community"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
beautifulsoup4 = "*"
docutils = "!=0.17.0"
packaging = "*"
sphinx = ">=4.0.2"
[package.extras]
dev = ["pydata-sphinx-theme", "nox", "pre-commit", "pyyaml"]
coverage = ["pydata-sphinx-theme", "codecov", "pytest-cov"]
test = ["pydata-sphinx-theme", "pytest"]
doc = ["sphinx-design", "xarray", "numpy", "plotly", "jupyter-sphinx", "sphinx-sitemap", "sphinxext-rediraffe", "pytest-regressions", "pytest", "pandas", "myst-parser", "numpydoc"]
[[package]]
name = "pydot"
version = "1.4.2"
description = "Python interface to Graphviz's Dot"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.dependencies]
pyparsing = ">=2.1.4"
[[package]]
name = "pydotplus"
version = "2.0.2"
description = "Python interface to Graphviz's Dot language"
category = "main"
optional = true
python-versions = "*"
[package.dependencies]
pyparsing = ">=2.0.1"
[[package]]
name = "pyflakes"
version = "2.4.0"
description = "passive checker of Python programs"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pygam"
version = "0.8.0"
description = "GAM toolkit"
category = "main"
optional = true
python-versions = "*"
[package.dependencies]
future = "*"
numpy = "*"
progressbar2 = "*"
scipy = "*"
[[package]]
name = "pygments"
version = "2.13.0"
description = "Pygments is a syntax highlighting package written in Python."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
plugins = ["importlib-metadata"]
[[package]]
name = "pygraphviz"
version = "1.10"
description = "Python interface to Graphviz"
category = "main"
optional = false
python-versions = ">=3.8"
[[package]]
name = "pyparsing"
version = "3.0.9"
description = "pyparsing module - Classes and methods to define and execute parsing grammars"
category = "main"
optional = false
python-versions = ">=3.6.8"
[package.extras]
diagrams = ["railroad-diagrams", "jinja2"]
[[package]]
name = "pyro-api"
version = "0.1.2"
description = "Generic API for dispatch to Pyro backends."
category = "main"
optional = true
python-versions = "*"
[package.extras]
test = ["pytest (>=5.0)", "flake8"]
dev = ["ipython", "sphinx-rtd-theme", "sphinx (>=2.0)"]
[[package]]
name = "pyro-ppl"
version = "1.8.1"
description = "A Python library for probabilistic modeling and inference"
category = "main"
optional = true
python-versions = ">=3.7"
[package.dependencies]
numpy = ">=1.7"
opt-einsum = ">=2.3.2"
pyro-api = ">=0.1.1"
torch = ">=1.11.0"
tqdm = ">=4.36"
[package.extras]
dev = ["jupyter (>=1.0.0)", "graphviz (>=0.8)", "matplotlib (>=1.3)", "torchvision (>=0.12.0)", "visdom (>=0.1.4)", "pandas", "pillow (==8.2.0)", "scikit-learn", "seaborn", "wget", "lap", "black (>=21.4b0)", "flake8", "isort (>=5.0)", "mypy (>=0.812)", "nbformat", "nbsphinx (>=0.3.2)", "nbstripout", "nbval", "ninja", "pypandoc", "pytest (>=5.0)", "pytest-xdist", "scipy (>=1.1)", "sphinx", "sphinx-rtd-theme", "yapf"]
extras = ["jupyter (>=1.0.0)", "graphviz (>=0.8)", "matplotlib (>=1.3)", "torchvision (>=0.12.0)", "visdom (>=0.1.4)", "pandas", "pillow (==8.2.0)", "scikit-learn", "seaborn", "wget", "lap"]
funsor = ["funsor[torch] (==0.4.3)"]
horovod = ["horovod[pytorch] (>=0.19)"]
profile = ["prettytable", "pytest-benchmark", "snakeviz"]
test = ["jupyter (>=1.0.0)", "graphviz (>=0.8)", "matplotlib (>=1.3)", "torchvision (>=0.12.0)", "visdom (>=0.1.4)", "pandas", "pillow (==8.2.0)", "scikit-learn", "seaborn", "wget", "lap", "black (>=21.4b0)", "flake8", "nbval", "pytest (>=5.0)", "pytest-cov", "scipy (>=1.1)"]
[[package]]
name = "pyrsistent"
version = "0.18.1"
description = "Persistent/Functional/Immutable data structures"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "pytest"
version = "7.1.2"
description = "pytest: simple powerful testing with Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
atomicwrites = {version = ">=1.0", markers = "sys_platform == \"win32\""}
attrs = ">=19.2.0"
colorama = {version = "*", markers = "sys_platform == \"win32\""}
iniconfig = "*"
packaging = "*"
pluggy = ">=0.12,<2.0"
py = ">=1.8.2"
tomli = ">=1.0.0"
[package.extras]
testing = ["argcomplete", "hypothesis (>=3.56)", "mock", "nose", "pygments (>=2.7.2)", "requests", "xmlschema"]
[[package]]
name = "pytest-cov"
version = "3.0.0"
description = "Pytest plugin for measuring coverage."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
coverage = {version = ">=5.2.1", extras = ["toml"]}
pytest = ">=4.6"
[package.extras]
testing = ["virtualenv", "pytest-xdist", "six", "process-tests", "hunter", "fields"]
[[package]]
name = "pytest-split"
version = "0.8.0"
description = "Pytest plugin which splits the test suite to equally sized sub suites based on test execution time."
category = "dev"
optional = false
python-versions = ">=3.7.1,<4.0"
[package.dependencies]
pytest = ">=5,<8"
[[package]]
name = "python-dateutil"
version = "2.8.2"
description = "Extensions to the standard Python datetime module"
category = "main"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7"
[package.dependencies]
six = ">=1.5"
[[package]]
name = "python-utils"
version = "3.3.3"
description = "Python Utils is a module with some convenient utilities not included with the standard Python install"
category = "main"
optional = true
python-versions = ">3.6.0"
[package.extras]
docs = ["mock", "sphinx", "python-utils"]
loguru = ["loguru"]
tests = ["flake8", "pytest", "pytest-cov", "pytest-mypy", "pytest-asyncio", "sphinx", "types-setuptools", "loguru"]
[[package]]
name = "pytz"
version = "2022.2.1"
description = "World timezone definitions, modern and historical"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "pytz-deprecation-shim"
version = "0.1.0.post0"
description = "Shims to make deprecation of pytz easier"
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.7"
[package.dependencies]
"backports.zoneinfo" = {version = "*", markers = "python_version >= \"3.6\" and python_version < \"3.9\""}
tzdata = {version = "*", markers = "python_version >= \"3.6\""}
[[package]]
name = "pywin32"
version = "304"
description = "Python for Window Extensions"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pywin32-ctypes"
version = "0.2.0"
description = ""
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pywinpty"
version = "2.0.7"
description = "Pseudo terminal support for Windows from Python."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "pyzmq"
version = "23.2.1"
description = "Python bindings for 0MQ"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cffi = {version = "*", markers = "implementation_name == \"pypy\""}
py = {version = "*", markers = "implementation_name == \"pypy\""}
[[package]]
name = "qtconsole"
version = "5.3.1"
description = "Jupyter Qt console"
category = "dev"
optional = false
python-versions = ">= 3.7"
[package.dependencies]
ipykernel = ">=4.1"
ipython-genutils = "*"
jupyter-client = ">=4.1"
jupyter-core = "*"
pygments = "*"
pyzmq = ">=17.1"
qtpy = ">=2.0.1"
traitlets = "<5.2.1 || >5.2.1,<5.2.2 || >5.2.2"
[package.extras]
doc = ["Sphinx (>=1.3)"]
test = ["flaky", "pytest", "pytest-qt"]
[[package]]
name = "qtpy"
version = "2.2.0"
description = "Provides an abstraction layer on top of the various Qt bindings (PyQt5/6 and PySide2/6)."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
packaging = "*"
[package.extras]
test = ["pytest-qt", "pytest-cov (>=3.0.0)", "pytest (>=6,!=7.0.0,!=7.0.1)"]
[[package]]
name = "readme-renderer"
version = "36.0"
description = "readme_renderer is a library for rendering \"readme\" descriptions for Warehouse"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
bleach = ">=2.1.0"
docutils = ">=0.13.1"
Pygments = ">=2.5.1"
[package.extras]
md = ["cmarkgfm (>=0.8.0)"]
[[package]]
name = "requests"
version = "2.28.1"
description = "Python HTTP for Humans."
category = "dev"
optional = false
python-versions = ">=3.7, <4"
[package.dependencies]
certifi = ">=2017.4.17"
charset-normalizer = ">=2,<3"
idna = ">=2.5,<4"
urllib3 = ">=1.21.1,<1.27"
[package.extras]
socks = ["PySocks (>=1.5.6,!=1.5.7)"]
use_chardet_on_py3 = ["chardet (>=3.0.2,<6)"]
[[package]]
name = "requests-oauthlib"
version = "1.3.1"
description = "OAuthlib authentication support for Requests."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.dependencies]
oauthlib = ">=3.0.0"
requests = ">=2.0.0"
[package.extras]
rsa = ["oauthlib[signedtoken] (>=3.0.0)"]
[[package]]
name = "requests-toolbelt"
version = "0.9.1"
description = "A utility belt for advanced users of python-requests"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
requests = ">=2.0.1,<3.0.0"
[[package]]
name = "rfc3986"
version = "2.0.0"
description = "Validating URI References per RFC 3986"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
idna2008 = ["idna"]
[[package]]
name = "rich"
version = "12.5.1"
description = "Render rich text, tables, progress bars, syntax highlighting, markdown and more to the terminal"
category = "dev"
optional = false
python-versions = ">=3.6.3,<4.0.0"
[package.dependencies]
commonmark = ">=0.9.0,<0.10.0"
pygments = ">=2.6.0,<3.0.0"
typing-extensions = {version = ">=4.0.0,<5.0", markers = "python_version < \"3.9\""}
[package.extras]
jupyter = ["ipywidgets (>=7.5.1,<8.0.0)"]
[[package]]
name = "rpy2"
version = "3.5.3"
description = "Python interface to the R language (embedded R)"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
cffi = ">=1.10.0"
jinja2 = "*"
packaging = {version = "*", markers = "platform_system == \"Windows\""}
pytz = "*"
tzlocal = "*"
[package.extras]
all = ["numpy", "pytest", "setuptools", "pandas"]
numpy = ["pandas"]
pandas = ["numpy", "pandas"]
setup = ["setuptools"]
test = ["pytest"]
[[package]]
name = "rsa"
version = "4.9"
description = "Pure-Python RSA implementation"
category = "dev"
optional = false
python-versions = ">=3.6,<4"
[package.dependencies]
pyasn1 = ">=0.1.3"
[[package]]
name = "scikit-learn"
version = "1.0.2"
description = "A set of python modules for machine learning and data mining"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
joblib = ">=0.11"
numpy = ">=1.14.6"
scipy = ">=1.1.0"
threadpoolctl = ">=2.0.0"
[package.extras]
tests = ["pyamg (>=4.0.0)", "mypy (>=0.770)", "black (>=21.6b0)", "flake8 (>=3.8.2)", "pytest-cov (>=2.9.0)", "pytest (>=5.0.1)", "pandas (>=0.25.0)", "scikit-image (>=0.14.5)", "matplotlib (>=2.2.3)"]
examples = ["seaborn (>=0.9.0)", "pandas (>=0.25.0)", "scikit-image (>=0.14.5)", "matplotlib (>=2.2.3)"]
docs = ["sphinxext-opengraph (>=0.4.2)", "sphinx-prompt (>=1.3.0)", "Pillow (>=7.1.2)", "numpydoc (>=1.0.0)", "sphinx-gallery (>=0.7.0)", "sphinx (>=4.0.1)", "memory-profiler (>=0.57.0)", "seaborn (>=0.9.0)", "pandas (>=0.25.0)", "scikit-image (>=0.14.5)", "matplotlib (>=2.2.3)"]
benchmark = ["memory-profiler (>=0.57.0)", "pandas (>=0.25.0)", "matplotlib (>=2.2.3)"]
[[package]]
name = "scipy"
version = "1.9.0"
description = "SciPy: Scientific Library for Python"
category = "main"
optional = false
python-versions = ">=3.8,<3.12"
[package.dependencies]
numpy = ">=1.18.5,<1.25.0"
[[package]]
name = "seaborn"
version = "0.11.2"
description = "seaborn: statistical data visualization"
category = "main"
optional = true
python-versions = ">=3.6"
[package.dependencies]
matplotlib = ">=2.2"
numpy = ">=1.15"
pandas = ">=0.23"
scipy = ">=1.0"
[[package]]
name = "secretstorage"
version = "3.3.3"
description = "Python bindings to FreeDesktop.org Secret Service API"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cryptography = ">=2.0"
jeepney = ">=0.6"
[[package]]
name = "send2trash"
version = "1.8.0"
description = "Send file to trash natively under Mac OS X, Windows and Linux."
category = "dev"
optional = false
python-versions = "*"
[package.extras]
win32 = ["pywin32"]
objc = ["pyobjc-framework-cocoa"]
nativelib = ["pywin32", "pyobjc-framework-cocoa"]
[[package]]
name = "setuptools-scm"
version = "6.4.2"
description = "the blessed package to manage your versions by scm tags"
category = "main"
optional = true
python-versions = ">=3.6"
[package.dependencies]
packaging = ">=20.0"
tomli = ">=1.0.0"
[package.extras]
toml = ["setuptools (>=42)"]
test = ["virtualenv (>20)", "pytest (>=6.2)"]
[[package]]
name = "shap"
version = "0.39.0"
description = "A unified approach to explain the output of any machine learning model."
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
cloudpickle = "*"
numba = "*"
numpy = "*"
pandas = "*"
scikit-learn = "*"
scipy = "*"
slicer = "0.0.7"
tqdm = ">4.25.0"
[package.extras]
all = ["pyod", "pytest", "sphinx-rtd-theme", "sentencepiece", "transformers", "lightgbm", "torch", "numpydoc", "nbsphinx", "opencv-python", "ipython", "pytest-mpl", "matplotlib", "xgboost", "catboost", "sphinx", "pytest-cov", "pyspark", "lime"]
docs = ["matplotlib", "ipython", "numpydoc", "sphinx-rtd-theme", "sphinx", "nbsphinx"]
others = ["lime"]
plots = ["matplotlib", "ipython"]
test = ["pytest", "pytest-mpl", "pytest-cov", "xgboost", "lightgbm", "catboost", "pyspark", "pyod", "transformers", "torch", "sentencepiece", "opencv-python"]
[[package]]
name = "six"
version = "1.16.0"
description = "Python 2 and 3 compatibility utilities"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "slicer"
version = "0.0.7"
description = "A small package for big slicing."
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "snowballstemmer"
version = "2.2.0"
description = "This package provides 29 stemmers for 28 languages generated from Snowball algorithms."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "soupsieve"
version = "2.3.2.post1"
description = "A modern CSS selector implementation for Beautiful Soup."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "sparse"
version = "0.13.0"
description = "Sparse n-dimensional arrays"
category = "main"
optional = false
python-versions = ">=3.6, <4"
[package.dependencies]
numba = ">=0.49"
numpy = ">=1.17"
scipy = ">=0.19"
[package.extras]
all = ["dask", "pytest (>=3.5)", "pytest-black", "pytest-cov", "tox", "sphinx", "sphinx-rtd-theme"]
docs = ["sphinx", "sphinx-rtd-theme"]
tests = ["dask", "pytest (>=3.5)", "pytest-black", "pytest-cov"]
tox = ["dask", "pytest (>=3.5)", "pytest-black", "pytest-cov", "tox"]
[[package]]
name = "sphinx"
version = "5.1.1"
description = "Python documentation generator"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
alabaster = ">=0.7,<0.8"
babel = ">=1.3"
colorama = {version = ">=0.3.5", markers = "sys_platform == \"win32\""}
docutils = ">=0.14,<0.20"
imagesize = "*"
importlib-metadata = {version = ">=4.4", markers = "python_version < \"3.10\""}
Jinja2 = ">=2.3"
packaging = "*"
Pygments = ">=2.0"
requests = ">=2.5.0"
snowballstemmer = ">=1.1"
sphinxcontrib-applehelp = "*"
sphinxcontrib-devhelp = "*"
sphinxcontrib-htmlhelp = ">=2.0.0"
sphinxcontrib-jsmath = "*"
sphinxcontrib-qthelp = "*"
sphinxcontrib-serializinghtml = ">=1.1.5"
[package.extras]
docs = ["sphinxcontrib-websupport"]
lint = ["flake8 (>=3.5.0)", "flake8-comprehensions", "flake8-bugbear", "isort", "mypy (>=0.971)", "sphinx-lint", "docutils-stubs", "types-typed-ast", "types-requests"]
test = ["pytest (>=4.6)", "html5lib", "cython", "typed-ast"]
[[package]]
name = "sphinx-multiversion"
version = "0.2.4"
description = ""
category = "dev"
optional = false
python-versions = "*"
develop = false
[package.dependencies]
sphinx = ">=2.1"
[package.source]
type = "git"
url = "https://github.com/petergtz/sphinx-multiversion.git"
reference = "override-version-with-refname"
resolved_reference = "cd3a67e7998c5205e2456e7cb1026bd3b9f36253"
[[package]]
name = "sphinx-rtd-theme"
version = "1.0.0"
description = "Read the Docs theme for Sphinx"
category = "dev"
optional = false
python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*"
[package.dependencies]
docutils = "<0.18"
sphinx = ">=1.6"
[package.extras]
dev = ["bump2version", "sphinxcontrib-httpdomain", "transifex-client"]
[[package]]
name = "sphinxcontrib-applehelp"
version = "1.0.2"
description = "sphinxcontrib-applehelp is a sphinx extension which outputs Apple help books"
category = "dev"
optional = false
python-versions = ">=3.5"
[package.extras]
test = ["pytest"]
lint = ["docutils-stubs", "mypy", "flake8"]
[[package]]
name = "sphinxcontrib-devhelp"
version = "1.0.2"
description = "sphinxcontrib-devhelp is a sphinx extension which outputs Devhelp document."
category = "dev"
optional = false
python-versions = ">=3.5"
[package.extras]
test = ["pytest"]
lint = ["docutils-stubs", "mypy", "flake8"]
[[package]]
name = "sphinxcontrib-googleanalytics"
version = "0.2"
description = ""
category = "dev"
optional = false
python-versions = "*"
develop = false
[package.dependencies]
Sphinx = ">=0.6"
[package.source]
type = "git"
url = "https://github.com/petergtz/googleanalytics.git"
reference = "master"
resolved_reference = "f32bcb8c0fb74bf284652b9a5c154337e20058b9"
[[package]]
name = "sphinxcontrib-htmlhelp"
version = "2.0.0"
description = "sphinxcontrib-htmlhelp is a sphinx extension which renders HTML help files"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
test = ["html5lib", "pytest"]
lint = ["docutils-stubs", "mypy", "flake8"]
[[package]]
name = "sphinxcontrib-jsmath"
version = "1.0.1"
description = "A sphinx extension which renders display math in HTML via JavaScript"
category = "dev"
optional = false
python-versions = ">=3.5"
[package.extras]
test = ["mypy", "flake8", "pytest"]
[[package]]
name = "sphinxcontrib-qthelp"
version = "1.0.3"
description = "sphinxcontrib-qthelp is a sphinx extension which outputs QtHelp document."
category = "dev"
optional = false
python-versions = ">=3.5"
[package.extras]
test = ["pytest"]
lint = ["docutils-stubs", "mypy", "flake8"]
[[package]]
name = "sphinxcontrib-serializinghtml"
version = "1.1.5"
description = "sphinxcontrib-serializinghtml is a sphinx extension which outputs \"serialized\" HTML files (json and pickle)."
category = "dev"
optional = false
python-versions = ">=3.5"
[package.extras]
test = ["pytest"]
lint = ["docutils-stubs", "mypy", "flake8"]
[[package]]
name = "stack-data"
version = "0.4.0"
description = "Extract data from python stack frames and tracebacks for informative displays"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
asttokens = "*"
executing = "*"
pure-eval = "*"
[package.extras]
tests = ["cython", "littleutils", "pygments", "typeguard", "pytest"]
[[package]]
name = "statsmodels"
version = "0.13.2"
description = "Statistical computations and models for Python"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
numpy = ">=1.17"
packaging = ">=21.3"
pandas = ">=0.25"
patsy = ">=0.5.2"
scipy = ">=1.3"
[package.extras]
build = ["cython (>=0.29.26)"]
develop = ["cython (>=0.29.26)"]
docs = ["sphinx", "nbconvert", "jupyter-client", "ipykernel", "matplotlib", "nbformat", "numpydoc", "pandas-datareader"]
[[package]]
name = "sympy"
version = "1.10.1"
description = "Computer algebra system (CAS) in Python"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
mpmath = ">=0.19"
[[package]]
name = "tensorboard"
version = "2.9.1"
description = "TensorBoard lets you watch Tensors Flow"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
absl-py = ">=0.4"
google-auth = ">=1.6.3,<3"
google-auth-oauthlib = ">=0.4.1,<0.5"
grpcio = ">=1.24.3"
markdown = ">=2.6.8"
numpy = ">=1.12.0"
protobuf = ">=3.9.2,<3.20"
requests = ">=2.21.0,<3"
tensorboard-data-server = ">=0.6.0,<0.7.0"
tensorboard-plugin-wit = ">=1.6.0"
werkzeug = ">=1.0.1"
[[package]]
name = "tensorboard-data-server"
version = "0.6.1"
description = "Fast data loading for TensorBoard"
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "tensorboard-plugin-wit"
version = "1.8.1"
description = "What-If Tool TensorBoard plugin."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "tensorflow"
version = "2.9.1"
description = "TensorFlow is an open source machine learning framework for everyone."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
absl-py = ">=1.0.0"
astunparse = ">=1.6.0"
flatbuffers = ">=1.12,<2"
gast = ">=0.2.1,<=0.4.0"
google-pasta = ">=0.1.1"
grpcio = ">=1.24.3,<2.0"
h5py = ">=2.9.0"
keras = ">=2.9.0rc0,<2.10.0"
keras-preprocessing = ">=1.1.1"
libclang = ">=13.0.0"
numpy = ">=1.20"
opt-einsum = ">=2.3.2"
packaging = "*"
protobuf = ">=3.9.2,<3.20"
six = ">=1.12.0"
tensorboard = ">=2.9,<2.10"
tensorflow-estimator = ">=2.9.0rc0,<2.10.0"
tensorflow-io-gcs-filesystem = ">=0.23.1"
termcolor = ">=1.1.0"
typing-extensions = ">=3.6.6"
wrapt = ">=1.11.0"
[[package]]
name = "tensorflow-estimator"
version = "2.9.0"
description = "TensorFlow Estimator."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "tensorflow-io-gcs-filesystem"
version = "0.26.0"
description = "TensorFlow IO"
category = "dev"
optional = false
python-versions = ">=3.7, <3.11"
[package.extras]
tensorflow = ["tensorflow (>=2.9.0,<2.10.0)"]
tensorflow-aarch64 = ["tensorflow-aarch64 (>=2.9.0,<2.10.0)"]
tensorflow-cpu = ["tensorflow-cpu (>=2.9.0,<2.10.0)"]
tensorflow-gpu = ["tensorflow-gpu (>=2.9.0,<2.10.0)"]
tensorflow-rocm = ["tensorflow-rocm (>=2.9.0,<2.10.0)"]
[[package]]
name = "termcolor"
version = "1.1.0"
description = "ANSII Color formatting for output in terminal."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "terminado"
version = "0.15.0"
description = "Tornado websocket backend for the Xterm.js Javascript terminal emulator library."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
ptyprocess = {version = "*", markers = "os_name != \"nt\""}
pywinpty = {version = ">=1.1.0", markers = "os_name == \"nt\""}
tornado = ">=6.1.0"
[package.extras]
test = ["pre-commit", "pytest-timeout", "pytest (>=6.0)"]
[[package]]
name = "threadpoolctl"
version = "3.1.0"
description = "threadpoolctl"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "tinycss2"
version = "1.1.1"
description = "A tiny CSS parser"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
webencodings = ">=0.4"
[package.extras]
test = ["coverage", "pytest-isort", "pytest-flake8", "pytest-cov", "pytest"]
doc = ["sphinx-rtd-theme", "sphinx"]
[[package]]
name = "tokenize-rt"
version = "4.2.1"
description = "A wrapper around the stdlib `tokenize` which roundtrips."
category = "dev"
optional = false
python-versions = ">=3.6.1"
[[package]]
name = "tomli"
version = "2.0.1"
description = "A lil' TOML parser"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "torch"
version = "1.12.1"
description = "Tensors and Dynamic neural networks in Python with strong GPU acceleration"
category = "main"
optional = true
python-versions = ">=3.7.0"
[package.dependencies]
typing-extensions = "*"
[[package]]
name = "tornado"
version = "6.2"
description = "Tornado is a Python web framework and asynchronous networking library, originally developed at FriendFeed."
category = "dev"
optional = false
python-versions = ">= 3.7"
[[package]]
name = "tqdm"
version = "4.64.0"
description = "Fast, Extensible Progress Meter"
category = "main"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,>=2.7"
[package.dependencies]
colorama = {version = "*", markers = "platform_system == \"Windows\""}
[package.extras]
dev = ["py-make (>=0.1.0)", "twine", "wheel"]
notebook = ["ipywidgets (>=6)"]
slack = ["slack-sdk"]
telegram = ["requests"]
[[package]]
name = "traitlets"
version = "5.3.0"
description = ""
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
test = ["pre-commit", "pytest"]
[[package]]
name = "twine"
version = "4.0.1"
description = "Collection of utilities for publishing packages on PyPI"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
importlib-metadata = ">=3.6"
keyring = ">=15.1"
pkginfo = ">=1.8.1"
readme-renderer = ">=35.0"
requests = ">=2.20"
requests-toolbelt = ">=0.8.0,<0.9.0 || >0.9.0"
rfc3986 = ">=1.4.0"
rich = ">=12.0.0"
urllib3 = ">=1.26.0"
[[package]]
name = "typing-extensions"
version = "4.3.0"
description = "Backported and Experimental Type Hints for Python 3.7+"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "tzdata"
version = "2022.2"
description = "Provider of IANA time zone data"
category = "dev"
optional = false
python-versions = ">=2"
[[package]]
name = "tzlocal"
version = "4.2"
description = "tzinfo object for the local timezone"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
"backports.zoneinfo" = {version = "*", markers = "python_version < \"3.9\""}
pytz-deprecation-shim = "*"
tzdata = {version = "*", markers = "platform_system == \"Windows\""}
[package.extras]
devenv = ["black", "pyroma", "pytest-cov", "zest.releaser"]
test = ["pytest-mock (>=3.3)", "pytest (>=4.3)"]
[[package]]
name = "urllib3"
version = "1.26.11"
description = "HTTP library with thread-safe connection pooling, file post, and more."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*, <4"
[package.extras]
brotli = ["brotlicffi (>=0.8.0)", "brotli (>=1.0.9)", "brotlipy (>=0.6.0)"]
secure = ["pyOpenSSL (>=0.14)", "cryptography (>=1.3.4)", "idna (>=2.0.0)", "certifi", "ipaddress"]
socks = ["PySocks (>=1.5.6,!=1.5.7,<2.0)"]
[[package]]
name = "wcwidth"
version = "0.2.5"
description = "Measures the displayed width of unicode strings in a terminal"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "webencodings"
version = "0.5.1"
description = "Character encoding aliases for legacy web content"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "werkzeug"
version = "2.2.2"
description = "The comprehensive WSGI web application library."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
MarkupSafe = ">=2.1.1"
[package.extras]
watchdog = ["watchdog"]
[[package]]
name = "widgetsnbextension"
version = "4.0.2"
description = "Jupyter interactive widgets for Jupyter Notebook"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "wrapt"
version = "1.14.1"
description = "Module for decorators, wrappers and monkey patching."
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7"
[[package]]
name = "xgboost"
version = "1.6.1"
description = "XGBoost Python Package"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
numpy = "*"
scipy = "*"
[package.extras]
dask = ["dask", "pandas", "distributed"]
datatable = ["datatable"]
pandas = ["pandas"]
plotting = ["graphviz", "matplotlib"]
scikit-learn = ["scikit-learn"]
[[package]]
name = "zipp"
version = "3.8.1"
description = "Backport of pathlib-compatible object wrapper for zip files"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
docs = ["sphinx", "jaraco.packaging (>=9)", "rst.linker (>=1.9)", "jaraco.tidelift (>=1.4)"]
testing = ["pytest (>=6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest-cov", "pytest-enabler (>=1.3)", "jaraco.itertools", "func-timeout", "pytest-black (>=0.3.7)", "pytest-mypy (>=0.9.1)"]
[extras]
causalml = ["causalml", "llvmlite"]
plotting = ["matplotlib"]
[metadata]
lock-version = "1.1"
python-versions = ">=3.8,<3.10"
content-hash = "9cc06ba8e5b31119a86589b42975b066d7131842bebe43be8057fbfd0f1bd6da"
[metadata.files]
absl-py = [
{file = "absl-py-1.2.0.tar.gz", hash = "sha256:f568809938c49abbda89826223c992b630afd23c638160ad7840cfe347710d97"},
{file = "absl_py-1.2.0-py3-none-any.whl", hash = "sha256:5d15f85b8cc859c6245bc9886ba664460ed96a6fee895416caa37d669ee74a9a"},
]
alabaster = [
{file = "alabaster-0.7.12-py2.py3-none-any.whl", hash = "sha256:446438bdcca0e05bd45ea2de1668c1d9b032e1a9154c2c259092d77031ddd359"},
{file = "alabaster-0.7.12.tar.gz", hash = "sha256:a661d72d58e6ea8a57f7a86e37d86716863ee5e92788398526d58b26a4e4dc02"},
]
appnope = [
{file = "appnope-0.1.3-py2.py3-none-any.whl", hash = "sha256:265a455292d0bd8a72453494fa24df5a11eb18373a60c7c0430889f22548605e"},
{file = "appnope-0.1.3.tar.gz", hash = "sha256:02bd91c4de869fbb1e1c50aafc4098827a7a54ab2f39d9dcba6c9547ed920e24"},
]
argon2-cffi = [
{file = "argon2-cffi-21.3.0.tar.gz", hash = "sha256:d384164d944190a7dd7ef22c6aa3ff197da12962bd04b17f64d4e93d934dba5b"},
{file = "argon2_cffi-21.3.0-py3-none-any.whl", hash = "sha256:8c976986f2c5c0e5000919e6de187906cfd81fb1c72bf9d88c01177e77da7f80"},
]
argon2-cffi-bindings = [
{file = "argon2-cffi-bindings-21.2.0.tar.gz", hash = "sha256:bb89ceffa6c791807d1305ceb77dbfacc5aa499891d2c55661c6459651fc39e3"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-macosx_10_9_x86_64.whl", hash = "sha256:ccb949252cb2ab3a08c02024acb77cfb179492d5701c7cbdbfd776124d4d2367"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9524464572e12979364b7d600abf96181d3541da11e23ddf565a32e70bd4dc0d"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b746dba803a79238e925d9046a63aa26bf86ab2a2fe74ce6b009a1c3f5c8f2ae"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:58ed19212051f49a523abb1dbe954337dc82d947fb6e5a0da60f7c8471a8476c"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:bd46088725ef7f58b5a1ef7ca06647ebaf0eb4baff7d1d0d177c6cc8744abd86"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_i686.whl", hash = "sha256:8cd69c07dd875537a824deec19f978e0f2078fdda07fd5c42ac29668dda5f40f"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:f1152ac548bd5b8bcecfb0b0371f082037e47128653df2e8ba6e914d384f3c3e"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-win32.whl", hash = "sha256:603ca0aba86b1349b147cab91ae970c63118a0f30444d4bc80355937c950c082"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-win_amd64.whl", hash = "sha256:b2ef1c30440dbbcba7a5dc3e319408b59676e2e039e2ae11a8775ecf482b192f"},
{file = "argon2_cffi_bindings-21.2.0-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:e415e3f62c8d124ee16018e491a009937f8cf7ebf5eb430ffc5de21b900dad93"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:3e385d1c39c520c08b53d63300c3ecc28622f076f4c2b0e6d7e796e9f6502194"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2c3e3cc67fdb7d82c4718f19b4e7a87123caf8a93fde7e23cf66ac0337d3cb3f"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6a22ad9800121b71099d0fb0a65323810a15f2e292f2ba450810a7316e128ee5"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f9f8b450ed0547e3d473fdc8612083fd08dd2120d6ac8f73828df9b7d45bb351"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:93f9bf70084f97245ba10ee36575f0c3f1e7d7724d67d8e5b08e61787c320ed7"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:3b9ef65804859d335dc6b31582cad2c5166f0c3e7975f324d9ffaa34ee7e6583"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d4966ef5848d820776f5f562a7d45fdd70c2f330c961d0d745b784034bd9f48d"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:20ef543a89dee4db46a1a6e206cd015360e5a75822f76df533845c3cbaf72670"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ed2937d286e2ad0cc79a7087d3c272832865f779430e0cc2b4f3718d3159b0cb"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:5e00316dabdaea0b2dd82d141cc66889ced0cdcbfa599e8b471cf22c620c329a"},
]
asttokens = [
{file = "asttokens-2.0.8-py2.py3-none-any.whl", hash = "sha256:e3305297c744ae53ffa032c45dc347286165e4ffce6875dc662b205db0623d86"},
{file = "asttokens-2.0.8.tar.gz", hash = "sha256:c61e16246ecfb2cde2958406b4c8ebc043c9e6d73aaa83c941673b35e5d3a76b"},
]
astunparse = [
{file = "astunparse-1.6.3-py2.py3-none-any.whl", hash = "sha256:c2652417f2c8b5bb325c885ae329bdf3f86424075c4fd1a128674bc6fba4b8e8"},
{file = "astunparse-1.6.3.tar.gz", hash = "sha256:5ad93a8456f0d084c3456d059fd9a92cce667963232cbf763eac3bc5b7940872"},
]
atomicwrites = [
{file = "atomicwrites-1.4.1.tar.gz", hash = "sha256:81b2c9071a49367a7f770170e5eec8cb66567cfbbc8c73d20ce5ca4a8d71cf11"},
]
attrs = [
{file = "attrs-22.1.0-py2.py3-none-any.whl", hash = "sha256:86efa402f67bf2df34f51a335487cf46b1ec130d02b8d39fd248abfd30da551c"},
{file = "attrs-22.1.0.tar.gz", hash = "sha256:29adc2665447e5191d0e7c568fde78b21f9672d344281d0c6e1ab085429b22b6"},
]
babel = [
{file = "Babel-2.10.3-py3-none-any.whl", hash = "sha256:ff56f4892c1c4bf0d814575ea23471c230d544203c7748e8c68f0089478d48eb"},
{file = "Babel-2.10.3.tar.gz", hash = "sha256:7614553711ee97490f732126dc077f8d0ae084ebc6a96e23db1482afabdb2c51"},
]
backcall = [
{file = "backcall-0.2.0-py2.py3-none-any.whl", hash = "sha256:fbbce6a29f263178a1f7915c1940bde0ec2b2a967566fe1c65c1dfb7422bd255"},
{file = "backcall-0.2.0.tar.gz", hash = "sha256:5cbdbf27be5e7cfadb448baf0aa95508f91f2bbc6c6437cd9cd06e2a4c215e1e"},
]
"backports.zoneinfo" = [
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-macosx_10_14_x86_64.whl", hash = "sha256:da6013fd84a690242c310d77ddb8441a559e9cb3d3d59ebac9aca1a57b2e18bc"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:89a48c0d158a3cc3f654da4c2de1ceba85263fafb861b98b59040a5086259722"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:1c5742112073a563c81f786e77514969acb58649bcdf6cdf0b4ed31a348d4546"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-win32.whl", hash = "sha256:e8236383a20872c0cdf5a62b554b27538db7fa1bbec52429d8d106effbaeca08"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-win_amd64.whl", hash = "sha256:8439c030a11780786a2002261569bdf362264f605dfa4d65090b64b05c9f79a7"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:f04e857b59d9d1ccc39ce2da1021d196e47234873820cbeaad210724b1ee28ac"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:17746bd546106fa389c51dbea67c8b7c8f0d14b5526a579ca6ccf5ed72c526cf"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:5c144945a7752ca544b4b78c8c41544cdfaf9786f25fe5ffb10e838e19a27570"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-win32.whl", hash = "sha256:e55b384612d93be96506932a786bbcde5a2db7a9e6a4bb4bffe8b733f5b9036b"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-win_amd64.whl", hash = "sha256:a76b38c52400b762e48131494ba26be363491ac4f9a04c1b7e92483d169f6582"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:8961c0f32cd0336fb8e8ead11a1f8cd99ec07145ec2931122faaac1c8f7fd987"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-manylinux1_i686.whl", hash = "sha256:e81b76cace8eda1fca50e345242ba977f9be6ae3945af8d46326d776b4cf78d1"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:7b0a64cda4145548fed9efc10322770f929b944ce5cee6c0dfe0c87bf4c0c8c9"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-win32.whl", hash = "sha256:1b13e654a55cd45672cb54ed12148cd33628f672548f373963b0bff67b217328"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-win_amd64.whl", hash = "sha256:4a0f800587060bf8880f954dbef70de6c11bbe59c673c3d818921f042f9954a6"},
{file = "backports.zoneinfo-0.2.1.tar.gz", hash = "sha256:fadbfe37f74051d024037f223b8e001611eac868b5c5b06144ef4d8b799862f2"},
]
beautifulsoup4 = [
{file = "beautifulsoup4-4.11.1-py3-none-any.whl", hash = "sha256:58d5c3d29f5a36ffeb94f02f0d786cd53014cf9b3b3951d42e0080d8a9498d30"},
{file = "beautifulsoup4-4.11.1.tar.gz", hash = "sha256:ad9aa55b65ef2808eb405f46cf74df7fcb7044d5cbc26487f96eb2ef2e436693"},
]
black = [
{file = "black-22.6.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:f586c26118bc6e714ec58c09df0157fe2d9ee195c764f630eb0d8e7ccce72e69"},
{file = "black-22.6.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:b270a168d69edb8b7ed32c193ef10fd27844e5c60852039599f9184460ce0807"},
{file = "black-22.6.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:6797f58943fceb1c461fb572edbe828d811e719c24e03375fd25170ada53825e"},
{file = "black-22.6.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c85928b9d5f83b23cee7d0efcb310172412fbf7cb9d9ce963bd67fd141781def"},
{file = "black-22.6.0-cp310-cp310-win_amd64.whl", hash = "sha256:f6fe02afde060bbeef044af7996f335fbe90b039ccf3f5eb8f16df8b20f77666"},
{file = "black-22.6.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:cfaf3895a9634e882bf9d2363fed5af8888802d670f58b279b0bece00e9a872d"},
{file = "black-22.6.0-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:94783f636bca89f11eb5d50437e8e17fbc6a929a628d82304c80fa9cd945f256"},
{file = "black-22.6.0-cp36-cp36m-win_amd64.whl", hash = "sha256:2ea29072e954a4d55a2ff58971b83365eba5d3d357352a07a7a4df0d95f51c78"},
{file = "black-22.6.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:e439798f819d49ba1c0bd9664427a05aab79bfba777a6db94fd4e56fae0cb849"},
{file = "black-22.6.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:187d96c5e713f441a5829e77120c269b6514418f4513a390b0499b0987f2ff1c"},
{file = "black-22.6.0-cp37-cp37m-win_amd64.whl", hash = "sha256:074458dc2f6e0d3dab7928d4417bb6957bb834434516f21514138437accdbe90"},
{file = "black-22.6.0-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:a218d7e5856f91d20f04e931b6f16d15356db1c846ee55f01bac297a705ca24f"},
{file = "black-22.6.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:568ac3c465b1c8b34b61cd7a4e349e93f91abf0f9371eda1cf87194663ab684e"},
{file = "black-22.6.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:6c1734ab264b8f7929cef8ae5f900b85d579e6cbfde09d7387da8f04771b51c6"},
{file = "black-22.6.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c9a3ac16efe9ec7d7381ddebcc022119794872abce99475345c5a61aa18c45ad"},
{file = "black-22.6.0-cp38-cp38-win_amd64.whl", hash = "sha256:b9fd45787ba8aa3f5e0a0a98920c1012c884622c6c920dbe98dbd05bc7c70fbf"},
{file = "black-22.6.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:7ba9be198ecca5031cd78745780d65a3f75a34b2ff9be5837045dce55db83d1c"},
{file = "black-22.6.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:a3db5b6409b96d9bd543323b23ef32a1a2b06416d525d27e0f67e74f1446c8f2"},
{file = "black-22.6.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:560558527e52ce8afba936fcce93a7411ab40c7d5fe8c2463e279e843c0328ee"},
{file = "black-22.6.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b154e6bbde1e79ea3260c4b40c0b7b3109ffcdf7bc4ebf8859169a6af72cd70b"},
{file = "black-22.6.0-cp39-cp39-win_amd64.whl", hash = "sha256:4af5bc0e1f96be5ae9bd7aaec219c901a94d6caa2484c21983d043371c733fc4"},
{file = "black-22.6.0-py3-none-any.whl", hash = "sha256:ac609cf8ef5e7115ddd07d85d988d074ed00e10fbc3445aee393e70164a2219c"},
{file = "black-22.6.0.tar.gz", hash = "sha256:6c6d39e28aed379aec40da1c65434c77d75e65bb59a1e1c283de545fb4e7c6c9"},
]
bleach = [
{file = "bleach-5.0.1-py3-none-any.whl", hash = "sha256:085f7f33c15bd408dd9b17a4ad77c577db66d76203e5984b1bd59baeee948b2a"},
{file = "bleach-5.0.1.tar.gz", hash = "sha256:0d03255c47eb9bd2f26aa9bb7f2107732e7e8fe195ca2f64709fcf3b0a4a085c"},
]
cachetools = [
{file = "cachetools-5.2.0-py3-none-any.whl", hash = "sha256:f9f17d2aec496a9aa6b76f53e3b614c965223c061982d434d160f930c698a9db"},
{file = "cachetools-5.2.0.tar.gz", hash = "sha256:6a94c6402995a99c3970cc7e4884bb60b4a8639938157eeed436098bf9831757"},
]
causalml = [
{file = "causalml-0.12.3.tar.gz", hash = "sha256:4d85afa0590e4c60818311c2be9a5be64b8c6964b134c77654b738df551454fa"},
]
certifi = [
{file = "certifi-2022.6.15-py3-none-any.whl", hash = "sha256:fe86415d55e84719d75f8b69414f6438ac3547d2078ab91b67e779ef69378412"},
{file = "certifi-2022.6.15.tar.gz", hash = "sha256:84c85a9078b11105f04f3036a9482ae10e4621616db313fe045dd24743a0820d"},
]
cffi = [
{file = "cffi-1.15.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:a66d3508133af6e8548451b25058d5812812ec3798c886bf38ed24a98216fab2"},
{file = "cffi-1.15.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:470c103ae716238bbe698d67ad020e1db9d9dba34fa5a899b5e21577e6d52ed2"},
{file = "cffi-1.15.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:9ad5db27f9cabae298d151c85cf2bad1d359a1b9c686a275df03385758e2f914"},
{file = "cffi-1.15.1-cp27-cp27m-win32.whl", hash = "sha256:b3bbeb01c2b273cca1e1e0c5df57f12dce9a4dd331b4fa1635b8bec26350bde3"},
{file = "cffi-1.15.1-cp27-cp27m-win_amd64.whl", hash = "sha256:e00b098126fd45523dd056d2efba6c5a63b71ffe9f2bbe1a4fe1716e1d0c331e"},
{file = "cffi-1.15.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:d61f4695e6c866a23a21acab0509af1cdfd2c013cf256bbf5b6b5e2695827162"},
{file = "cffi-1.15.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:ed9cb427ba5504c1dc15ede7d516b84757c3e3d7868ccc85121d9310d27eed0b"},
{file = "cffi-1.15.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:39d39875251ca8f612b6f33e6b1195af86d1b3e60086068be9cc053aa4376e21"},
{file = "cffi-1.15.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:285d29981935eb726a4399badae8f0ffdff4f5050eaa6d0cfc3f64b857b77185"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3eb6971dcff08619f8d91607cfc726518b6fa2a9eba42856be181c6d0d9515fd"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:21157295583fe8943475029ed5abdcf71eb3911894724e360acff1d61c1d54bc"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5635bd9cb9731e6d4a1132a498dd34f764034a8ce60cef4f5319c0541159392f"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2012c72d854c2d03e45d06ae57f40d78e5770d252f195b93f581acf3ba44496e"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dd86c085fae2efd48ac91dd7ccffcfc0571387fe1193d33b6394db7ef31fe2a4"},
{file = "cffi-1.15.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:fa6693661a4c91757f4412306191b6dc88c1703f780c8234035eac011922bc01"},
{file = "cffi-1.15.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:59c0b02d0a6c384d453fece7566d1c7e6b7bae4fc5874ef2ef46d56776d61c9e"},
{file = "cffi-1.15.1-cp310-cp310-win32.whl", hash = "sha256:cba9d6b9a7d64d4bd46167096fc9d2f835e25d7e4c121fb2ddfc6528fb0413b2"},
{file = "cffi-1.15.1-cp310-cp310-win_amd64.whl", hash = "sha256:ce4bcc037df4fc5e3d184794f27bdaab018943698f4ca31630bc7f84a7b69c6d"},
{file = "cffi-1.15.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3d08afd128ddaa624a48cf2b859afef385b720bb4b43df214f85616922e6a5ac"},
{file = "cffi-1.15.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:3799aecf2e17cf585d977b780ce79ff0dc9b78d799fc694221ce814c2c19db83"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a591fe9e525846e4d154205572a029f653ada1a78b93697f3b5a8f1f2bc055b9"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3548db281cd7d2561c9ad9984681c95f7b0e38881201e157833a2342c30d5e8c"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:91fc98adde3d7881af9b59ed0294046f3806221863722ba7d8d120c575314325"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:94411f22c3985acaec6f83c6df553f2dbe17b698cc7f8ae751ff2237d96b9e3c"},
{file = "cffi-1.15.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:03425bdae262c76aad70202debd780501fabeaca237cdfddc008987c0e0f59ef"},
{file = "cffi-1.15.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:cc4d65aeeaa04136a12677d3dd0b1c0c94dc43abac5860ab33cceb42b801c1e8"},
{file = "cffi-1.15.1-cp311-cp311-win32.whl", hash = "sha256:a0f100c8912c114ff53e1202d0078b425bee3649ae34d7b070e9697f93c5d52d"},
{file = "cffi-1.15.1-cp311-cp311-win_amd64.whl", hash = "sha256:04ed324bda3cda42b9b695d51bb7d54b680b9719cfab04227cdd1e04e5de3104"},
{file = "cffi-1.15.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:50a74364d85fd319352182ef59c5c790484a336f6db772c1a9231f1c3ed0cbd7"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e263d77ee3dd201c3a142934a086a4450861778baaeeb45db4591ef65550b0a6"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:cec7d9412a9102bdc577382c3929b337320c4c4c4849f2c5cdd14d7368c5562d"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4289fc34b2f5316fbb762d75362931e351941fa95fa18789191b33fc4cf9504a"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:173379135477dc8cac4bc58f45db08ab45d228b3363adb7af79436135d028405"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:6975a3fac6bc83c4a65c9f9fcab9e47019a11d3d2cf7f3c0d03431bf145a941e"},
{file = "cffi-1.15.1-cp36-cp36m-win32.whl", hash = "sha256:2470043b93ff09bf8fb1d46d1cb756ce6132c54826661a32d4e4d132e1977adf"},
{file = "cffi-1.15.1-cp36-cp36m-win_amd64.whl", hash = "sha256:30d78fbc8ebf9c92c9b7823ee18eb92f2e6ef79b45ac84db507f52fbe3ec4497"},
{file = "cffi-1.15.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:198caafb44239b60e252492445da556afafc7d1e3ab7a1fb3f0584ef6d742375"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5ef34d190326c3b1f822a5b7a45f6c4535e2f47ed06fec77d3d799c450b2651e"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8102eaf27e1e448db915d08afa8b41d6c7ca7a04b7d73af6514df10a3e74bd82"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5df2768244d19ab7f60546d0c7c63ce1581f7af8b5de3eb3004b9b6fc8a9f84b"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a8c4917bd7ad33e8eb21e9a5bbba979b49d9a97acb3a803092cbc1133e20343c"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0e2642fe3142e4cc4af0799748233ad6da94c62a8bec3a6648bf8ee68b1c7426"},
{file = "cffi-1.15.1-cp37-cp37m-win32.whl", hash = "sha256:e229a521186c75c8ad9490854fd8bbdd9a0c9aa3a524326b55be83b54d4e0ad9"},
{file = "cffi-1.15.1-cp37-cp37m-win_amd64.whl", hash = "sha256:a0b71b1b8fbf2b96e41c4d990244165e2c9be83d54962a9a1d118fd8657d2045"},
{file = "cffi-1.15.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:320dab6e7cb2eacdf0e658569d2575c4dad258c0fcc794f46215e1e39f90f2c3"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1e74c6b51a9ed6589199c787bf5f9875612ca4a8a0785fb2d4a84429badaf22a"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a5c84c68147988265e60416b57fc83425a78058853509c1b0629c180094904a5"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3b926aa83d1edb5aa5b427b4053dc420ec295a08e40911296b9eb1b6170f6cca"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:87c450779d0914f2861b8526e035c5e6da0a3199d8f1add1a665e1cbc6fc6d02"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4f2c9f67e9821cad2e5f480bc8d83b8742896f1242dba247911072d4fa94c192"},
{file = "cffi-1.15.1-cp38-cp38-win32.whl", hash = "sha256:8b7ee99e510d7b66cdb6c593f21c043c248537a32e0bedf02e01e9553a172314"},
{file = "cffi-1.15.1-cp38-cp38-win_amd64.whl", hash = "sha256:00a9ed42e88df81ffae7a8ab6d9356b371399b91dbdf0c3cb1e84c03a13aceb5"},
{file = "cffi-1.15.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:54a2db7b78338edd780e7ef7f9f6c442500fb0d41a5a4ea24fff1c929d5af585"},
{file = "cffi-1.15.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:fcd131dd944808b5bdb38e6f5b53013c5aa4f334c5cad0c72742f6eba4b73db0"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7473e861101c9e72452f9bf8acb984947aa1661a7704553a9f6e4baa5ba64415"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6c9a799e985904922a4d207a94eae35c78ebae90e128f0c4e521ce339396be9d"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3bcde07039e586f91b45c88f8583ea7cf7a0770df3a1649627bf598332cb6984"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:33ab79603146aace82c2427da5ca6e58f2b3f2fb5da893ceac0c42218a40be35"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d598b938678ebf3c67377cdd45e09d431369c3b1a5b331058c338e201f12b27"},
{file = "cffi-1.15.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:db0fbb9c62743ce59a9ff687eb5f4afbe77e5e8403d6697f7446e5f609976f76"},
{file = "cffi-1.15.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:98d85c6a2bef81588d9227dde12db8a7f47f639f4a17c9ae08e773aa9c697bf3"},
{file = "cffi-1.15.1-cp39-cp39-win32.whl", hash = "sha256:40f4774f5a9d4f5e344f31a32b5096977b5d48560c5592e2f3d2c4374bd543ee"},
{file = "cffi-1.15.1-cp39-cp39-win_amd64.whl", hash = "sha256:70df4e3b545a17496c9b3f41f5115e69a4f2e77e94e1d2a8e1070bc0c38c8a3c"},
{file = "cffi-1.15.1.tar.gz", hash = "sha256:d400bfb9a37b1351253cb402671cea7e89bdecc294e8016a707f6d1d8ac934f9"},
]
charset-normalizer = [
{file = "charset-normalizer-2.1.0.tar.gz", hash = "sha256:575e708016ff3a5e3681541cb9d79312c416835686d054a23accb873b254f413"},
{file = "charset_normalizer-2.1.0-py3-none-any.whl", hash = "sha256:5189b6f22b01957427f35b6a08d9a0bc45b46d3788ef5a92e978433c7a35f8a5"},
]
click = [
{file = "click-8.1.3-py3-none-any.whl", hash = "sha256:bb4d8133cb15a609f44e8213d9b391b0809795062913b383c62be0ee95b1db48"},
{file = "click-8.1.3.tar.gz", hash = "sha256:7682dc8afb30297001674575ea00d1814d808d6a36af415a82bd481d37ba7b8e"},
]
cloudpickle = [
{file = "cloudpickle-2.1.0-py3-none-any.whl", hash = "sha256:b5c434f75c34624eedad3a14f2be5ac3b5384774d5b0e3caf905c21479e6c4b1"},
{file = "cloudpickle-2.1.0.tar.gz", hash = "sha256:bb233e876a58491d9590a676f93c7a5473a08f747d5ab9df7f9ce564b3e7938e"},
]
colorama = [
{file = "colorama-0.4.5-py2.py3-none-any.whl", hash = "sha256:854bf444933e37f5824ae7bfc1e98d5bce2ebe4160d46b5edf346a89358e99da"},
{file = "colorama-0.4.5.tar.gz", hash = "sha256:e6c6b4334fc50988a639d9b98aa429a0b57da6e17b9a44f0451f930b6967b7a4"},
]
commonmark = [
{file = "commonmark-0.9.1-py2.py3-none-any.whl", hash = "sha256:da2f38c92590f83de410ba1a3cbceafbc74fee9def35f9251ba9a971d6d66fd9"},
{file = "commonmark-0.9.1.tar.gz", hash = "sha256:452f9dc859be7f06631ddcb328b6919c67984aca654e5fefb3914d54691aed60"},
]
coverage = [
{file = "coverage-6.4.4-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e7b4da9bafad21ea45a714d3ea6f3e1679099e420c8741c74905b92ee9bfa7cc"},
{file = "coverage-6.4.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:fde17bc42e0716c94bf19d92e4c9f5a00c5feb401f5bc01101fdf2a8b7cacf60"},
{file = "coverage-6.4.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cdbb0d89923c80dbd435b9cf8bba0ff55585a3cdb28cbec65f376c041472c60d"},
{file = "coverage-6.4.4-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:67f9346aeebea54e845d29b487eb38ec95f2ecf3558a3cffb26ee3f0dcc3e760"},
{file = "coverage-6.4.4-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:42c499c14efd858b98c4e03595bf914089b98400d30789511577aa44607a1b74"},
{file = "coverage-6.4.4-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:c35cca192ba700979d20ac43024a82b9b32a60da2f983bec6c0f5b84aead635c"},
{file = "coverage-6.4.4-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:9cc4f107009bca5a81caef2fca843dbec4215c05e917a59dec0c8db5cff1d2aa"},
{file = "coverage-6.4.4-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:5f444627b3664b80d078c05fe6a850dd711beeb90d26731f11d492dcbadb6973"},
{file = "coverage-6.4.4-cp310-cp310-win32.whl", hash = "sha256:66e6df3ac4659a435677d8cd40e8eb1ac7219345d27c41145991ee9bf4b806a0"},
{file = "coverage-6.4.4-cp310-cp310-win_amd64.whl", hash = "sha256:35ef1f8d8a7a275aa7410d2f2c60fa6443f4a64fae9be671ec0696a68525b875"},
{file = "coverage-6.4.4-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:c1328d0c2f194ffda30a45f11058c02410e679456276bfa0bbe0b0ee87225fac"},
{file = "coverage-6.4.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:61b993f3998ee384935ee423c3d40894e93277f12482f6e777642a0141f55782"},
{file = "coverage-6.4.4-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d5dd4b8e9cd0deb60e6fcc7b0647cbc1da6c33b9e786f9c79721fd303994832f"},
{file = "coverage-6.4.4-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7026f5afe0d1a933685d8f2169d7c2d2e624f6255fb584ca99ccca8c0e966fd7"},
{file = "coverage-6.4.4-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:9c7b9b498eb0c0d48b4c2abc0e10c2d78912203f972e0e63e3c9dc21f15abdaa"},
{file = "coverage-6.4.4-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:ee2b2fb6eb4ace35805f434e0f6409444e1466a47f620d1d5763a22600f0f892"},
{file = "coverage-6.4.4-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:ab066f5ab67059d1f1000b5e1aa8bbd75b6ed1fc0014559aea41a9eb66fc2ce0"},
{file = "coverage-6.4.4-cp311-cp311-win32.whl", hash = "sha256:9d6e1f3185cbfd3d91ac77ea065d85d5215d3dfa45b191d14ddfcd952fa53796"},
{file = "coverage-6.4.4-cp311-cp311-win_amd64.whl", hash = "sha256:e3d3c4cc38b2882f9a15bafd30aec079582b819bec1b8afdbde8f7797008108a"},
{file = "coverage-6.4.4-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:a095aa0a996ea08b10580908e88fbaf81ecf798e923bbe64fb98d1807db3d68a"},
{file = "coverage-6.4.4-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ef6f44409ab02e202b31a05dd6666797f9de2aa2b4b3534e9d450e42dea5e817"},
{file = "coverage-6.4.4-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4b7101938584d67e6f45f0015b60e24a95bf8dea19836b1709a80342e01b472f"},
{file = "coverage-6.4.4-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:14a32ec68d721c3d714d9b105c7acf8e0f8a4f4734c811eda75ff3718570b5e3"},
{file = "coverage-6.4.4-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:6a864733b22d3081749450466ac80698fe39c91cb6849b2ef8752fd7482011f3"},
{file = "coverage-6.4.4-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:08002f9251f51afdcc5e3adf5d5d66bb490ae893d9e21359b085f0e03390a820"},
{file = "coverage-6.4.4-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:a3b2752de32c455f2521a51bd3ffb53c5b3ae92736afde67ce83477f5c1dd928"},
{file = "coverage-6.4.4-cp37-cp37m-win32.whl", hash = "sha256:f855b39e4f75abd0dfbcf74a82e84ae3fc260d523fcb3532786bcbbcb158322c"},
{file = "coverage-6.4.4-cp37-cp37m-win_amd64.whl", hash = "sha256:ee6ae6bbcac0786807295e9687169fba80cb0617852b2fa118a99667e8e6815d"},
{file = "coverage-6.4.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:564cd0f5b5470094df06fab676c6d77547abfdcb09b6c29c8a97c41ad03b103c"},
{file = "coverage-6.4.4-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:cbbb0e4cd8ddcd5ef47641cfac97d8473ab6b132dd9a46bacb18872828031685"},
{file = "coverage-6.4.4-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6113e4df2fa73b80f77663445be6d567913fb3b82a86ceb64e44ae0e4b695de1"},
{file = "coverage-6.4.4-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8d032bfc562a52318ae05047a6eb801ff31ccee172dc0d2504614e911d8fa83e"},
{file = "coverage-6.4.4-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e431e305a1f3126477abe9a184624a85308da8edf8486a863601d58419d26ffa"},
{file = "coverage-6.4.4-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:cf2afe83a53f77aec067033199797832617890e15bed42f4a1a93ea24794ae3e"},
{file = "coverage-6.4.4-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:783bc7c4ee524039ca13b6d9b4186a67f8e63d91342c713e88c1865a38d0892a"},
{file = "coverage-6.4.4-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:ff934ced84054b9018665ca3967fc48e1ac99e811f6cc99ea65978e1d384454b"},
{file = "coverage-6.4.4-cp38-cp38-win32.whl", hash = "sha256:e1fabd473566fce2cf18ea41171d92814e4ef1495e04471786cbc943b89a3781"},
{file = "coverage-6.4.4-cp38-cp38-win_amd64.whl", hash = "sha256:4179502f210ebed3ccfe2f78bf8e2d59e50b297b598b100d6c6e3341053066a2"},
{file = "coverage-6.4.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:98c0b9e9b572893cdb0a00e66cf961a238f8d870d4e1dc8e679eb8bdc2eb1b86"},
{file = "coverage-6.4.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:fc600f6ec19b273da1d85817eda339fb46ce9eef3e89f220055d8696e0a06908"},
{file = "coverage-6.4.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7a98d6bf6d4ca5c07a600c7b4e0c5350cd483c85c736c522b786be90ea5bac4f"},
{file = "coverage-6.4.4-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:01778769097dbd705a24e221f42be885c544bb91251747a8a3efdec6eb4788f2"},
{file = "coverage-6.4.4-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dfa0b97eb904255e2ab24166071b27408f1f69c8fbda58e9c0972804851e0558"},
{file = "coverage-6.4.4-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:fcbe3d9a53e013f8ab88734d7e517eb2cd06b7e689bedf22c0eb68db5e4a0a19"},
{file = "coverage-6.4.4-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:15e38d853ee224e92ccc9a851457fb1e1f12d7a5df5ae44544ce7863691c7a0d"},
{file = "coverage-6.4.4-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:6913dddee2deff8ab2512639c5168c3e80b3ebb0f818fed22048ee46f735351a"},
{file = "coverage-6.4.4-cp39-cp39-win32.whl", hash = "sha256:354df19fefd03b9a13132fa6643527ef7905712109d9c1c1903f2133d3a4e145"},
{file = "coverage-6.4.4-cp39-cp39-win_amd64.whl", hash = "sha256:1238b08f3576201ebf41f7c20bf59baa0d05da941b123c6656e42cdb668e9827"},
{file = "coverage-6.4.4-pp36.pp37.pp38-none-any.whl", hash = "sha256:f67cf9f406cf0d2f08a3515ce2db5b82625a7257f88aad87904674def6ddaec1"},
{file = "coverage-6.4.4.tar.gz", hash = "sha256:e16c45b726acb780e1e6f88b286d3c10b3914ab03438f32117c4aa52d7f30d58"},
]
cryptography = [
{file = "cryptography-37.0.4-cp36-abi3-macosx_10_10_universal2.whl", hash = "sha256:549153378611c0cca1042f20fd9c5030d37a72f634c9326e225c9f666d472884"},
{file = "cryptography-37.0.4-cp36-abi3-macosx_10_10_x86_64.whl", hash = "sha256:a958c52505c8adf0d3822703078580d2c0456dd1d27fabfb6f76fe63d2971cd6"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:f721d1885ecae9078c3f6bbe8a88bc0786b6e749bf32ccec1ef2b18929a05046"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:3d41b965b3380f10e4611dbae366f6dc3cefc7c9ac4e8842a806b9672ae9add5"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:80f49023dd13ba35f7c34072fa17f604d2f19bf0989f292cedf7ab5770b87a0b"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f2dcb0b3b63afb6df7fd94ec6fbddac81b5492513f7b0436210d390c14d46ee8"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_24_x86_64.whl", hash = "sha256:b7f8dd0d4c1f21759695c05a5ec8536c12f31611541f8904083f3dc582604280"},
{file = "cryptography-37.0.4-cp36-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:30788e070800fec9bbcf9faa71ea6d8068f5136f60029759fd8c3efec3c9dcb3"},
{file = "cryptography-37.0.4-cp36-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:190f82f3e87033821828f60787cfa42bff98404483577b591429ed99bed39d59"},
{file = "cryptography-37.0.4-cp36-abi3-win32.whl", hash = "sha256:b62439d7cd1222f3da897e9a9fe53bbf5c104fff4d60893ad1355d4c14a24157"},
{file = "cryptography-37.0.4-cp36-abi3-win_amd64.whl", hash = "sha256:f7a6de3e98771e183645181b3627e2563dcde3ce94a9e42a3f427d2255190327"},
{file = "cryptography-37.0.4-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6bc95ed67b6741b2607298f9ea4932ff157e570ef456ef7ff0ef4884a134cc4b"},
{file = "cryptography-37.0.4-pp37-pypy37_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:f8c0a6e9e1dd3eb0414ba320f85da6b0dcbd543126e30fcc546e7372a7fbf3b9"},
{file = "cryptography-37.0.4-pp38-pypy38_pp73-macosx_10_10_x86_64.whl", hash = "sha256:e007f052ed10cc316df59bc90fbb7ff7950d7e2919c9757fd42a2b8ecf8a5f67"},
{file = "cryptography-37.0.4-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7bc997818309f56c0038a33b8da5c0bfbb3f1f067f315f9abd6fc07ad359398d"},
{file = "cryptography-37.0.4-pp38-pypy38_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:d204833f3c8a33bbe11eda63a54b1aad7aa7456ed769a982f21ec599ba5fa282"},
{file = "cryptography-37.0.4-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:75976c217f10d48a8b5a8de3d70c454c249e4b91851f6838a4e48b8f41eb71aa"},
{file = "cryptography-37.0.4-pp39-pypy39_pp73-macosx_10_10_x86_64.whl", hash = "sha256:7099a8d55cd49b737ffc99c17de504f2257e3787e02abe6d1a6d136574873441"},
{file = "cryptography-37.0.4-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2be53f9f5505673eeda5f2736bea736c40f051a739bfae2f92d18aed1eb54596"},
{file = "cryptography-37.0.4-pp39-pypy39_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:91ce48d35f4e3d3f1d83e29ef4a9267246e6a3be51864a5b7d2247d5086fa99a"},
{file = "cryptography-37.0.4-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:4c590ec31550a724ef893c50f9a97a0c14e9c851c85621c5650d699a7b88f7ab"},
{file = "cryptography-37.0.4.tar.gz", hash = "sha256:63f9c17c0e2474ccbebc9302ce2f07b55b3b3fcb211ded18a42d5764f5c10a82"},
]
cycler = [
{file = "cycler-0.11.0-py3-none-any.whl", hash = "sha256:3a27e95f763a428a739d2add979fa7494c912a32c17c4c38c4d5f082cad165a3"},
{file = "cycler-0.11.0.tar.gz", hash = "sha256:9c87405839a19696e837b3b818fed3f5f69f16f1eec1a1ad77e043dcea9c772f"},
]
cython = [
{file = "Cython-0.29.32-cp27-cp27m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:39afb4679b8c6bf7ccb15b24025568f4f9b4d7f9bf3cbd981021f542acecd75b"},
{file = "Cython-0.29.32-cp27-cp27m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:dbee03b8d42dca924e6aa057b836a064c769ddfd2a4c2919e65da2c8a362d528"},
{file = "Cython-0.29.32-cp27-cp27mu-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:5ba622326f2862f9c1f99ca8d47ade49871241920a352c917e16861e25b0e5c3"},
{file = "Cython-0.29.32-cp27-cp27mu-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:e6ffa08aa1c111a1ebcbd1cf4afaaec120bc0bbdec3f2545f8bb7d3e8e77a1cd"},
{file = "Cython-0.29.32-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:97335b2cd4acebf30d14e2855d882de83ad838491a09be2011745579ac975833"},
{file = "Cython-0.29.32-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:06be83490c906b6429b4389e13487a26254ccaad2eef6f3d4ee21d8d3a4aaa2b"},
{file = "Cython-0.29.32-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:eefd2b9a5f38ded8d859fe96cc28d7d06e098dc3f677e7adbafda4dcdd4a461c"},
{file = "Cython-0.29.32-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:5514f3b4122cb22317122a48e175a7194e18e1803ca555c4c959d7dfe68eaf98"},
{file = "Cython-0.29.32-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:656dc5ff1d269de4d11ee8542f2ffd15ab466c447c1f10e5b8aba6f561967276"},
{file = "Cython-0.29.32-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:cdf10af3e2e3279dc09fdc5f95deaa624850a53913f30350ceee824dc14fc1a6"},
{file = "Cython-0.29.32-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:3875c2b2ea752816a4d7ae59d45bb546e7c4c79093c83e3ba7f4d9051dd02928"},
{file = "Cython-0.29.32-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:79e3bab19cf1b021b613567c22eb18b76c0c547b9bc3903881a07bfd9e7e64cf"},
{file = "Cython-0.29.32-cp35-cp35m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b0595aee62809ba353cebc5c7978e0e443760c3e882e2c7672c73ffe46383673"},
{file = "Cython-0.29.32-cp35-cp35m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:0ea8267fc373a2c5064ad77d8ff7bf0ea8b88f7407098ff51829381f8ec1d5d9"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:c8e8025f496b5acb6ba95da2fb3e9dacffc97d9a92711aacfdd42f9c5927e094"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:afbce249133a830f121b917f8c9404a44f2950e0e4f5d1e68f043da4c2e9f457"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:513e9707407608ac0d306c8b09d55a28be23ea4152cbd356ceaec0f32ef08d65"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e83228e0994497900af954adcac27f64c9a57cd70a9ec768ab0cb2c01fd15cf1"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:ea1dcc07bfb37367b639415333cfbfe4a93c3be340edf1db10964bc27d42ed64"},
{file = "Cython-0.29.32-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:8669cadeb26d9a58a5e6b8ce34d2c8986cc3b5c0bfa77eda6ceb471596cb2ec3"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:ed087eeb88a8cf96c60fb76c5c3b5fb87188adee5e179f89ec9ad9a43c0c54b3"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:3f85eb2343d20d91a4ea9cf14e5748092b376a64b7e07fc224e85b2753e9070b"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:63b79d9e1f7c4d1f498ab1322156a0d7dc1b6004bf981a8abda3f66800e140cd"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e1958e0227a4a6a2c06fd6e35b7469de50adf174102454db397cec6e1403cce3"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:856d2fec682b3f31583719cb6925c6cdbb9aa30f03122bcc45c65c8b6f515754"},
{file = "Cython-0.29.32-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:479690d2892ca56d34812fe6ab8f58e4b2e0129140f3d94518f15993c40553da"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:67fdd2f652f8d4840042e2d2d91e15636ba2bcdcd92e7e5ffbc68e6ef633a754"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:4a4b03ab483271f69221c3210f7cde0dcc456749ecf8243b95bc7a701e5677e0"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:40eff7aa26e91cf108fd740ffd4daf49f39b2fdffadabc7292b4b7dc5df879f0"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:0bbc27abdf6aebfa1bce34cd92bd403070356f28b0ecb3198ff8a182791d58b9"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:cddc47ec746a08603037731f5d10aebf770ced08666100bd2cdcaf06a85d4d1b"},
{file = "Cython-0.29.32-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:eca3065a1279456e81c615211d025ea11bfe4e19f0c5650b859868ca04b3fcbd"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:d968ffc403d92addf20b68924d95428d523436adfd25cf505d427ed7ba3bee8b"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:f3fd44cc362eee8ae569025f070d56208908916794b6ab21e139cea56470a2b3"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:b6da3063c5c476f5311fd76854abae6c315f1513ef7d7904deed2e774623bbb9"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:061e25151c38f2361bc790d3bcf7f9d9828a0b6a4d5afa56fbed3bd33fb2373a"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:f9944013588a3543fca795fffb0a070a31a243aa4f2d212f118aa95e69485831"},
{file = "Cython-0.29.32-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:07d173d3289415bb496e72cb0ddd609961be08fe2968c39094d5712ffb78672b"},
{file = "Cython-0.29.32-py2.py3-none-any.whl", hash = "sha256:eeb475eb6f0ccf6c039035eb4f0f928eb53ead88777e0a760eccb140ad90930b"},
{file = "Cython-0.29.32.tar.gz", hash = "sha256:8733cf4758b79304f2a4e39ebfac5e92341bce47bcceb26c1254398b2f8c1af7"},
]
debugpy = [
{file = "debugpy-1.6.3-cp310-cp310-macosx_10_15_x86_64.whl", hash = "sha256:c4b2bd5c245eeb49824bf7e539f95fb17f9a756186e51c3e513e32999d8846f3"},
{file = "debugpy-1.6.3-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:b8deaeb779699350deeed835322730a3efec170b88927debc9ba07a1a38e2585"},
{file = "debugpy-1.6.3-cp310-cp310-win32.whl", hash = "sha256:fc233a0160f3b117b20216f1169e7211b83235e3cd6749bcdd8dbb72177030c7"},
{file = "debugpy-1.6.3-cp310-cp310-win_amd64.whl", hash = "sha256:dda8652520eae3945833e061cbe2993ad94a0b545aebd62e4e6b80ee616c76b2"},
{file = "debugpy-1.6.3-cp37-cp37m-macosx_10_15_x86_64.whl", hash = "sha256:d5c814596a170a0a58fa6fad74947e30bfd7e192a5d2d7bd6a12156c2899e13a"},
{file = "debugpy-1.6.3-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:c4cd6f37e3c168080d61d698390dfe2cd9e74ebf80b448069822a15dadcda57d"},
{file = "debugpy-1.6.3-cp37-cp37m-win32.whl", hash = "sha256:3c9f985944a30cfc9ae4306ac6a27b9c31dba72ca943214dad4a0ab3840f6161"},
{file = "debugpy-1.6.3-cp37-cp37m-win_amd64.whl", hash = "sha256:5ad571a36cec137ae6ed951d0ff75b5e092e9af6683da084753231150cbc5b25"},
{file = "debugpy-1.6.3-cp38-cp38-macosx_10_15_x86_64.whl", hash = "sha256:adcfea5ea06d55d505375995e150c06445e2b20cd12885bcae566148c076636b"},
{file = "debugpy-1.6.3-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:daadab4403427abd090eccb38d8901afd8b393e01fd243048fab3f1d7132abb4"},
{file = "debugpy-1.6.3-cp38-cp38-win32.whl", hash = "sha256:6efc30325b68e451118b795eff6fe8488253ca3958251d5158106d9c87581bc6"},
{file = "debugpy-1.6.3-cp38-cp38-win_amd64.whl", hash = "sha256:86d784b72c5411c833af1cd45b83d80c252b77c3bfdb43db17c441d772f4c734"},
{file = "debugpy-1.6.3-cp39-cp39-macosx_10_15_x86_64.whl", hash = "sha256:4e255982552b0edfe3a6264438dbd62d404baa6556a81a88f9420d3ed79b06ae"},
{file = "debugpy-1.6.3-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:cca23cb6161ac89698d629d892520327dd1be9321c0960e610bbcb807232b45d"},
{file = "debugpy-1.6.3-cp39-cp39-win32.whl", hash = "sha256:7c302095a81be0d5c19f6529b600bac971440db3e226dce85347cc27e6a61908"},
{file = "debugpy-1.6.3-cp39-cp39-win_amd64.whl", hash = "sha256:34d2cdd3a7c87302ba5322b86e79c32c2115be396f3f09ca13306d8a04fe0f16"},
{file = "debugpy-1.6.3-py2.py3-none-any.whl", hash = "sha256:84c39940a0cac410bf6aa4db00ba174f973eef521fbe9dd058e26bcabad89c4f"},
{file = "debugpy-1.6.3.zip", hash = "sha256:e8922090514a890eec99cfb991bab872dd2e353ebb793164d5f01c362b9a40bf"},
]
decorator = [
{file = "decorator-5.1.1-py3-none-any.whl", hash = "sha256:b8c3f85900b9dc423225913c5aace94729fe1fa9763b38939a95226f02d37186"},
{file = "decorator-5.1.1.tar.gz", hash = "sha256:637996211036b6385ef91435e4fae22989472f9d571faba8927ba8253acbc330"},
]
defusedxml = [
{file = "defusedxml-0.7.1-py2.py3-none-any.whl", hash = "sha256:a352e7e428770286cc899e2542b6cdaedb2b4953ff269a210103ec58f6198a61"},
{file = "defusedxml-0.7.1.tar.gz", hash = "sha256:1bb3032db185915b62d7c6209c5a8792be6a32ab2fedacc84e01b52c51aa3e69"},
]
dill = [
{file = "dill-0.3.5.1-py2.py3-none-any.whl", hash = "sha256:33501d03270bbe410c72639b350e941882a8b0fd55357580fbc873fba0c59302"},
{file = "dill-0.3.5.1.tar.gz", hash = "sha256:d75e41f3eff1eee599d738e76ba8f4ad98ea229db8b085318aa2b3333a208c86"},
]
docutils = [
{file = "docutils-0.17.1-py2.py3-none-any.whl", hash = "sha256:cf316c8370a737a022b72b56874f6602acf974a37a9fba42ec2876387549fc61"},
{file = "docutils-0.17.1.tar.gz", hash = "sha256:686577d2e4c32380bb50cbb22f575ed742d58168cee37e99117a854bcd88f125"},
]
econml = [
{file = "econml-0.12.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:c8c6d2c34f98eb506d4ac48f705f6d959ccdfdeada9a0cc555afcba3233a4d0a"},
{file = "econml-0.12.0-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:c9493076c801110ebd9e15a7d162e768d33445367272a8b4f88c70898efa7a8d"},
{file = "econml-0.12.0-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:5c5c278771bf405ddf0790fc72149af9d05aec095e00c5ad47e08ab9e8da47d9"},
{file = "econml-0.12.0-cp36-cp36m-win32.whl", hash = "sha256:0477610ae12923a98bc23bdff4f9f5be12c692704d5ec9ebea8f5f4c3010b9ff"},
{file = "econml-0.12.0-cp36-cp36m-win_amd64.whl", hash = "sha256:2cea261af4315d3ec4cb6a91ab6b4aa664b3d3c6f8d235c8778351fb259d69c2"},
{file = "econml-0.12.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:3962e0f497c58724269ebf74a295dd8e2f2494e6cff00f3fc582b13b73d9dda8"},
{file = "econml-0.12.0-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:70ce227be0b91fd463d7a48ab4b58aa562793efa779f74254a38a03c42bf6d9e"},
{file = "econml-0.12.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:80f78f25b259991872e66665ca64cf108fc405b6d6cf20ebbff30fae392b3258"},
{file = "econml-0.12.0-cp37-cp37m-win32.whl", hash = "sha256:1e780e2e4dbabcff65c2ed488b8751203a6f7bfe0c9c8fd12a0a930b273dbafe"},
{file = "econml-0.12.0-cp37-cp37m-win_amd64.whl", hash = "sha256:7248c62daa5b8662e1368fe3926a9a7e5ab88c1ee29ae492b7362a9124151071"},
{file = "econml-0.12.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:aacdb2c96ea2377f7c4f14fbf3fb6ded5edfccf17decc706b5147564129938b4"},
{file = "econml-0.12.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:f87a21636872e32e627093711c65d1333be590f0aef0fa2477717442ada57dfb"},
{file = "econml-0.12.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:1763951ca85b3f34072d0f44e31cebb18e91a281fc94212db9b08124f0efe206"},
{file = "econml-0.12.0-cp38-cp38-win32.whl", hash = "sha256:2e4339e0dcbbc280ac95c4ac446026853f77cd22de9a4c7421b933e717ce8626"},
{file = "econml-0.12.0-cp38-cp38-win_amd64.whl", hash = "sha256:473377685874bf9725a4c6fdd571a858d942abd8a2f4664050da426322d322c7"},
{file = "econml-0.12.0.tar.gz", hash = "sha256:4a8196d6c785e9e0e42b2fa35dd058eb37166c75dcdb90ada953feea6fbb854b"},
]
entrypoints = [
{file = "entrypoints-0.4-py3-none-any.whl", hash = "sha256:f174b5ff827504fd3cd97cc3f8649f3693f51538c7e4bdf3ef002c8429d42f9f"},
{file = "entrypoints-0.4.tar.gz", hash = "sha256:b706eddaa9218a19ebcd67b56818f05bb27589b1ca9e8d797b74affad4ccacd4"},
]
executing = [
{file = "executing-0.10.0-py2.py3-none-any.whl", hash = "sha256:9c745f80cda11eb22b62cbecf21156491a794eb56ab06f9d286a44e62822b24e"},
{file = "executing-0.10.0.tar.gz", hash = "sha256:d1cd87c2e371e9966261410c5b3769d6df2f9e4a79a83eebd2662dd3388f9833"},
]
fastjsonschema = [
{file = "fastjsonschema-2.16.1-py3-none-any.whl", hash = "sha256:2f7158c4de792555753d6c2277d6a2af2d406dfd97aeca21d17173561ede4fe6"},
{file = "fastjsonschema-2.16.1.tar.gz", hash = "sha256:d6fa3ffbe719768d70e298b9fb847484e2bdfdb7241ed052b8d57a9294a8c334"},
]
flake8 = [
{file = "flake8-4.0.1-py2.py3-none-any.whl", hash = "sha256:479b1304f72536a55948cb40a32dce8bb0ffe3501e26eaf292c7e60eb5e0428d"},
{file = "flake8-4.0.1.tar.gz", hash = "sha256:806e034dda44114815e23c16ef92f95c91e4c71100ff52813adf7132a6ad870d"},
]
flaky = [
{file = "flaky-3.7.0-py2.py3-none-any.whl", hash = "sha256:d6eda73cab5ae7364504b7c44670f70abed9e75f77dd116352f662817592ec9c"},
{file = "flaky-3.7.0.tar.gz", hash = "sha256:3ad100780721a1911f57a165809b7ea265a7863305acb66708220820caf8aa0d"},
]
flatbuffers = [
{file = "flatbuffers-1.12-py2.py3-none-any.whl", hash = "sha256:9e9ef47fa92625c4721036e7c4124182668dc6021d9e7c73704edd395648deb9"},
{file = "flatbuffers-1.12.tar.gz", hash = "sha256:63bb9a722d5e373701913e226135b28a6f6ac200d5cc7b4d919fa38d73b44610"},
]
fonttools = [
{file = "fonttools-4.36.0-py3-none-any.whl", hash = "sha256:cb91ef8d5a435d90aeb3ab814b2548c6b515df5bc13b4c5adaa23778f2f79823"},
{file = "fonttools-4.36.0.zip", hash = "sha256:e637d2fe06bddabbfc488e02ef32d04d561e3c71e9ba11abc7782ea753ceb218"},
]
future = [
{file = "future-0.18.2.tar.gz", hash = "sha256:b1bead90b70cf6ec3f0710ae53a525360fa360d306a86583adc6bf83a4db537d"},
]
gast = [
{file = "gast-0.4.0-py3-none-any.whl", hash = "sha256:b7adcdd5adbebf1adf17378da5ba3f543684dbec47b1cda1f3997e573cd542c4"},
{file = "gast-0.4.0.tar.gz", hash = "sha256:40feb7b8b8434785585ab224d1568b857edb18297e5a3047f1ba012bc83b42c1"},
]
google-auth = [
{file = "google-auth-2.10.0.tar.gz", hash = "sha256:7904dbd44b745c7323fef29565adee2fe7ff48473e2d94443aced40b0404a395"},
{file = "google_auth-2.10.0-py2.py3-none-any.whl", hash = "sha256:1deba4a54f95ef67b4139eaf5c20eaa7047215eec9f6a2344599b8596db8863b"},
]
google-auth-oauthlib = [
{file = "google-auth-oauthlib-0.4.6.tar.gz", hash = "sha256:a90a072f6993f2c327067bf65270046384cda5a8ecb20b94ea9a687f1f233a7a"},
{file = "google_auth_oauthlib-0.4.6-py2.py3-none-any.whl", hash = "sha256:3f2a6e802eebbb6fb736a370fbf3b055edcb6b52878bf2f26330b5e041316c73"},
]
google-pasta = [
{file = "google-pasta-0.2.0.tar.gz", hash = "sha256:c9f2c8dfc8f96d0d5808299920721be30c9eec37f2389f28904f454565c8a16e"},
{file = "google_pasta-0.2.0-py2-none-any.whl", hash = "sha256:4612951da876b1a10fe3960d7226f0c7682cf901e16ac06e473b267a5afa8954"},
{file = "google_pasta-0.2.0-py3-none-any.whl", hash = "sha256:b32482794a366b5366a32c92a9a9201b107821889935a02b3e51f6b432ea84ed"},
]
graphviz = [
{file = "graphviz-0.20.1-py3-none-any.whl", hash = "sha256:587c58a223b51611c0cf461132da386edd896a029524ca61a1462b880bf97977"},
{file = "graphviz-0.20.1.zip", hash = "sha256:8c58f14adaa3b947daf26c19bc1e98c4e0702cdc31cf99153e6f06904d492bf8"},
]
grpcio = [
{file = "grpcio-1.48.0-cp310-cp310-linux_armv7l.whl", hash = "sha256:4a049a032144641ed5d073535c0dc69eb6029187cc729a66946c86dcc8eec3a1"},
{file = "grpcio-1.48.0-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:f8bc76f5cd95f5476e5285fe5d3704a9332586a569fbbccef551b0b6f7a270f9"},
{file = "grpcio-1.48.0-cp310-cp310-manylinux_2_17_aarch64.whl", hash = "sha256:448d397fe88e9fef8170f019b86abdc4d554ae311aaf4dbff1532fde227d3308"},
{file = "grpcio-1.48.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8f9b6b6f7c83869d2316c5d13f953381881a16741275a34ec5ed5762f11b206e"},
{file = "grpcio-1.48.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5bd8541c4b6b43c9024496d30b4a12346325d3a17a1f3c80ad8924caed1e35c3"},
{file = "grpcio-1.48.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:877d33aeba05ae0b9e81761a694914ed33613f655c35f6bbcf4ebbcb984e0167"},
{file = "grpcio-1.48.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:cd01a8201fd8ab2ce496f7e65975da1f1e629eac8eea84ead0fd77e32e4350cd"},
{file = "grpcio-1.48.0-cp310-cp310-win32.whl", hash = "sha256:0388da923dff58ba7f711233e41c2b749b5817b8e0f137a107672d9c15a1009c"},
{file = "grpcio-1.48.0-cp310-cp310-win_amd64.whl", hash = "sha256:8dcffdb8921fd88857ae350fd579277a5f9315351e89ed9094ef28927a46d40d"},
{file = "grpcio-1.48.0-cp36-cp36m-linux_armv7l.whl", hash = "sha256:2138c50331232f56178c2b36dcfa6ad67aad705fe410955f3b2a53d722191b89"},
{file = "grpcio-1.48.0-cp36-cp36m-macosx_10_10_x86_64.whl", hash = "sha256:af2d80f142da2a6af45204a5ca2374e2747af07a99de54a1164111e169a761ff"},
{file = "grpcio-1.48.0-cp36-cp36m-manylinux_2_17_aarch64.whl", hash = "sha256:59284bd4cdf47c147c26d91aca693765318d524328f6ece2a1a0b85a12a362af"},
{file = "grpcio-1.48.0-cp36-cp36m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cc3ebfe356c0c6750379cd194bf2b7e5d1d2f29db1832358f05a73e9290db98c"},
{file = "grpcio-1.48.0-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dc2619a31339e1c53731f54761f1a2cb865d3421f690e00ef3e92f90d2a0c5ae"},
{file = "grpcio-1.48.0-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:7df637405de328a54c1c8c08a3206f974c7a577730f90644af4c3400b7bfde2d"},
{file = "grpcio-1.48.0-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:9e73b95969a579798bfbeb85d376695cce5172357fb52e450467ceb8e7365152"},
{file = "grpcio-1.48.0-cp36-cp36m-win32.whl", hash = "sha256:059e9d58b5aba7fb9eabe3a4d2ac49e1dcbc2b54b0f166f6475e40b7f4435343"},
{file = "grpcio-1.48.0-cp36-cp36m-win_amd64.whl", hash = "sha256:7cebcf645170f0c82ef71769544f9ac4515993a4d367f5900aba2eb4ecd2a32f"},
{file = "grpcio-1.48.0-cp37-cp37m-linux_armv7l.whl", hash = "sha256:8af3a8845df35b838104d6fb1ae7f4969d248cf037fa2794916d31e917346f72"},
{file = "grpcio-1.48.0-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:a1ef40975ec9ced6c17ce7fbec9825823da782fa606f0b92392646ff3886f198"},
{file = "grpcio-1.48.0-cp37-cp37m-manylinux_2_17_aarch64.whl", hash = "sha256:7cccbf6db31f2a78e1909047ff69620f94a4e6e53251858e9502fbbff5714b48"},
{file = "grpcio-1.48.0-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1f3f142579f58def64c0850f0bb0eb1b425ae885f5669dda5b73ade64ad2b753"},
{file = "grpcio-1.48.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:656c6f6f7b815bca3054780b8cdfa1e4e37cd36c887a48558d00c2cf85f31697"},
{file = "grpcio-1.48.0-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:cba4538e8a2ef123ea570e7b1d62162e158963c2471e35d79eb9690c971a10c0"},
{file = "grpcio-1.48.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:9daa67820fafceec6194ed1686c1783816e62d6756ff301ba93e682948836846"},
{file = "grpcio-1.48.0-cp37-cp37m-win32.whl", hash = "sha256:7ec264a7fb413e0c804a7a48a6f7d7212742955a60724c44d793da35a8f30873"},
{file = "grpcio-1.48.0-cp37-cp37m-win_amd64.whl", hash = "sha256:a2b1b33b92359388b8164807313dcbb3317101b038a5d54342982560329d958f"},
{file = "grpcio-1.48.0-cp38-cp38-linux_armv7l.whl", hash = "sha256:7b820696a5ce7b98f459f234698cb323f89b355373789188efa126d7f47a2a92"},
{file = "grpcio-1.48.0-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:e4dfae66ebc165c46c5b7048eb554472ee72fbaab2c2c2da7f9b1621c81e077c"},
{file = "grpcio-1.48.0-cp38-cp38-manylinux_2_17_aarch64.whl", hash = "sha256:f7115038edce33b494e0138b0bd31a2eb6595d45e2eed23be46bc32886feb741"},
{file = "grpcio-1.48.0-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b4e996282238943ca114628255be61980e38b25f73a08ae2ffd02b63eaf70d3a"},
{file = "grpcio-1.48.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:13dad31f5155fa555d393511cc8108c41b1b5b54dc4c24c27d4694ddd7a78fad"},
{file = "grpcio-1.48.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:c84b9d90b2641963de98b35bb7a2a51f78119fe5bd00ef27246ba9f4f0835e36"},
{file = "grpcio-1.48.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:41b65166779d7dafac4c98380ac19f690f1c5fe18083a71d370df87b24dd30ff"},
{file = "grpcio-1.48.0-cp38-cp38-win32.whl", hash = "sha256:b890e5f5fbc21cb994894f73ecb2faaa66697d8debcb228a5adb0622b9bec3b2"},
{file = "grpcio-1.48.0-cp38-cp38-win_amd64.whl", hash = "sha256:5fe3af539d2f50891ed93aed3064ffbcc38bf848aa3f7ed1fbedcce139c57302"},
{file = "grpcio-1.48.0-cp39-cp39-linux_armv7l.whl", hash = "sha256:a4ed57f4e3d91259551e6765782b22d9e8b8178fec43ebf8e1b2c392c4ced37b"},
{file = "grpcio-1.48.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:60843d8184e171886dd7a93d6672e2ef0b08dfd4f88da7421c10b46b6e031ac4"},
{file = "grpcio-1.48.0-cp39-cp39-manylinux_2_17_aarch64.whl", hash = "sha256:0ecba22f25ccde2442be7e7dd7fa746905d628f03312b4a0c9961f0d99771f53"},
{file = "grpcio-1.48.0-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:34f5917f0c49a04633dc12d483c8aee6f6d9f69133b700214d3703f72a72f501"},
{file = "grpcio-1.48.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f4c4ad8ad7e2cf3a272cbc96734d56635e6543939022f17e0c4487f7d2a45bf9"},
{file = "grpcio-1.48.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:111fb2f5f4a069f331ae23106145fd16dd4e1112ca223858a922068614dac6d2"},
{file = "grpcio-1.48.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:beb0573daa49889efcfea0a6e995b4f39d481aa1b94e1257617406ef417b56a6"},
{file = "grpcio-1.48.0-cp39-cp39-win32.whl", hash = "sha256:ce70254a082cb767217b2fdee374cc79199d338d46140753438cd6d67c609b2f"},
{file = "grpcio-1.48.0-cp39-cp39-win_amd64.whl", hash = "sha256:ae3fd135666448058fe277d93c10e0f18345fbcbb015c4642de2fa3db6f0c205"},
{file = "grpcio-1.48.0.tar.gz", hash = "sha256:eaf4bb73819863440727195411ab3b5c304f6663625e66f348e91ebe0a039306"},
]
h5py = [
{file = "h5py-3.7.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d77af42cb751ad6cc44f11bae73075a07429a5cf2094dfde2b1e716e059b3911"},
{file = "h5py-3.7.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:63beb8b7b47d0896c50de6efb9a1eaa81dbe211f3767e7dd7db159cea51ba37a"},
{file = "h5py-3.7.0-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:04e2e1e2fc51b8873e972a08d2f89625ef999b1f2d276199011af57bb9fc7851"},
{file = "h5py-3.7.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f73307c876af49aa869ec5df1818e9bb0bdcfcf8a5ba773cc45a4fba5a286a5c"},
{file = "h5py-3.7.0-cp310-cp310-win_amd64.whl", hash = "sha256:f514b24cacdd983e61f8d371edac8c1b780c279d0acb8485639e97339c866073"},
{file = "h5py-3.7.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:43fed4d13743cf02798a9a03a360a88e589d81285e72b83f47d37bb64ed44881"},
{file = "h5py-3.7.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:c038399ce09a58ff8d89ec3e62f00aa7cb82d14f34e24735b920e2a811a3a426"},
{file = "h5py-3.7.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:03d64fb86bb86b978928bad923b64419a23e836499ec6363e305ad28afd9d287"},
{file = "h5py-3.7.0-cp37-cp37m-win_amd64.whl", hash = "sha256:e5b7820b75f9519499d76cc708e27242ccfdd9dfb511d6deb98701961d0445aa"},
{file = "h5py-3.7.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:a9351d729ea754db36d175098361b920573fdad334125f86ac1dd3a083355e20"},
{file = "h5py-3.7.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:6776d896fb90c5938de8acb925e057e2f9f28755f67ec3edcbc8344832616c38"},
{file = "h5py-3.7.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:0a047fddbe6951bce40e9cde63373c838a978c5e05a011a682db9ba6334b8e85"},
{file = "h5py-3.7.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0798a9c0ff45f17d0192e4d7114d734cac9f8b2b2c76dd1d923c4d0923f27bb6"},
{file = "h5py-3.7.0-cp38-cp38-win_amd64.whl", hash = "sha256:0d8de8cb619fc597da7cf8cdcbf3b7ff8c5f6db836568afc7dc16d21f59b2b49"},
{file = "h5py-3.7.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f084bbe816907dfe59006756f8f2d16d352faff2d107f4ffeb1d8de126fc5dc7"},
{file = "h5py-3.7.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:1fcb11a2dc8eb7ddcae08afd8fae02ba10467753a857fa07a404d700a93f3d53"},
{file = "h5py-3.7.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:ed43e2cc4f511756fd664fb45d6b66c3cbed4e3bd0f70e29c37809b2ae013c44"},
{file = "h5py-3.7.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9e7535df5ee3dc3e5d1f408fdfc0b33b46bc9b34db82743c82cd674d8239b9ad"},
{file = "h5py-3.7.0-cp39-cp39-win_amd64.whl", hash = "sha256:9e2ad2aa000f5b1e73b5dfe22f358ca46bf1a2b6ca394d9659874d7fc251731a"},
{file = "h5py-3.7.0.tar.gz", hash = "sha256:3fcf37884383c5da64846ab510190720027dca0768def34dd8dcb659dbe5cbf3"},
]
idna = [
{file = "idna-3.3-py3-none-any.whl", hash = "sha256:84d9dd047ffa80596e0f246e2eab0b391788b0503584e8945f2368256d2735ff"},
{file = "idna-3.3.tar.gz", hash = "sha256:9d643ff0a55b762d5cdb124b8eaa99c66322e2157b69160bc32796e824360e6d"},
]
imagesize = [
{file = "imagesize-1.4.1-py2.py3-none-any.whl", hash = "sha256:0d8d18d08f840c19d0ee7ca1fd82490fdc3729b7ac93f49870406ddde8ef8d8b"},
{file = "imagesize-1.4.1.tar.gz", hash = "sha256:69150444affb9cb0d5cc5a92b3676f0b2fb7cd9ae39e947a5e11a36b4497cd4a"},
]
importlib-metadata = [
{file = "importlib_metadata-4.12.0-py3-none-any.whl", hash = "sha256:7401a975809ea1fdc658c3aa4f78cc2195a0e019c5cbc4c06122884e9ae80c23"},
{file = "importlib_metadata-4.12.0.tar.gz", hash = "sha256:637245b8bab2b6502fcbc752cc4b7a6f6243bb02b31c5c26156ad103d3d45670"},
]
importlib-resources = [
{file = "importlib_resources-5.9.0-py3-none-any.whl", hash = "sha256:f78a8df21a79bcc30cfd400bdc38f314333de7c0fb619763f6b9dabab8268bb7"},
{file = "importlib_resources-5.9.0.tar.gz", hash = "sha256:5481e97fb45af8dcf2f798952625591c58fe599d0735d86b10f54de086a61681"},
]
iniconfig = [
{file = "iniconfig-1.1.1-py2.py3-none-any.whl", hash = "sha256:011e24c64b7f47f6ebd835bb12a743f2fbe9a26d4cecaa7f53bc4f35ee9da8b3"},
{file = "iniconfig-1.1.1.tar.gz", hash = "sha256:bc3af051d7d14b2ee5ef9969666def0cd1a000e121eaea580d4a313df4b37f32"},
]
ipykernel = [
{file = "ipykernel-6.15.1-py3-none-any.whl", hash = "sha256:d8969c5b23b0e453a23166da5a669c954db399789293fcb03fec5cb25367e43c"},
{file = "ipykernel-6.15.1.tar.gz", hash = "sha256:37acc3254caa8a0dafcddddc8dc863a60ad1b46487b68aee361d9a15bda98112"},
]
ipython = [
{file = "ipython-8.4.0-py3-none-any.whl", hash = "sha256:7ca74052a38fa25fe9bedf52da0be7d3fdd2fb027c3b778ea78dfe8c212937d1"},
{file = "ipython-8.4.0.tar.gz", hash = "sha256:f2db3a10254241d9b447232cec8b424847f338d9d36f9a577a6192c332a46abd"},
]
ipython-genutils = [
{file = "ipython_genutils-0.2.0-py2.py3-none-any.whl", hash = "sha256:72dd37233799e619666c9f639a9da83c34013a73e8bbc79a7a6348d93c61fab8"},
{file = "ipython_genutils-0.2.0.tar.gz", hash = "sha256:eb2e116e75ecef9d4d228fdc66af54269afa26ab4463042e33785b887c628ba8"},
]
ipywidgets = [
{file = "ipywidgets-8.0.1-py3-none-any.whl", hash = "sha256:fc0744df3a964ecfd68a6d2debe547fe89db252b8d7bb3db5740aba72edb0e6c"},
{file = "ipywidgets-8.0.1.tar.gz", hash = "sha256:1a296094203309e834f2781a275214d255ac5d266bbfa602f9f6915e1806614c"},
]
isort = [
{file = "isort-5.10.1-py3-none-any.whl", hash = "sha256:6f62d78e2f89b4500b080fe3a81690850cd254227f27f75c3a0c491a1f351ba7"},
{file = "isort-5.10.1.tar.gz", hash = "sha256:e8443a5e7a020e9d7f97f1d7d9cd17c88bcb3bc7e218bf9cf5095fe550be2951"},
]
jedi = [
{file = "jedi-0.18.1-py2.py3-none-any.whl", hash = "sha256:637c9635fcf47945ceb91cd7f320234a7be540ded6f3e99a50cb6febdfd1ba8d"},
{file = "jedi-0.18.1.tar.gz", hash = "sha256:74137626a64a99c8eb6ae5832d99b3bdd7d29a3850fe2aa80a4126b2a7d949ab"},
]
jeepney = [
{file = "jeepney-0.8.0-py3-none-any.whl", hash = "sha256:c0a454ad016ca575060802ee4d590dd912e35c122fa04e70306de3d076cce755"},
{file = "jeepney-0.8.0.tar.gz", hash = "sha256:5efe48d255973902f6badc3ce55e2aa6c5c3b3bc642059ef3a91247bcfcc5806"},
]
jinja2 = [
{file = "Jinja2-3.1.2-py3-none-any.whl", hash = "sha256:6088930bfe239f0e6710546ab9c19c9ef35e29792895fed6e6e31a023a182a61"},
{file = "Jinja2-3.1.2.tar.gz", hash = "sha256:31351a702a408a9e7595a8fc6150fc3f43bb6bf7e319770cbc0db9df9437e852"},
]
joblib = [
{file = "joblib-1.1.0-py2.py3-none-any.whl", hash = "sha256:f21f109b3c7ff9d95f8387f752d0d9c34a02aa2f7060c2135f465da0e5160ff6"},
{file = "joblib-1.1.0.tar.gz", hash = "sha256:4158fcecd13733f8be669be0683b96ebdbbd38d23559f54dca7205aea1bf1e35"},
]
jsonschema = [
{file = "jsonschema-4.13.0-py3-none-any.whl", hash = "sha256:870a61bb45050b81103faf6a4be00a0a906e06636ffcf0b84f5a2e51faf901ff"},
{file = "jsonschema-4.13.0.tar.gz", hash = "sha256:3776512df4f53f74e6e28fe35717b5b223c1756875486984a31bc9165e7fc920"},
]
jupyter = [
{file = "jupyter-1.0.0-py2.py3-none-any.whl", hash = "sha256:5b290f93b98ffbc21c0c7e749f054b3267782166d72fa5e3ed1ed4eaf34a2b78"},
{file = "jupyter-1.0.0.tar.gz", hash = "sha256:d9dc4b3318f310e34c82951ea5d6683f67bed7def4b259fafbfe4f1beb1d8e5f"},
{file = "jupyter-1.0.0.zip", hash = "sha256:3e1f86076bbb7c8c207829390305a2b1fe836d471ed54be66a3b8c41e7f46cc7"},
]
jupyter-client = [
{file = "jupyter_client-7.3.4-py3-none-any.whl", hash = "sha256:17d74b0d0a7b24f1c8c527b24fcf4607c56bee542ffe8e3418e50b21e514b621"},
{file = "jupyter_client-7.3.4.tar.gz", hash = "sha256:aa9a6c32054b290374f95f73bb0cae91455c58dfb84f65c8591912b8f65e6d56"},
]
jupyter-console = [
{file = "jupyter_console-6.4.4-py3-none-any.whl", hash = "sha256:756df7f4f60c986e7bc0172e4493d3830a7e6e75c08750bbe59c0a5403ad6dee"},
{file = "jupyter_console-6.4.4.tar.gz", hash = "sha256:172f5335e31d600df61613a97b7f0352f2c8250bbd1092ef2d658f77249f89fb"},
]
jupyter-core = [
{file = "jupyter_core-4.11.1-py3-none-any.whl", hash = "sha256:715e22bb6cc7db3718fddfac1f69f1c7e899ca00e42bdfd4bf3705452b9fd84a"},
{file = "jupyter_core-4.11.1.tar.gz", hash = "sha256:2e5f244d44894c4154d06aeae3419dd7f1b0ef4494dc5584929b398c61cfd314"},
]
jupyterlab-pygments = [
{file = "jupyterlab_pygments-0.2.2-py2.py3-none-any.whl", hash = "sha256:2405800db07c9f770863bcf8049a529c3dd4d3e28536638bd7c1c01d2748309f"},
{file = "jupyterlab_pygments-0.2.2.tar.gz", hash = "sha256:7405d7fde60819d905a9fa8ce89e4cd830e318cdad22a0030f7a901da705585d"},
]
jupyterlab-widgets = [
{file = "jupyterlab_widgets-3.0.2-py3-none-any.whl", hash = "sha256:98303a281f4004670cdcea2ef4aecba19c580adc297664c593f967025625c8c5"},
{file = "jupyterlab_widgets-3.0.2.tar.gz", hash = "sha256:47ab54cd165aa0cb3bcef1232d77471580cd2c36bbe2153fc5ba31e26ad87320"},
]
keras = [
{file = "keras-2.9.0-py2.py3-none-any.whl", hash = "sha256:55911256f89cfc9343c9fbe4b61ec45a2d33d89729cbe1ab9dcacf8b07b8b6ab"},
]
keras-preprocessing = [
{file = "Keras_Preprocessing-1.1.2-py2.py3-none-any.whl", hash = "sha256:7b82029b130ff61cc99b55f3bd27427df4838576838c5b2f65940e4fcec99a7b"},
{file = "Keras_Preprocessing-1.1.2.tar.gz", hash = "sha256:add82567c50c8bc648c14195bf544a5ce7c1f76761536956c3d2978970179ef3"},
]
keyring = [
{file = "keyring-23.8.2-py3-none-any.whl", hash = "sha256:10d2a8639663fe2090705a00b8c47c687cacdf97598ea9c11456679fa974473a"},
{file = "keyring-23.8.2.tar.gz", hash = "sha256:0d9973f8891850f1ade5f26aafd06bb16865fbbae3fc56b0defb6a14a2624003"},
]
kiwisolver = [
{file = "kiwisolver-1.4.4-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:2f5e60fabb7343a836360c4f0919b8cd0d6dbf08ad2ca6b9cf90bf0c76a3c4f6"},
{file = "kiwisolver-1.4.4-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:10ee06759482c78bdb864f4109886dff7b8a56529bc1609d4f1112b93fe6423c"},
{file = "kiwisolver-1.4.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c79ebe8f3676a4c6630fd3f777f3cfecf9289666c84e775a67d1d358578dc2e3"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:abbe9fa13da955feb8202e215c4018f4bb57469b1b78c7a4c5c7b93001699938"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:7577c1987baa3adc4b3c62c33bd1118c3ef5c8ddef36f0f2c950ae0b199e100d"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f8ad8285b01b0d4695102546b342b493b3ccc6781fc28c8c6a1bb63e95d22f09"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:8ed58b8acf29798b036d347791141767ccf65eee7f26bde03a71c944449e53de"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a68b62a02953b9841730db7797422f983935aeefceb1679f0fc85cbfbd311c32"},
{file = "kiwisolver-1.4.4-cp310-cp310-win32.whl", hash = "sha256:e92a513161077b53447160b9bd8f522edfbed4bd9759e4c18ab05d7ef7e49408"},
{file = "kiwisolver-1.4.4-cp310-cp310-win_amd64.whl", hash = "sha256:3fe20f63c9ecee44560d0e7f116b3a747a5d7203376abeea292ab3152334d004"},
{file = "kiwisolver-1.4.4-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:62ac9cc684da4cf1778d07a89bf5f81b35834cb96ca523d3a7fb32509380cbf6"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:41dae968a94b1ef1897cb322b39360a0812661dba7c682aa45098eb8e193dbdf"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:02f79693ec433cb4b5f51694e8477ae83b3205768a6fb48ffba60549080e295b"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d0611a0a2a518464c05ddd5a3a1a0e856ccc10e67079bb17f265ad19ab3c7597"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:db5283d90da4174865d520e7366801a93777201e91e79bacbac6e6927cbceede"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:1041feb4cda8708ce73bb4dcb9ce1ccf49d553bf87c3954bdfa46f0c3f77252c"},
{file = "kiwisolver-1.4.4-cp37-cp37m-win32.whl", hash = "sha256:a553dadda40fef6bfa1456dc4be49b113aa92c2a9a9e8711e955618cd69622e3"},
{file = "kiwisolver-1.4.4-cp37-cp37m-win_amd64.whl", hash = "sha256:03baab2d6b4a54ddbb43bba1a3a2d1627e82d205c5cf8f4c924dc49284b87166"},
{file = "kiwisolver-1.4.4-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:841293b17ad704d70c578f1f0013c890e219952169ce8a24ebc063eecf775454"},
{file = "kiwisolver-1.4.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:f4f270de01dd3e129a72efad823da90cc4d6aafb64c410c9033aba70db9f1ff0"},
{file = "kiwisolver-1.4.4-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:f9f39e2f049db33a908319cf46624a569b36983c7c78318e9726a4cb8923b26c"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c97528e64cb9ebeff9701e7938653a9951922f2a38bd847787d4a8e498cc83ae"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1d1573129aa0fd901076e2bfb4275a35f5b7aa60fbfb984499d661ec950320b0"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ad881edc7ccb9d65b0224f4e4d05a1e85cf62d73aab798943df6d48ab0cd79a1"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b428ef021242344340460fa4c9185d0b1f66fbdbfecc6c63eff4b7c29fad429d"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:2e407cb4bd5a13984a6c2c0fe1845e4e41e96f183e5e5cd4d77a857d9693494c"},
{file = "kiwisolver-1.4.4-cp38-cp38-win32.whl", hash = "sha256:75facbe9606748f43428fc91a43edb46c7ff68889b91fa31f53b58894503a191"},
{file = "kiwisolver-1.4.4-cp38-cp38-win_amd64.whl", hash = "sha256:5bce61af018b0cb2055e0e72e7d65290d822d3feee430b7b8203d8a855e78766"},
{file = "kiwisolver-1.4.4-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:8c808594c88a025d4e322d5bb549282c93c8e1ba71b790f539567932722d7bd8"},
{file = "kiwisolver-1.4.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f0a71d85ecdd570ded8ac3d1c0f480842f49a40beb423bb8014539a9f32a5897"},
{file = "kiwisolver-1.4.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:b533558eae785e33e8c148a8d9921692a9fe5aa516efbdff8606e7d87b9d5824"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:efda5fc8cc1c61e4f639b8067d118e742b812c930f708e6667a5ce0d13499e29"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:7c43e1e1206cd421cd92e6b3280d4385d41d7166b3ed577ac20444b6995a445f"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bc8d3bd6c72b2dd9decf16ce70e20abcb3274ba01b4e1c96031e0c4067d1e7cd"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4ea39b0ccc4f5d803e3337dd46bcce60b702be4d86fd0b3d7531ef10fd99a1ac"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:968f44fdbf6dd757d12920d63b566eeb4d5b395fd2d00d29d7ef00a00582aac9"},
{file = "kiwisolver-1.4.4-cp39-cp39-win32.whl", hash = "sha256:da7e547706e69e45d95e116e6939488d62174e033b763ab1496b4c29b76fabea"},
{file = "kiwisolver-1.4.4-cp39-cp39-win_amd64.whl", hash = "sha256:ba59c92039ec0a66103b1d5fe588fa546373587a7d68f5c96f743c3396afc04b"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:91672bacaa030f92fc2f43b620d7b337fd9a5af28b0d6ed3f77afc43c4a64b5a"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:787518a6789009c159453da4d6b683f468ef7a65bbde796bcea803ccf191058d"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:da152d8cdcab0e56e4f45eb08b9aea6455845ec83172092f09b0e077ece2cf7a"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:ecb1fa0db7bf4cff9dac752abb19505a233c7f16684c5826d1f11ebd9472b871"},
{file = "kiwisolver-1.4.4.tar.gz", hash = "sha256:d41997519fcba4a1e46eb4a2fe31bc12f0ff957b2b81bac28db24744f333e955"},
]
libclang = [
{file = "libclang-14.0.6-py2.py3-none-macosx_10_9_x86_64.whl", hash = "sha256:8791cf3c3b087c373a6d61e9199da7a541da922c9ddcfed1122090586b996d6e"},
{file = "libclang-14.0.6-py2.py3-none-macosx_11_0_arm64.whl", hash = "sha256:7b06fc76bd1e67c8b04b5719bf2ac5d6a323b289b245dfa9e468561d99538188"},
{file = "libclang-14.0.6-py2.py3-none-manylinux1_x86_64.whl", hash = "sha256:e429853939423f276a25140b0b702442d7da9a09e001c05e48df888336947614"},
{file = "libclang-14.0.6-py2.py3-none-manylinux2010_x86_64.whl", hash = "sha256:206d2789e4450a37d054e63b70451a6fc1873466397443fa13de2b3d4adb2796"},
{file = "libclang-14.0.6-py2.py3-none-manylinux2014_aarch64.whl", hash = "sha256:e2add1703129b2abe066fb1890afa880870a89fd6ab4ec5d2a7a8dc8d271677e"},
{file = "libclang-14.0.6-py2.py3-none-manylinux2014_armv7l.whl", hash = "sha256:5dd3c6fca1b007d308a4114afa8e4e9d32f32b2572520701d45fcc626ac5cd6c"},
{file = "libclang-14.0.6-py2.py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:cfb0e892ebb5dff6bd498ab5778adb8581f26a00fd8347b3c76c989fe2fd04f7"},
{file = "libclang-14.0.6-py2.py3-none-win_amd64.whl", hash = "sha256:ea03c12675151837660cdd5dce65bd89320896ac3421efef43a36678f113ce95"},
{file = "libclang-14.0.6-py2.py3-none-win_arm64.whl", hash = "sha256:2e4303e04517fcd11173cb2e51a7070eed71e16ef45d4e26a82c5e881cac3d27"},
{file = "libclang-14.0.6.tar.gz", hash = "sha256:9052a8284d8846984f6fa826b1d7460a66d3b23a486d782633b42b6e3b418789"},
]
lightgbm = [
{file = "lightgbm-3.3.2-py3-none-macosx_10_14_x86_64.macosx_10_15_x86_64.macosx_11_0_x86_64.whl", hash = "sha256:2e94bd1b3ab29d173102c9c1d80db2e27ad7e43b8ff5a74c5cb7984b37d19f45"},
{file = "lightgbm-3.3.2-py3-none-manylinux1_x86_64.whl", hash = "sha256:f4cba3b4f29336ad7e801cb32d9b948ea4cc5300dda650b78bcdfe36b3e2c4b2"},
{file = "lightgbm-3.3.2-py3-none-manylinux2014_aarch64.whl", hash = "sha256:8e788c56853316fc5d35db726d81bd002c721038c856853952287f68082e0158"},
{file = "lightgbm-3.3.2-py3-none-win_amd64.whl", hash = "sha256:e4f1529cad416066964f9af0efad208787861e9f2181b7f9ee7fc9bacc082d4f"},
{file = "lightgbm-3.3.2.tar.gz", hash = "sha256:5d25d16e77c844c297ece2044df57651139bc3c8ad8c4108916374267ac68b64"},
]
llvmlite = [
{file = "llvmlite-0.36.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:cc0f9b9644b4ab0e4a5edb17f1531d791630c88858220d3cc688d6edf10da100"},
{file = "llvmlite-0.36.0-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:f7918dbac02b1ebbfd7302ad8e8307d7877ab57d782d5f04b70ff9696b53c21b"},
{file = "llvmlite-0.36.0-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:7768658646c418b9b3beccb7044277a608bc8c62b82a85e73c7e5c065e4157c2"},
{file = "llvmlite-0.36.0-cp36-cp36m-win32.whl", hash = "sha256:05f807209a360d39526d98141b6f281b9c7c771c77a4d1fc22002440642c8de2"},
{file = "llvmlite-0.36.0-cp36-cp36m-win_amd64.whl", hash = "sha256:d1fdd63c371626c25ad834e1c6297eb76cf2f093a40dbb401a87b6476ab4e34e"},
{file = "llvmlite-0.36.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:7c4e7066447305d5095d0b0a9cae7b835d2f0fde143456b3124110eab0856426"},
{file = "llvmlite-0.36.0-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:9dad7e4bb042492914292aea3f4172eca84db731f9478250240955aedba95e08"},
{file = "llvmlite-0.36.0-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:1ce5bc0a638d874a08d4222be0a7e48e5df305d094c2ff8dec525ef32b581551"},
{file = "llvmlite-0.36.0-cp37-cp37m-win32.whl", hash = "sha256:dbedff0f6d417b374253a6bab39aa4b5364f1caab30c06ba8726904776fcf1cb"},
{file = "llvmlite-0.36.0-cp37-cp37m-win_amd64.whl", hash = "sha256:3b17fc4b0dd17bd29d7297d054e2915fad535889907c3f65232ee21f483447c5"},
{file = "llvmlite-0.36.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:b3a77e46e6053e2a86e607e87b97651dda81e619febb914824a927bff4e88737"},
{file = "llvmlite-0.36.0-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:048a7c117641c9be87b90005684e64a6f33ea0897ebab1df8a01214a10d6e79a"},
{file = "llvmlite-0.36.0-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:7db4b0eef93125af1c4092c64a3c73c7dc904101117ef53f8d78a1a499b8d5f4"},
{file = "llvmlite-0.36.0-cp38-cp38-win32.whl", hash = "sha256:50b1828bde514b31431b2bba1aa20b387f5625b81ad6e12fede430a04645e47a"},
{file = "llvmlite-0.36.0-cp38-cp38-win_amd64.whl", hash = "sha256:f608bae781b2d343e15e080c546468c5a6f35f57f0446923ea198dd21f23757e"},
{file = "llvmlite-0.36.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6a3abc8a8889aeb06bf9c4a7e5df5bc7bb1aa0aedd91a599813809abeec80b5a"},
{file = "llvmlite-0.36.0-cp39-cp39-manylinux2010_i686.whl", hash = "sha256:705f0323d931684428bb3451549603299bb5e17dd60fb979d67c3807de0debc1"},
{file = "llvmlite-0.36.0-cp39-cp39-manylinux2010_x86_64.whl", hash = "sha256:5a6548b4899facb182145147185e9166c69826fb424895f227e6b7cf924a8da1"},
{file = "llvmlite-0.36.0-cp39-cp39-win32.whl", hash = "sha256:ff52fb9c2be66b95b0e67d56fce11038397e5be1ea410ee53f5f1175fdbb107a"},
{file = "llvmlite-0.36.0-cp39-cp39-win_amd64.whl", hash = "sha256:1dee416ea49fd338c74ec15c0c013e5273b0961528169af06ff90772614f7f6c"},
{file = "llvmlite-0.36.0.tar.gz", hash = "sha256:765128fdf5f149ed0b889ffbe2b05eb1717f8e20a5c87fa2b4018fbcce0fcfc9"},
]
markdown = [
{file = "Markdown-3.4.1-py3-none-any.whl", hash = "sha256:08fb8465cffd03d10b9dd34a5c3fea908e20391a2a90b88d66362cb05beed186"},
{file = "Markdown-3.4.1.tar.gz", hash = "sha256:3b809086bb6efad416156e00a0da66fe47618a5d6918dd688f53f40c8e4cfeff"},
]
markupsafe = [
{file = "MarkupSafe-2.1.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:86b1f75c4e7c2ac2ccdaec2b9022845dbb81880ca318bb7a0a01fbf7813e3812"},
{file = "MarkupSafe-2.1.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:f121a1420d4e173a5d96e47e9a0c0dcff965afdf1626d28de1460815f7c4ee7a"},
{file = "MarkupSafe-2.1.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a49907dd8420c5685cfa064a1335b6754b74541bbb3706c259c02ed65b644b3e"},
{file = "MarkupSafe-2.1.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:10c1bfff05d95783da83491be968e8fe789263689c02724e0c691933c52994f5"},
{file = "MarkupSafe-2.1.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b7bd98b796e2b6553da7225aeb61f447f80a1ca64f41d83612e6139ca5213aa4"},
{file = "MarkupSafe-2.1.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:b09bf97215625a311f669476f44b8b318b075847b49316d3e28c08e41a7a573f"},
{file = "MarkupSafe-2.1.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:694deca8d702d5db21ec83983ce0bb4b26a578e71fbdbd4fdcd387daa90e4d5e"},
{file = "MarkupSafe-2.1.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:efc1913fd2ca4f334418481c7e595c00aad186563bbc1ec76067848c7ca0a933"},
{file = "MarkupSafe-2.1.1-cp310-cp310-win32.whl", hash = "sha256:4a33dea2b688b3190ee12bd7cfa29d39c9ed176bda40bfa11099a3ce5d3a7ac6"},
{file = "MarkupSafe-2.1.1-cp310-cp310-win_amd64.whl", hash = "sha256:dda30ba7e87fbbb7eab1ec9f58678558fd9a6b8b853530e176eabd064da81417"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:671cd1187ed5e62818414afe79ed29da836dde67166a9fac6d435873c44fdd02"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3799351e2336dc91ea70b034983ee71cf2f9533cdff7c14c90ea126bfd95d65a"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e72591e9ecd94d7feb70c1cbd7be7b3ebea3f548870aa91e2732960fa4d57a37"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6fbf47b5d3728c6aea2abb0589b5d30459e369baa772e0f37a0320185e87c980"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:d5ee4f386140395a2c818d149221149c54849dfcfcb9f1debfe07a8b8bd63f9a"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:bcb3ed405ed3222f9904899563d6fc492ff75cce56cba05e32eff40e6acbeaa3"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:e1c0b87e09fa55a220f058d1d49d3fb8df88fbfab58558f1198e08c1e1de842a"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-win32.whl", hash = "sha256:8dc1c72a69aa7e082593c4a203dcf94ddb74bb5c8a731e4e1eb68d031e8498ff"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-win_amd64.whl", hash = "sha256:97a68e6ada378df82bc9f16b800ab77cbf4b2fada0081794318520138c088e4a"},
{file = "MarkupSafe-2.1.1-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:e8c843bbcda3a2f1e3c2ab25913c80a3c5376cd00c6e8c4a86a89a28c8dc5452"},
{file = "MarkupSafe-2.1.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:0212a68688482dc52b2d45013df70d169f542b7394fc744c02a57374a4207003"},
{file = "MarkupSafe-2.1.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8e576a51ad59e4bfaac456023a78f6b5e6e7651dcd383bcc3e18d06f9b55d6d1"},
{file = "MarkupSafe-2.1.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4b9fe39a2ccc108a4accc2676e77da025ce383c108593d65cc909add5c3bd601"},
{file = "MarkupSafe-2.1.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:96e37a3dc86e80bf81758c152fe66dbf60ed5eca3d26305edf01892257049925"},
{file = "MarkupSafe-2.1.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:6d0072fea50feec76a4c418096652f2c3238eaa014b2f94aeb1d56a66b41403f"},
{file = "MarkupSafe-2.1.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:089cf3dbf0cd6c100f02945abeb18484bd1ee57a079aefd52cffd17fba910b88"},
{file = "MarkupSafe-2.1.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:6a074d34ee7a5ce3effbc526b7083ec9731bb3cbf921bbe1d3005d4d2bdb3a63"},
{file = "MarkupSafe-2.1.1-cp38-cp38-win32.whl", hash = "sha256:421be9fbf0ffe9ffd7a378aafebbf6f4602d564d34be190fc19a193232fd12b1"},
{file = "MarkupSafe-2.1.1-cp38-cp38-win_amd64.whl", hash = "sha256:fc7b548b17d238737688817ab67deebb30e8073c95749d55538ed473130ec0c7"},
{file = "MarkupSafe-2.1.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:e04e26803c9c3851c931eac40c695602c6295b8d432cbe78609649ad9bd2da8a"},
{file = "MarkupSafe-2.1.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b87db4360013327109564f0e591bd2a3b318547bcef31b468a92ee504d07ae4f"},
{file = "MarkupSafe-2.1.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:99a2a507ed3ac881b975a2976d59f38c19386d128e7a9a18b7df6fff1fd4c1d6"},
{file = "MarkupSafe-2.1.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:56442863ed2b06d19c37f94d999035e15ee982988920e12a5b4ba29b62ad1f77"},
{file = "MarkupSafe-2.1.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3ce11ee3f23f79dbd06fb3d63e2f6af7b12db1d46932fe7bd8afa259a5996603"},
{file = "MarkupSafe-2.1.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:33b74d289bd2f5e527beadcaa3f401e0df0a89927c1559c8566c066fa4248ab7"},
{file = "MarkupSafe-2.1.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:43093fb83d8343aac0b1baa75516da6092f58f41200907ef92448ecab8825135"},
{file = "MarkupSafe-2.1.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:8e3dcf21f367459434c18e71b2a9532d96547aef8a871872a5bd69a715c15f96"},
{file = "MarkupSafe-2.1.1-cp39-cp39-win32.whl", hash = "sha256:d4306c36ca495956b6d568d276ac11fdd9c30a36f1b6eb928070dc5360b22e1c"},
{file = "MarkupSafe-2.1.1-cp39-cp39-win_amd64.whl", hash = "sha256:46d00d6cfecdde84d40e572d63735ef81423ad31184100411e6e3388d405e247"},
{file = "MarkupSafe-2.1.1.tar.gz", hash = "sha256:7f91197cc9e48f989d12e4e6fbc46495c446636dfc81b9ccf50bb0ec74b91d4b"},
]
matplotlib = [
{file = "matplotlib-3.5.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:a206a1b762b39398efea838f528b3a6d60cdb26fe9d58b48265787e29cd1d693"},
{file = "matplotlib-3.5.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:cd45a6f3e93a780185f70f05cf2a383daed13c3489233faad83e81720f7ede24"},
{file = "matplotlib-3.5.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:d62880e1f60e5a30a2a8484432bcb3a5056969dc97258d7326ad465feb7ae069"},
{file = "matplotlib-3.5.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9ab29589cef03bc88acfa3a1490359000c18186fc30374d8aa77d33cc4a51a4a"},
{file = "matplotlib-3.5.3-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2886cc009f40e2984c083687251821f305d811d38e3df8ded414265e4583f0c5"},
{file = "matplotlib-3.5.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c995f7d9568f18b5db131ab124c64e51b6820a92d10246d4f2b3f3a66698a15b"},
{file = "matplotlib-3.5.3-cp310-cp310-win32.whl", hash = "sha256:6bb93a0492d68461bd458eba878f52fdc8ac7bdb6c4acdfe43dba684787838c2"},
{file = "matplotlib-3.5.3-cp310-cp310-win_amd64.whl", hash = "sha256:2e6d184ebe291b9e8f7e78bbab7987d269c38ea3e062eace1fe7d898042ef804"},
{file = "matplotlib-3.5.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:6ea6aef5c4338e58d8d376068e28f80a24f54e69f09479d1c90b7172bad9f25b"},
{file = "matplotlib-3.5.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:839d47b8ead7ad9669aaacdbc03f29656dc21f0d41a6fea2d473d856c39c8b1c"},
{file = "matplotlib-3.5.3-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:3b4fa56159dc3c7f9250df88f653f085068bcd32dcd38e479bba58909254af7f"},
{file = "matplotlib-3.5.3-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:94ff86af56a3869a4ae26a9637a849effd7643858a1a04dd5ee50e9ab75069a7"},
{file = "matplotlib-3.5.3-cp37-cp37m-win32.whl", hash = "sha256:35a8ad4dddebd51f94c5d24bec689ec0ec66173bf614374a1244c6241c1595e0"},
{file = "matplotlib-3.5.3-cp37-cp37m-win_amd64.whl", hash = "sha256:43e9d3fa077bf0cc95ded13d331d2156f9973dce17c6f0c8b49ccd57af94dbd9"},
{file = "matplotlib-3.5.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:22227c976ad4dc8c5a5057540421f0d8708c6560744ad2ad638d48e2984e1dbc"},
{file = "matplotlib-3.5.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:bf618a825deb6205f015df6dfe6167a5d9b351203b03fab82043ae1d30f16511"},
{file = "matplotlib-3.5.3-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:9befa5954cdbc085e37d974ff6053da269474177921dd61facdad8023c4aeb51"},
{file = "matplotlib-3.5.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f3840c280ebc87a48488a46f760ea1c0c0c83fcf7abbe2e6baf99d033fd35fd8"},
{file = "matplotlib-3.5.3-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:dacddf5bfcec60e3f26ec5c0ae3d0274853a258b6c3fc5ef2f06a8eb23e042be"},
{file = "matplotlib-3.5.3-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:b428076a55fb1c084c76cb93e68006f27d247169f056412607c5c88828d08f88"},
{file = "matplotlib-3.5.3-cp38-cp38-win32.whl", hash = "sha256:874df7505ba820e0400e7091199decf3ff1fde0583652120c50cd60d5820ca9a"},
{file = "matplotlib-3.5.3-cp38-cp38-win_amd64.whl", hash = "sha256:b28de401d928890187c589036857a270a032961411934bdac4cf12dde3d43094"},
{file = "matplotlib-3.5.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:3211ba82b9f1518d346f6309df137b50c3dc4421b4ed4815d1d7eadc617f45a1"},
{file = "matplotlib-3.5.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6fe807e8a22620b4cd95cfbc795ba310dc80151d43b037257250faf0bfcd82bc"},
{file = "matplotlib-3.5.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:5c096363b206a3caf43773abebdbb5a23ea13faef71d701b21a9c27fdcef72f4"},
{file = "matplotlib-3.5.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0bcdfcb0f976e1bac6721d7d457c17be23cf7501f977b6a38f9d38a3762841f7"},
{file = "matplotlib-3.5.3-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:1e64ac9be9da6bfff0a732e62116484b93b02a0b4d4b19934fb4f8e7ad26ad6a"},
{file = "matplotlib-3.5.3-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:73dd93dc35c85dece610cca8358003bf0760d7986f70b223e2306b4ea6d1406b"},
{file = "matplotlib-3.5.3-cp39-cp39-win32.whl", hash = "sha256:879c7e5fce4939c6aa04581dfe08d57eb6102a71f2e202e3314d5fbc072fd5a0"},
{file = "matplotlib-3.5.3-cp39-cp39-win_amd64.whl", hash = "sha256:ab8d26f07fe64f6f6736d635cce7bfd7f625320490ed5bfc347f2cdb4fae0e56"},
{file = "matplotlib-3.5.3-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:99482b83ebf4eb6d5fc6813d7aacdefdd480f0d9c0b52dcf9f1cc3b2c4b3361a"},
{file = "matplotlib-3.5.3-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:f814504e459c68118bf2246a530ed953ebd18213dc20e3da524174d84ed010b2"},
{file = "matplotlib-3.5.3-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:57f1b4e69f438a99bb64d7f2c340db1b096b41ebaa515cf61ea72624279220ce"},
{file = "matplotlib-3.5.3-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:d2484b350bf3d32cae43f85dcfc89b3ed7bd2bcd781ef351f93eb6fb2cc483f9"},
{file = "matplotlib-3.5.3.tar.gz", hash = "sha256:339cac48b80ddbc8bfd05daae0a3a73414651a8596904c2a881cfd1edb65f26c"},
]
matplotlib-inline = [
{file = "matplotlib-inline-0.1.6.tar.gz", hash = "sha256:f887e5f10ba98e8d2b150ddcf4702c1e5f8b3a20005eb0f74bfdbd360ee6f304"},
{file = "matplotlib_inline-0.1.6-py3-none-any.whl", hash = "sha256:f1f41aab5328aa5aaea9b16d083b128102f8712542f819fe7e6a420ff581b311"},
]
mccabe = [
{file = "mccabe-0.6.1-py2.py3-none-any.whl", hash = "sha256:ab8a6258860da4b6677da4bd2fe5dc2c659cff31b3ee4f7f5d64e79735b80d42"},
{file = "mccabe-0.6.1.tar.gz", hash = "sha256:dd8d182285a0fe56bace7f45b5e7d1a6ebcbf524e8f3bd87eb0f125271b8831f"},
]
mistune = [
{file = "mistune-2.0.4-py2.py3-none-any.whl", hash = "sha256:182cc5ee6f8ed1b807de6b7bb50155df7b66495412836b9a74c8fbdfc75fe36d"},
{file = "mistune-2.0.4.tar.gz", hash = "sha256:9ee0a66053e2267aba772c71e06891fa8f1af6d4b01d5e84e267b4570d4d9808"},
]
mpmath = [
{file = "mpmath-1.2.1-py3-none-any.whl", hash = "sha256:604bc21bd22d2322a177c73bdb573994ef76e62edd595d17e00aff24b0667e5c"},
{file = "mpmath-1.2.1.tar.gz", hash = "sha256:79ffb45cf9f4b101a807595bcb3e72e0396202e0b1d25d689134b48c4216a81a"},
]
mypy = [
{file = "mypy-0.971-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:f2899a3cbd394da157194f913a931edfd4be5f274a88041c9dc2d9cdcb1c315c"},
{file = "mypy-0.971-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:98e02d56ebe93981c41211c05adb630d1d26c14195d04d95e49cd97dbc046dc5"},
{file = "mypy-0.971-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:19830b7dba7d5356d3e26e2427a2ec91c994cd92d983142cbd025ebe81d69cf3"},
{file = "mypy-0.971-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:02ef476f6dcb86e6f502ae39a16b93285fef97e7f1ff22932b657d1ef1f28655"},
{file = "mypy-0.971-cp310-cp310-win_amd64.whl", hash = "sha256:25c5750ba5609a0c7550b73a33deb314ecfb559c350bb050b655505e8aed4103"},
{file = "mypy-0.971-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:d3348e7eb2eea2472db611486846742d5d52d1290576de99d59edeb7cd4a42ca"},
{file = "mypy-0.971-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:3fa7a477b9900be9b7dd4bab30a12759e5abe9586574ceb944bc29cddf8f0417"},
{file = "mypy-0.971-cp36-cp36m-win_amd64.whl", hash = "sha256:2ad53cf9c3adc43cf3bea0a7d01a2f2e86db9fe7596dfecb4496a5dda63cbb09"},
{file = "mypy-0.971-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:855048b6feb6dfe09d3353466004490b1872887150c5bb5caad7838b57328cc8"},
{file = "mypy-0.971-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:23488a14a83bca6e54402c2e6435467a4138785df93ec85aeff64c6170077fb0"},
{file = "mypy-0.971-cp37-cp37m-win_amd64.whl", hash = "sha256:4b21e5b1a70dfb972490035128f305c39bc4bc253f34e96a4adf9127cf943eb2"},
{file = "mypy-0.971-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:9796a2ba7b4b538649caa5cecd398d873f4022ed2333ffde58eaf604c4d2cb27"},
{file = "mypy-0.971-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:5a361d92635ad4ada1b1b2d3630fc2f53f2127d51cf2def9db83cba32e47c856"},
{file = "mypy-0.971-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:b793b899f7cf563b1e7044a5c97361196b938e92f0a4343a5d27966a53d2ec71"},
{file = "mypy-0.971-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:d1ea5d12c8e2d266b5fb8c7a5d2e9c0219fedfeb493b7ed60cd350322384ac27"},
{file = "mypy-0.971-cp38-cp38-win_amd64.whl", hash = "sha256:23c7ff43fff4b0df93a186581885c8512bc50fc4d4910e0f838e35d6bb6b5e58"},
{file = "mypy-0.971-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:1f7656b69974a6933e987ee8ffb951d836272d6c0f81d727f1d0e2696074d9e6"},
{file = "mypy-0.971-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:d2022bfadb7a5c2ef410d6a7c9763188afdb7f3533f22a0a32be10d571ee4bbe"},
{file = "mypy-0.971-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:ef943c72a786b0f8d90fd76e9b39ce81fb7171172daf84bf43eaf937e9f220a9"},
{file = "mypy-0.971-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:d744f72eb39f69312bc6c2abf8ff6656973120e2eb3f3ec4f758ed47e414a4bf"},
{file = "mypy-0.971-cp39-cp39-win_amd64.whl", hash = "sha256:77a514ea15d3007d33a9e2157b0ba9c267496acf12a7f2b9b9f8446337aac5b0"},
{file = "mypy-0.971-py3-none-any.whl", hash = "sha256:0d054ef16b071149917085f51f89555a576e2618d5d9dd70bd6eea6410af3ac9"},
{file = "mypy-0.971.tar.gz", hash = "sha256:40b0f21484238269ae6a57200c807d80debc6459d444c0489a102d7c6a75fa56"},
]
mypy-extensions = [
{file = "mypy_extensions-0.4.3-py2.py3-none-any.whl", hash = "sha256:090fedd75945a69ae91ce1303b5824f428daf5a028d2f6ab8a299250a846f15d"},
{file = "mypy_extensions-0.4.3.tar.gz", hash = "sha256:2d82818f5bb3e369420cb3c4060a7970edba416647068eb4c5343488a6c604a8"},
]
nbclient = [
{file = "nbclient-0.6.6-py3-none-any.whl", hash = "sha256:09bae4ea2df79fa6bc50aeb8278d8b79d2036792824337fa6eee834afae17312"},
{file = "nbclient-0.6.6.tar.gz", hash = "sha256:0df76a7961d99a681b4796c74a1f2553b9f998851acc01896dce064ad19a9027"},
]
nbconvert = [
{file = "nbconvert-7.0.0rc3-py3-none-any.whl", hash = "sha256:6774a0bf293d76fa2e886255812d953b750059330c3d7305ad271c02590f1957"},
{file = "nbconvert-7.0.0rc3.tar.gz", hash = "sha256:efb9aae47dad2eae02dd9e7d2cc8add6b7e8f15c6548c0de3363f6d2f8a39146"},
]
nbformat = [
{file = "nbformat-5.4.0-py3-none-any.whl", hash = "sha256:0d6072aaec95dddc39735c144ee8bbc6589c383fb462e4058abc855348152dad"},
{file = "nbformat-5.4.0.tar.gz", hash = "sha256:44ba5ca6acb80c5d5a500f1e5b83ede8cbe364d5a495c4c8cf60aaf1ba656501"},
]
nbsphinx = [
{file = "nbsphinx-0.8.9-py3-none-any.whl", hash = "sha256:a7d743762249ee6bac3350a91eb3717a6e1c75f239f2c2a85491f9aca5a63be1"},
{file = "nbsphinx-0.8.9.tar.gz", hash = "sha256:4ade86b2a41f8f41efd3ea99dae84c3368fe8ba3f837d50c8815ce9424c5994f"},
]
nest-asyncio = [
{file = "nest_asyncio-1.5.5-py3-none-any.whl", hash = "sha256:b98e3ec1b246135e4642eceffa5a6c23a3ab12c82ff816a92c612d68205813b2"},
{file = "nest_asyncio-1.5.5.tar.gz", hash = "sha256:e442291cd942698be619823a17a86a5759eabe1f8613084790de189fe9e16d65"},
]
networkx = [
{file = "networkx-2.8.5-py3-none-any.whl", hash = "sha256:a762f4b385692d9c3a6f2912d058d76d29a827deaedf9e63ed14d397b8030687"},
{file = "networkx-2.8.5.tar.gz", hash = "sha256:15a7b81a360791c458c55a417418ea136c13378cfdc06a2dcdc12bd2f9cf09c1"},
]
notebook = [
{file = "notebook-6.4.12-py3-none-any.whl", hash = "sha256:8c07a3bb7640e371f8a609bdbb2366a1976c6a2589da8ef917f761a61e3ad8b1"},
{file = "notebook-6.4.12.tar.gz", hash = "sha256:6268c9ec9048cff7a45405c990c29ac9ca40b0bc3ec29263d218c5e01f2b4e86"},
]
numba = [
{file = "numba-0.53.1-cp36-cp36m-macosx_10_14_x86_64.whl", hash = "sha256:b23de6b6837c132087d06b8b92d343edb54b885873b824a037967fbd5272ebb7"},
{file = "numba-0.53.1-cp36-cp36m-manylinux2014_i686.whl", hash = "sha256:6545b9e9b0c112b81de7f88a3c787469a357eeff8211e90b8f45ee243d521cc2"},
{file = "numba-0.53.1-cp36-cp36m-manylinux2014_x86_64.whl", hash = "sha256:8fa5c963a43855050a868106a87cd614f3c3f459951c8fc468aec263ef80d063"},
{file = "numba-0.53.1-cp36-cp36m-win32.whl", hash = "sha256:aaa6ebf56afb0b6752607b9f3bf39e99b0efe3c1fa6849698373925ee6838fd7"},
{file = "numba-0.53.1-cp36-cp36m-win_amd64.whl", hash = "sha256:b08b3df38aab769df79ed948d70f0a54a3cdda49d58af65369235c204ec5d0f3"},
{file = "numba-0.53.1-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:bf5c463b62d013e3f709cc8277adf2f4f4d8cc6757293e29c6db121b77e6b760"},
{file = "numba-0.53.1-cp37-cp37m-manylinux2014_i686.whl", hash = "sha256:74df02e73155f669e60dcff07c4eef4a03dbf5b388594db74142ab40914fe4f5"},
{file = "numba-0.53.1-cp37-cp37m-manylinux2014_x86_64.whl", hash = "sha256:5165709bf62f28667e10b9afe6df0ce1037722adab92d620f59cb8bbb8104641"},
{file = "numba-0.53.1-cp37-cp37m-win32.whl", hash = "sha256:2e96958ed2ca7e6d967b2ce29c8da0ca47117e1de28e7c30b2c8c57386506fa5"},
{file = "numba-0.53.1-cp37-cp37m-win_amd64.whl", hash = "sha256:276f9d1674fe08d95872d81b97267c6b39dd830f05eb992608cbede50fcf48a9"},
{file = "numba-0.53.1-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:4c4c8d102512ae472af52c76ad9522da718c392cb59f4cd6785d711fa5051a2a"},
{file = "numba-0.53.1-cp38-cp38-manylinux2014_i686.whl", hash = "sha256:691adbeac17dbdf6ed7c759e9e33a522351f07d2065fe926b264b6b2c15fd89b"},
{file = "numba-0.53.1-cp38-cp38-manylinux2014_x86_64.whl", hash = "sha256:94aab3e0e9e8754116325ce026e1b29ae72443c706a3104cf7f3368dc3012912"},
{file = "numba-0.53.1-cp38-cp38-win32.whl", hash = "sha256:aabeec89bb3e3162136eea492cea7ee8882ddcda2201f05caecdece192c40896"},
{file = "numba-0.53.1-cp38-cp38-win_amd64.whl", hash = "sha256:1895ebd256819ff22256cd6fe24aa8f7470b18acc73e7917e8e93c9ac7f565dc"},
{file = "numba-0.53.1-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:224d197a46a9e602a16780d87636e199e2cdef528caef084a4d8fd8909c2455c"},
{file = "numba-0.53.1-cp39-cp39-manylinux2014_i686.whl", hash = "sha256:aba7acb247a09d7f12bd17a8e28bbb04e8adef9fc20ca29835d03b7894e1b49f"},
{file = "numba-0.53.1-cp39-cp39-manylinux2014_x86_64.whl", hash = "sha256:bd126f1f49da6fc4b3169cf1d96f1c3b3f84a7badd11fe22da344b923a00e744"},
{file = "numba-0.53.1-cp39-cp39-win32.whl", hash = "sha256:0ef9d1f347b251282ae46e5a5033600aa2d0dfa1ee8c16cb8137b8cd6f79e221"},
{file = "numba-0.53.1-cp39-cp39-win_amd64.whl", hash = "sha256:17146885cbe4e89c9d4abd4fcb8886dee06d4591943dc4343500c36ce2fcfa69"},
{file = "numba-0.53.1.tar.gz", hash = "sha256:9cd4e5216acdc66c4e9dab2dfd22ddb5bef151185c070d4a3cd8e78638aff5b0"},
]
numpy = [
{file = "numpy-1.23.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e603ca1fb47b913942f3e660a15e55a9ebca906857edfea476ae5f0fe9b457d5"},
{file = "numpy-1.23.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:633679a472934b1c20a12ed0c9a6c9eb167fbb4cb89031939bfd03dd9dbc62b8"},
{file = "numpy-1.23.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:17e5226674f6ea79e14e3b91bfbc153fdf3ac13f5cc54ee7bc8fdbe820a32da0"},
{file = "numpy-1.23.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bdc02c0235b261925102b1bd586579b7158e9d0d07ecb61148a1799214a4afd5"},
{file = "numpy-1.23.2-cp310-cp310-win32.whl", hash = "sha256:df28dda02c9328e122661f399f7655cdcbcf22ea42daa3650a26bce08a187450"},
{file = "numpy-1.23.2-cp310-cp310-win_amd64.whl", hash = "sha256:8ebf7e194b89bc66b78475bd3624d92980fca4e5bb86dda08d677d786fefc414"},
{file = "numpy-1.23.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:dc76bca1ca98f4b122114435f83f1fcf3c0fe48e4e6f660e07996abf2f53903c"},
{file = "numpy-1.23.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:ecfdd68d334a6b97472ed032b5b37a30d8217c097acfff15e8452c710e775524"},
{file = "numpy-1.23.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5593f67e66dea4e237f5af998d31a43e447786b2154ba1ad833676c788f37cde"},
{file = "numpy-1.23.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ac987b35df8c2a2eab495ee206658117e9ce867acf3ccb376a19e83070e69418"},
{file = "numpy-1.23.2-cp311-cp311-win32.whl", hash = "sha256:d98addfd3c8728ee8b2c49126f3c44c703e2b005d4a95998e2167af176a9e722"},
{file = "numpy-1.23.2-cp311-cp311-win_amd64.whl", hash = "sha256:8ecb818231afe5f0f568c81f12ce50f2b828ff2b27487520d85eb44c71313b9e"},
{file = "numpy-1.23.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:909c56c4d4341ec8315291a105169d8aae732cfb4c250fbc375a1efb7a844f8f"},
{file = "numpy-1.23.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:8247f01c4721479e482cc2f9f7d973f3f47810cbc8c65e38fd1bbd3141cc9842"},
{file = "numpy-1.23.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b8b97a8a87cadcd3f94659b4ef6ec056261fa1e1c3317f4193ac231d4df70215"},
{file = "numpy-1.23.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bd5b7ccae24e3d8501ee5563e82febc1771e73bd268eef82a1e8d2b4d556ae66"},
{file = "numpy-1.23.2-cp38-cp38-win32.whl", hash = "sha256:9b83d48e464f393d46e8dd8171687394d39bc5abfe2978896b77dc2604e8635d"},
{file = "numpy-1.23.2-cp38-cp38-win_amd64.whl", hash = "sha256:dec198619b7dbd6db58603cd256e092bcadef22a796f778bf87f8592b468441d"},
{file = "numpy-1.23.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:4f41f5bf20d9a521f8cab3a34557cd77b6f205ab2116651f12959714494268b0"},
{file = "numpy-1.23.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:806cc25d5c43e240db709875e947076b2826f47c2c340a5a2f36da5bb10c58d6"},
{file = "numpy-1.23.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8f9d84a24889ebb4c641a9b99e54adb8cab50972f0166a3abc14c3b93163f074"},
{file = "numpy-1.23.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c403c81bb8ffb1c993d0165a11493fd4bf1353d258f6997b3ee288b0a48fce77"},
{file = "numpy-1.23.2-cp39-cp39-win32.whl", hash = "sha256:cf8c6aed12a935abf2e290860af8e77b26a042eb7f2582ff83dc7ed5f963340c"},
{file = "numpy-1.23.2-cp39-cp39-win_amd64.whl", hash = "sha256:5e28cd64624dc2354a349152599e55308eb6ca95a13ce6a7d5679ebff2962913"},
{file = "numpy-1.23.2-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:806970e69106556d1dd200e26647e9bee5e2b3f1814f9da104a943e8d548ca38"},
{file = "numpy-1.23.2-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2bd879d3ca4b6f39b7770829f73278b7c5e248c91d538aab1e506c628353e47f"},
{file = "numpy-1.23.2-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:be6b350dfbc7f708d9d853663772a9310783ea58f6035eec649fb9c4371b5389"},
{file = "numpy-1.23.2.tar.gz", hash = "sha256:b78d00e48261fbbd04aa0d7427cf78d18401ee0abd89c7559bbf422e5b1c7d01"},
]
oauthlib = [
{file = "oauthlib-3.2.0-py3-none-any.whl", hash = "sha256:6db33440354787f9b7f3a6dbd4febf5d0f93758354060e802f6c06cb493022fe"},
{file = "oauthlib-3.2.0.tar.gz", hash = "sha256:23a8208d75b902797ea29fd31fa80a15ed9dc2c6c16fe73f5d346f83f6fa27a2"},
]
opt-einsum = [
{file = "opt_einsum-3.3.0-py3-none-any.whl", hash = "sha256:2455e59e3947d3c275477df7f5205b30635e266fe6dc300e3d9f9646bfcea147"},
{file = "opt_einsum-3.3.0.tar.gz", hash = "sha256:59f6475f77bbc37dcf7cd748519c0ec60722e91e63ca114e68821c0c54a46549"},
]
packaging = [
{file = "packaging-21.3-py3-none-any.whl", hash = "sha256:ef103e05f519cdc783ae24ea4e2e0f508a9c99b2d4969652eed6a2e1ea5bd522"},
{file = "packaging-21.3.tar.gz", hash = "sha256:dd47c42927d89ab911e606518907cc2d3a1f38bbd026385970643f9c5b8ecfeb"},
]
pandas = [
{file = "pandas-1.4.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:d51674ed8e2551ef7773820ef5dab9322be0828629f2cbf8d1fc31a0c4fed640"},
{file = "pandas-1.4.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:16ad23db55efcc93fa878f7837267973b61ea85d244fc5ff0ccbcfa5638706c5"},
{file = "pandas-1.4.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:958a0588149190c22cdebbc0797e01972950c927a11a900fe6c2296f207b1d6f"},
{file = "pandas-1.4.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e48fbb64165cda451c06a0f9e4c7a16b534fcabd32546d531b3c240ce2844112"},
{file = "pandas-1.4.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6f803320c9da732cc79210d7e8cc5c8019aad512589c910c66529eb1b1818230"},
{file = "pandas-1.4.3-cp310-cp310-win_amd64.whl", hash = "sha256:2893e923472a5e090c2d5e8db83e8f907364ec048572084c7d10ef93546be6d1"},
{file = "pandas-1.4.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:24ea75f47bbd5574675dae21d51779a4948715416413b30614c1e8b480909f81"},
{file = "pandas-1.4.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:d5ebc990bd34f4ac3c73a2724c2dcc9ee7bf1ce6cf08e87bb25c6ad33507e318"},
{file = "pandas-1.4.3-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:d6c0106415ff1a10c326c49bc5dd9ea8b9897a6ca0c8688eb9c30ddec49535ef"},
{file = "pandas-1.4.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:78b00429161ccb0da252229bcda8010b445c4bf924e721265bec5a6e96a92e92"},
{file = "pandas-1.4.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6dfbf16b1ea4f4d0ee11084d9c026340514d1d30270eaa82a9f1297b6c8ecbf0"},
{file = "pandas-1.4.3-cp38-cp38-win32.whl", hash = "sha256:48350592665ea3cbcd07efc8c12ff12d89be09cd47231c7925e3b8afada9d50d"},
{file = "pandas-1.4.3-cp38-cp38-win_amd64.whl", hash = "sha256:605d572126eb4ab2eadf5c59d5d69f0608df2bf7bcad5c5880a47a20a0699e3e"},
{file = "pandas-1.4.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:a3924692160e3d847e18702bb048dc38e0e13411d2b503fecb1adf0fcf950ba4"},
{file = "pandas-1.4.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:07238a58d7cbc8a004855ade7b75bbd22c0db4b0ffccc721556bab8a095515f6"},
{file = "pandas-1.4.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:755679c49460bd0d2f837ab99f0a26948e68fa0718b7e42afbabd074d945bf84"},
{file = "pandas-1.4.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:41fc406e374590a3d492325b889a2686b31e7a7780bec83db2512988550dadbf"},
{file = "pandas-1.4.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1d9382f72a4f0e93909feece6fef5500e838ce1c355a581b3d8f259839f2ea76"},
{file = "pandas-1.4.3-cp39-cp39-win32.whl", hash = "sha256:0daf876dba6c622154b2e6741f29e87161f844e64f84801554f879d27ba63c0d"},
{file = "pandas-1.4.3-cp39-cp39-win_amd64.whl", hash = "sha256:721a3dd2f06ef942f83a819c0f3f6a648b2830b191a72bbe9451bcd49c3bd42e"},
{file = "pandas-1.4.3.tar.gz", hash = "sha256:2ff7788468e75917574f080cd4681b27e1a7bf36461fe968b49a87b5a54d007c"},
]
pandoc = [
{file = "pandoc-2.2.tar.gz", hash = "sha256:d063ee252f2761010f16cf3a14912ad9245187c24c56f4b1ad9696e104fe6e1e"},
]
pandocfilters = [
{file = "pandocfilters-1.5.0-py2.py3-none-any.whl", hash = "sha256:33aae3f25fd1a026079f5d27bdd52496f0e0803b3469282162bafdcbdf6ef14f"},
{file = "pandocfilters-1.5.0.tar.gz", hash = "sha256:0b679503337d233b4339a817bfc8c50064e2eff681314376a47cb582305a7a38"},
]
parso = [
{file = "parso-0.8.3-py2.py3-none-any.whl", hash = "sha256:c001d4636cd3aecdaf33cbb40aebb59b094be2a74c556778ef5576c175e19e75"},
{file = "parso-0.8.3.tar.gz", hash = "sha256:8c07be290bb59f03588915921e29e8a50002acaf2cdc5fa0e0114f91709fafa0"},
]
pastel = [
{file = "pastel-0.2.1-py2.py3-none-any.whl", hash = "sha256:4349225fcdf6c2bb34d483e523475de5bb04a5c10ef711263452cb37d7dd4364"},
{file = "pastel-0.2.1.tar.gz", hash = "sha256:e6581ac04e973cac858828c6202c1e1e81fee1dc7de7683f3e1ffe0bfd8a573d"},
]
pathspec = [
{file = "pathspec-0.9.0-py2.py3-none-any.whl", hash = "sha256:7d15c4ddb0b5c802d161efc417ec1a2558ea2653c2e8ad9c19098201dc1c993a"},
{file = "pathspec-0.9.0.tar.gz", hash = "sha256:e564499435a2673d586f6b2130bb5b95f04a3ba06f81b8f895b651a3c76aabb1"},
]
patsy = [
{file = "patsy-0.5.2-py2.py3-none-any.whl", hash = "sha256:cc80955ae8c13a7e7c4051eda7b277c8f909f50bc7d73e124bc38e2ee3d95041"},
{file = "patsy-0.5.2.tar.gz", hash = "sha256:5053de7804676aba62783dbb0f23a2b3d74e35e5bfa238b88b7cbf148a38b69d"},
]
pexpect = [
{file = "pexpect-4.8.0-py2.py3-none-any.whl", hash = "sha256:0b48a55dcb3c05f3329815901ea4fc1537514d6ba867a152b581d69ae3710937"},
{file = "pexpect-4.8.0.tar.gz", hash = "sha256:fc65a43959d153d0114afe13997d439c22823a27cefceb5ff35c2178c6784c0c"},
]
pickleshare = [
{file = "pickleshare-0.7.5-py2.py3-none-any.whl", hash = "sha256:9649af414d74d4df115d5d718f82acb59c9d418196b7b4290ed47a12ce62df56"},
{file = "pickleshare-0.7.5.tar.gz", hash = "sha256:87683d47965c1da65cdacaf31c8441d12b8044cdec9aca500cd78fc2c683afca"},
]
pillow = [
{file = "Pillow-9.2.0-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:a9c9bc489f8ab30906d7a85afac4b4944a572a7432e00698a7239f44a44e6efb"},
{file = "Pillow-9.2.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:510cef4a3f401c246cfd8227b300828715dd055463cdca6176c2e4036df8bd4f"},
{file = "Pillow-9.2.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7888310f6214f19ab2b6df90f3f06afa3df7ef7355fc025e78a3044737fab1f5"},
{file = "Pillow-9.2.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:831e648102c82f152e14c1a0938689dbb22480c548c8d4b8b248b3e50967b88c"},
{file = "Pillow-9.2.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1cc1d2451e8a3b4bfdb9caf745b58e6c7a77d2e469159b0d527a4554d73694d1"},
{file = "Pillow-9.2.0-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:136659638f61a251e8ed3b331fc6ccd124590eeff539de57c5f80ef3a9594e58"},
{file = "Pillow-9.2.0-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:6e8c66f70fb539301e064f6478d7453e820d8a2c631da948a23384865cd95544"},
{file = "Pillow-9.2.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:37ff6b522a26d0538b753f0b4e8e164fdada12db6c6f00f62145d732d8a3152e"},
{file = "Pillow-9.2.0-cp310-cp310-win32.whl", hash = "sha256:c79698d4cd9318d9481d89a77e2d3fcaeff5486be641e60a4b49f3d2ecca4e28"},
{file = "Pillow-9.2.0-cp310-cp310-win_amd64.whl", hash = "sha256:254164c57bab4b459f14c64e93df11eff5ded575192c294a0c49270f22c5d93d"},
{file = "Pillow-9.2.0-cp311-cp311-macosx_10_10_universal2.whl", hash = "sha256:408673ed75594933714482501fe97e055a42996087eeca7e5d06e33218d05aa8"},
{file = "Pillow-9.2.0-cp311-cp311-macosx_11_0_universal2.whl", hash = "sha256:727dd1389bc5cb9827cbd1f9d40d2c2a1a0c9b32dd2261db522d22a604a6eec9"},
{file = "Pillow-9.2.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:50dff9cc21826d2977ef2d2a205504034e3a4563ca6f5db739b0d1026658e004"},
{file = "Pillow-9.2.0-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cb6259196a589123d755380b65127ddc60f4c64b21fc3bb46ce3a6ea663659b0"},
{file = "Pillow-9.2.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7b0554af24df2bf96618dac71ddada02420f946be943b181108cac55a7a2dcd4"},
{file = "Pillow-9.2.0-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:15928f824870535c85dbf949c09d6ae7d3d6ac2d6efec80f3227f73eefba741c"},
{file = "Pillow-9.2.0-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:bdd0de2d64688ecae88dd8935012c4a72681e5df632af903a1dca8c5e7aa871a"},
{file = "Pillow-9.2.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:d5b87da55a08acb586bad5c3aa3b86505f559b84f39035b233d5bf844b0834b1"},
{file = "Pillow-9.2.0-cp311-cp311-win32.whl", hash = "sha256:b6d5e92df2b77665e07ddb2e4dbd6d644b78e4c0d2e9272a852627cdba0d75cf"},
{file = "Pillow-9.2.0-cp311-cp311-win_amd64.whl", hash = "sha256:6bf088c1ce160f50ea40764f825ec9b72ed9da25346216b91361eef8ad1b8f8c"},
{file = "Pillow-9.2.0-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:2c58b24e3a63efd22554c676d81b0e57f80e0a7d3a5874a7e14ce90ec40d3069"},
{file = "Pillow-9.2.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:eef7592281f7c174d3d6cbfbb7ee5984a671fcd77e3fc78e973d492e9bf0eb3f"},
{file = "Pillow-9.2.0-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:dcd7b9c7139dc8258d164b55696ecd16c04607f1cc33ba7af86613881ffe4ac8"},
{file = "Pillow-9.2.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a138441e95562b3c078746a22f8fca8ff1c22c014f856278bdbdd89ca36cff1b"},
{file = "Pillow-9.2.0-cp37-cp37m-manylinux_2_28_aarch64.whl", hash = "sha256:93689632949aff41199090eff5474f3990b6823404e45d66a5d44304e9cdc467"},
{file = "Pillow-9.2.0-cp37-cp37m-manylinux_2_28_x86_64.whl", hash = "sha256:f3fac744f9b540148fa7715a435d2283b71f68bfb6d4aae24482a890aed18b59"},
{file = "Pillow-9.2.0-cp37-cp37m-win32.whl", hash = "sha256:fa768eff5f9f958270b081bb33581b4b569faabf8774726b283edb06617101dc"},
{file = "Pillow-9.2.0-cp37-cp37m-win_amd64.whl", hash = "sha256:69bd1a15d7ba3694631e00df8de65a8cb031911ca11f44929c97fe05eb9b6c1d"},
{file = "Pillow-9.2.0-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:030e3460861488e249731c3e7ab59b07c7853838ff3b8e16aac9561bb345da14"},
{file = "Pillow-9.2.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:74a04183e6e64930b667d321524e3c5361094bb4af9083db5c301db64cd341f3"},
{file = "Pillow-9.2.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2d33a11f601213dcd5718109c09a52c2a1c893e7461f0be2d6febc2879ec2402"},
{file = "Pillow-9.2.0-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1fd6f5e3c0e4697fa7eb45b6e93996299f3feee73a3175fa451f49a74d092b9f"},
{file = "Pillow-9.2.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a647c0d4478b995c5e54615a2e5360ccedd2f85e70ab57fbe817ca613d5e63b8"},
{file = "Pillow-9.2.0-cp38-cp38-manylinux_2_28_aarch64.whl", hash = "sha256:4134d3f1ba5f15027ff5c04296f13328fecd46921424084516bdb1b2548e66ff"},
{file = "Pillow-9.2.0-cp38-cp38-manylinux_2_28_x86_64.whl", hash = "sha256:bc431b065722a5ad1dfb4df354fb9333b7a582a5ee39a90e6ffff688d72f27a1"},
{file = "Pillow-9.2.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:1536ad017a9f789430fb6b8be8bf99d2f214c76502becc196c6f2d9a75b01b76"},
{file = "Pillow-9.2.0-cp38-cp38-win32.whl", hash = "sha256:2ad0d4df0f5ef2247e27fc790d5c9b5a0af8ade9ba340db4a73bb1a4a3e5fb4f"},
{file = "Pillow-9.2.0-cp38-cp38-win_amd64.whl", hash = "sha256:ec52c351b35ca269cb1f8069d610fc45c5bd38c3e91f9ab4cbbf0aebc136d9c8"},
{file = "Pillow-9.2.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:0ed2c4ef2451de908c90436d6e8092e13a43992f1860275b4d8082667fbb2ffc"},
{file = "Pillow-9.2.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:4ad2f835e0ad81d1689f1b7e3fbac7b01bb8777d5a985c8962bedee0cc6d43da"},
{file = "Pillow-9.2.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ea98f633d45f7e815db648fd7ff0f19e328302ac36427343e4432c84432e7ff4"},
{file = "Pillow-9.2.0-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7761afe0126d046974a01e030ae7529ed0ca6a196de3ec6937c11df0df1bc91c"},
{file = "Pillow-9.2.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9a54614049a18a2d6fe156e68e188da02a046a4a93cf24f373bffd977e943421"},
{file = "Pillow-9.2.0-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:5aed7dde98403cd91d86a1115c78d8145c83078e864c1de1064f52e6feb61b20"},
{file = "Pillow-9.2.0-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:13b725463f32df1bfeacbf3dd197fb358ae8ebcd8c5548faa75126ea425ccb60"},
{file = "Pillow-9.2.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:808add66ea764ed97d44dda1ac4f2cfec4c1867d9efb16a33d158be79f32b8a4"},
{file = "Pillow-9.2.0-cp39-cp39-win32.whl", hash = "sha256:337a74fd2f291c607d220c793a8135273c4c2ab001b03e601c36766005f36885"},
{file = "Pillow-9.2.0-cp39-cp39-win_amd64.whl", hash = "sha256:fac2d65901fb0fdf20363fbd345c01958a742f2dc62a8dd4495af66e3ff502a4"},
{file = "Pillow-9.2.0-pp37-pypy37_pp73-macosx_10_10_x86_64.whl", hash = "sha256:ad2277b185ebce47a63f4dc6302e30f05762b688f8dc3de55dbae4651872cdf3"},
{file = "Pillow-9.2.0-pp37-pypy37_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7c7b502bc34f6e32ba022b4a209638f9e097d7a9098104ae420eb8186217ebbb"},
{file = "Pillow-9.2.0-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3d1f14f5f691f55e1b47f824ca4fdcb4b19b4323fe43cc7bb105988cad7496be"},
{file = "Pillow-9.2.0-pp37-pypy37_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:dfe4c1fedfde4e2fbc009d5ad420647f7730d719786388b7de0999bf32c0d9fd"},
{file = "Pillow-9.2.0-pp38-pypy38_pp73-macosx_10_10_x86_64.whl", hash = "sha256:f07f1f00e22b231dd3d9b9208692042e29792d6bd4f6639415d2f23158a80013"},
{file = "Pillow-9.2.0-pp38-pypy38_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1802f34298f5ba11d55e5bb09c31997dc0c6aed919658dfdf0198a2fe75d5490"},
{file = "Pillow-9.2.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:17d4cafe22f050b46d983b71c707162d63d796a1235cdf8b9d7a112e97b15bac"},
{file = "Pillow-9.2.0-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:96b5e6874431df16aee0c1ba237574cb6dff1dcb173798faa6a9d8b399a05d0e"},
{file = "Pillow-9.2.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:0030fdbd926fb85844b8b92e2f9449ba89607231d3dd597a21ae72dc7fe26927"},
{file = "Pillow-9.2.0.tar.gz", hash = "sha256:75e636fd3e0fb872693f23ccb8a5ff2cd578801251f3a4f6854c6a5d437d3c04"},
]
pkginfo = [
{file = "pkginfo-1.8.3-py2.py3-none-any.whl", hash = "sha256:848865108ec99d4901b2f7e84058b6e7660aae8ae10164e015a6dcf5b242a594"},
{file = "pkginfo-1.8.3.tar.gz", hash = "sha256:a84da4318dd86f870a9447a8c98340aa06216bfc6f2b7bdc4b8766984ae1867c"},
]
pkgutil-resolve-name = [
{file = "pkgutil_resolve_name-1.3.10-py3-none-any.whl", hash = "sha256:ca27cc078d25c5ad71a9de0a7a330146c4e014c2462d9af19c6b828280649c5e"},
{file = "pkgutil_resolve_name-1.3.10.tar.gz", hash = "sha256:357d6c9e6a755653cfd78893817c0853af365dd51ec97f3d358a819373bbd174"},
]
platformdirs = [
{file = "platformdirs-2.5.2-py3-none-any.whl", hash = "sha256:027d8e83a2d7de06bbac4e5ef7e023c02b863d7ea5d079477e722bb41ab25788"},
{file = "platformdirs-2.5.2.tar.gz", hash = "sha256:58c8abb07dcb441e6ee4b11d8df0ac856038f944ab98b7be6b27b2a3c7feef19"},
]
pluggy = [
{file = "pluggy-1.0.0-py2.py3-none-any.whl", hash = "sha256:74134bbf457f031a36d68416e1509f34bd5ccc019f0bcc952c7b909d06b37bd3"},
{file = "pluggy-1.0.0.tar.gz", hash = "sha256:4224373bacce55f955a878bf9cfa763c1e360858e330072059e10bad68531159"},
]
plumbum = [
{file = "plumbum-1.7.2-py2.py3-none-any.whl", hash = "sha256:0bbf431e31da988405de2fb36c3226f09c0c9cdf69c8480f8997f4b94b7370a1"},
{file = "plumbum-1.7.2.tar.gz", hash = "sha256:0d1bf908076bbd0484d16412479cb97d6843069ee19f99e267e11dd980040523"},
]
ply = [
{file = "ply-3.11-py2.py3-none-any.whl", hash = "sha256:096f9b8350b65ebd2fd1346b12452efe5b9607f7482813ffca50c22722a807ce"},
{file = "ply-3.11.tar.gz", hash = "sha256:00c7c1aaa88358b9c765b6d3000c6eec0ba42abca5351b095321aef446081da3"},
]
poethepoet = [
{file = "poethepoet-0.16.0-py3-none-any.whl", hash = "sha256:87482ea8bba4e5db4abbd8e6360baee73b2ce0f3d5f5e99e81cdfa39d72d118f"},
{file = "poethepoet-0.16.0.tar.gz", hash = "sha256:6455aec39f198be92dbf210a4416e1635119e967204c092b431c8b10024db1d1"},
]
progressbar2 = [
{file = "progressbar2-4.0.0-py2.py3-none-any.whl", hash = "sha256:2562ba3e554433f08e81fb7b786208b19de135f3ca1c5da1787d9b05558e6247"},
{file = "progressbar2-4.0.0.tar.gz", hash = "sha256:14d3165a1781d053ffaa117daf27cc706128d2ec1d2977fdb05b6bb079888013"},
]
prometheus-client = [
{file = "prometheus_client-0.14.1-py3-none-any.whl", hash = "sha256:522fded625282822a89e2773452f42df14b5a8e84a86433e3f8a189c1d54dc01"},
{file = "prometheus_client-0.14.1.tar.gz", hash = "sha256:5459c427624961076277fdc6dc50540e2bacb98eebde99886e59ec55ed92093a"},
]
prompt-toolkit = [
{file = "prompt_toolkit-3.0.30-py3-none-any.whl", hash = "sha256:d8916d3f62a7b67ab353a952ce4ced6a1d2587dfe9ef8ebc30dd7c386751f289"},
{file = "prompt_toolkit-3.0.30.tar.gz", hash = "sha256:859b283c50bde45f5f97829f77a4674d1c1fcd88539364f1b28a37805cfd89c0"},
]
protobuf = [
{file = "protobuf-3.19.4-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:f51d5a9f137f7a2cec2d326a74b6e3fc79d635d69ffe1b036d39fc7d75430d37"},
{file = "protobuf-3.19.4-cp310-cp310-manylinux2014_aarch64.whl", hash = "sha256:09297b7972da685ce269ec52af761743714996b4381c085205914c41fcab59fb"},
{file = "protobuf-3.19.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:072fbc78d705d3edc7ccac58a62c4c8e0cec856987da7df8aca86e647be4e35c"},
{file = "protobuf-3.19.4-cp310-cp310-win32.whl", hash = "sha256:7bb03bc2873a2842e5ebb4801f5c7ff1bfbdf426f85d0172f7644fcda0671ae0"},
{file = "protobuf-3.19.4-cp310-cp310-win_amd64.whl", hash = "sha256:f358aa33e03b7a84e0d91270a4d4d8f5df6921abe99a377828839e8ed0c04e07"},
{file = "protobuf-3.19.4-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:1c91ef4110fdd2c590effb5dca8fdbdcb3bf563eece99287019c4204f53d81a4"},
{file = "protobuf-3.19.4-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c438268eebb8cf039552897d78f402d734a404f1360592fef55297285f7f953f"},
{file = "protobuf-3.19.4-cp36-cp36m-win32.whl", hash = "sha256:835a9c949dc193953c319603b2961c5c8f4327957fe23d914ca80d982665e8ee"},
{file = "protobuf-3.19.4-cp36-cp36m-win_amd64.whl", hash = "sha256:4276cdec4447bd5015453e41bdc0c0c1234eda08420b7c9a18b8d647add51e4b"},
{file = "protobuf-3.19.4-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:6cbc312be5e71869d9d5ea25147cdf652a6781cf4d906497ca7690b7b9b5df13"},
{file = "protobuf-3.19.4-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:54a1473077f3b616779ce31f477351a45b4fef8c9fd7892d6d87e287a38df368"},
{file = "protobuf-3.19.4-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:435bb78b37fc386f9275a7035fe4fb1364484e38980d0dd91bc834a02c5ec909"},
{file = "protobuf-3.19.4-cp37-cp37m-win32.whl", hash = "sha256:16f519de1313f1b7139ad70772e7db515b1420d208cb16c6d7858ea989fc64a9"},
{file = "protobuf-3.19.4-cp37-cp37m-win_amd64.whl", hash = "sha256:cdc076c03381f5c1d9bb1abdcc5503d9ca8b53cf0a9d31a9f6754ec9e6c8af0f"},
{file = "protobuf-3.19.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:69da7d39e39942bd52848438462674c463e23963a1fdaa84d88df7fbd7e749b2"},
{file = "protobuf-3.19.4-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:48ed3877fa43e22bcacc852ca76d4775741f9709dd9575881a373bd3e85e54b2"},
{file = "protobuf-3.19.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bd95d1dfb9c4f4563e6093a9aa19d9c186bf98fa54da5252531cc0d3a07977e7"},
{file = "protobuf-3.19.4-cp38-cp38-win32.whl", hash = "sha256:b38057450a0c566cbd04890a40edf916db890f2818e8682221611d78dc32ae26"},
{file = "protobuf-3.19.4-cp38-cp38-win_amd64.whl", hash = "sha256:7ca7da9c339ca8890d66958f5462beabd611eca6c958691a8fe6eccbd1eb0c6e"},
{file = "protobuf-3.19.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:36cecbabbda242915529b8ff364f2263cd4de7c46bbe361418b5ed859677ba58"},
{file = "protobuf-3.19.4-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:c1068287025f8ea025103e37d62ffd63fec8e9e636246b89c341aeda8a67c934"},
{file = "protobuf-3.19.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:96bd766831596d6014ca88d86dc8fe0fb2e428c0b02432fd9db3943202bf8c5e"},
{file = "protobuf-3.19.4-cp39-cp39-win32.whl", hash = "sha256:84123274d982b9e248a143dadd1b9815049f4477dc783bf84efe6250eb4b836a"},
{file = "protobuf-3.19.4-cp39-cp39-win_amd64.whl", hash = "sha256:3112b58aac3bac9c8be2b60a9daf6b558ca3f7681c130dcdd788ade7c9ffbdca"},
{file = "protobuf-3.19.4-py2.py3-none-any.whl", hash = "sha256:8961c3a78ebfcd000920c9060a262f082f29838682b1f7201889300c1fbe0616"},
{file = "protobuf-3.19.4.tar.gz", hash = "sha256:9df0c10adf3e83015ced42a9a7bd64e13d06c4cf45c340d2c63020ea04499d0a"},
]
psutil = [
{file = "psutil-5.9.1-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:799759d809c31aab5fe4579e50addf84565e71c1dc9f1c31258f159ff70d3f87"},
{file = "psutil-5.9.1-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:9272167b5f5fbfe16945be3db475b3ce8d792386907e673a209da686176552af"},
{file = "psutil-5.9.1-cp27-cp27m-win32.whl", hash = "sha256:0904727e0b0a038830b019551cf3204dd48ef5c6868adc776e06e93d615fc5fc"},
{file = "psutil-5.9.1-cp27-cp27m-win_amd64.whl", hash = "sha256:e7e10454cb1ab62cc6ce776e1c135a64045a11ec4c6d254d3f7689c16eb3efd2"},
{file = "psutil-5.9.1-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:56960b9e8edcca1456f8c86a196f0c3d8e3e361320071c93378d41445ffd28b0"},
{file = "psutil-5.9.1-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:44d1826150d49ffd62035785a9e2c56afcea66e55b43b8b630d7706276e87f22"},
{file = "psutil-5.9.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:c7be9d7f5b0d206f0bbc3794b8e16fb7dbc53ec9e40bbe8787c6f2d38efcf6c9"},
{file = "psutil-5.9.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:abd9246e4cdd5b554a2ddd97c157e292ac11ef3e7af25ac56b08b455c829dca8"},
{file = "psutil-5.9.1-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:29a442e25fab1f4d05e2655bb1b8ab6887981838d22effa2396d584b740194de"},
{file = "psutil-5.9.1-cp310-cp310-win32.whl", hash = "sha256:20b27771b077dcaa0de1de3ad52d22538fe101f9946d6dc7869e6f694f079329"},
{file = "psutil-5.9.1-cp310-cp310-win_amd64.whl", hash = "sha256:58678bbadae12e0db55186dc58f2888839228ac9f41cc7848853539b70490021"},
{file = "psutil-5.9.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:3a76ad658641172d9c6e593de6fe248ddde825b5866464c3b2ee26c35da9d237"},
{file = "psutil-5.9.1-cp36-cp36m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a6a11e48cb93a5fa606306493f439b4aa7c56cb03fc9ace7f6bfa21aaf07c453"},
{file = "psutil-5.9.1-cp36-cp36m-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:068935df39055bf27a29824b95c801c7a5130f118b806eee663cad28dca97685"},
{file = "psutil-5.9.1-cp36-cp36m-win32.whl", hash = "sha256:0f15a19a05f39a09327345bc279c1ba4a8cfb0172cc0d3c7f7d16c813b2e7d36"},
{file = "psutil-5.9.1-cp36-cp36m-win_amd64.whl", hash = "sha256:db417f0865f90bdc07fa30e1aadc69b6f4cad7f86324b02aa842034efe8d8c4d"},
{file = "psutil-5.9.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:91c7ff2a40c373d0cc9121d54bc5f31c4fa09c346528e6a08d1845bce5771ffc"},
{file = "psutil-5.9.1-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fea896b54f3a4ae6f790ac1d017101252c93f6fe075d0e7571543510f11d2676"},
{file = "psutil-5.9.1-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3054e923204b8e9c23a55b23b6df73a8089ae1d075cb0bf711d3e9da1724ded4"},
{file = "psutil-5.9.1-cp37-cp37m-win32.whl", hash = "sha256:d2d006286fbcb60f0b391741f520862e9b69f4019b4d738a2a45728c7e952f1b"},
{file = "psutil-5.9.1-cp37-cp37m-win_amd64.whl", hash = "sha256:b14ee12da9338f5e5b3a3ef7ca58b3cba30f5b66f7662159762932e6d0b8f680"},
{file = "psutil-5.9.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:19f36c16012ba9cfc742604df189f2f28d2720e23ff7d1e81602dbe066be9fd1"},
{file = "psutil-5.9.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:944c4b4b82dc4a1b805329c980f270f170fdc9945464223f2ec8e57563139cf4"},
{file = "psutil-5.9.1-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4b6750a73a9c4a4e689490ccb862d53c7b976a2a35c4e1846d049dcc3f17d83b"},
{file = "psutil-5.9.1-cp38-cp38-win32.whl", hash = "sha256:a8746bfe4e8f659528c5c7e9af5090c5a7d252f32b2e859c584ef7d8efb1e689"},
{file = "psutil-5.9.1-cp38-cp38-win_amd64.whl", hash = "sha256:79c9108d9aa7fa6fba6e668b61b82facc067a6b81517cab34d07a84aa89f3df0"},
{file = "psutil-5.9.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:28976df6c64ddd6320d281128817f32c29b539a52bdae5e192537bc338a9ec81"},
{file = "psutil-5.9.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b88f75005586131276634027f4219d06e0561292be8bd6bc7f2f00bdabd63c4e"},
{file = "psutil-5.9.1-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:645bd4f7bb5b8633803e0b6746ff1628724668681a434482546887d22c7a9537"},
{file = "psutil-5.9.1-cp39-cp39-win32.whl", hash = "sha256:32c52611756096ae91f5d1499fe6c53b86f4a9ada147ee42db4991ba1520e574"},
{file = "psutil-5.9.1-cp39-cp39-win_amd64.whl", hash = "sha256:f65f9a46d984b8cd9b3750c2bdb419b2996895b005aefa6cbaba9a143b1ce2c5"},
{file = "psutil-5.9.1.tar.gz", hash = "sha256:57f1819b5d9e95cdfb0c881a8a5b7d542ed0b7c522d575706a80bedc848c8954"},
]
ptyprocess = [
{file = "ptyprocess-0.7.0-py2.py3-none-any.whl", hash = "sha256:4b41f3967fce3af57cc7e94b888626c18bf37a083e3651ca8feeb66d492fef35"},
{file = "ptyprocess-0.7.0.tar.gz", hash = "sha256:5c5d0a3b48ceee0b48485e0c26037c0acd7d29765ca3fbb5cb3831d347423220"},
]
pure-eval = [
{file = "pure_eval-0.2.2-py3-none-any.whl", hash = "sha256:01eaab343580944bc56080ebe0a674b39ec44a945e6d09ba7db3cb8cec289350"},
{file = "pure_eval-0.2.2.tar.gz", hash = "sha256:2b45320af6dfaa1750f543d714b6d1c520a1688dec6fd24d339063ce0aaa9ac3"},
]
py = [
{file = "py-1.11.0-py2.py3-none-any.whl", hash = "sha256:607c53218732647dff4acdfcd50cb62615cedf612e72d1724fb1a0cc6405b378"},
{file = "py-1.11.0.tar.gz", hash = "sha256:51c75c4126074b472f746a24399ad32f6053d1b34b68d2fa41e558e6f4a98719"},
]
pyasn1 = [
{file = "pyasn1-0.4.8-py2.4.egg", hash = "sha256:fec3e9d8e36808a28efb59b489e4528c10ad0f480e57dcc32b4de5c9d8c9fdf3"},
{file = "pyasn1-0.4.8-py2.5.egg", hash = "sha256:0458773cfe65b153891ac249bcf1b5f8f320b7c2ce462151f8fa74de8934becf"},
{file = "pyasn1-0.4.8-py2.6.egg", hash = "sha256:5c9414dcfede6e441f7e8f81b43b34e834731003427e5b09e4e00e3172a10f00"},
{file = "pyasn1-0.4.8-py2.7.egg", hash = "sha256:6e7545f1a61025a4e58bb336952c5061697da694db1cae97b116e9c46abcf7c8"},
{file = "pyasn1-0.4.8-py2.py3-none-any.whl", hash = "sha256:39c7e2ec30515947ff4e87fb6f456dfc6e84857d34be479c9d4a4ba4bf46aa5d"},
{file = "pyasn1-0.4.8-py3.1.egg", hash = "sha256:78fa6da68ed2727915c4767bb386ab32cdba863caa7dbe473eaae45f9959da86"},
{file = "pyasn1-0.4.8-py3.2.egg", hash = "sha256:08c3c53b75eaa48d71cf8c710312316392ed40899cb34710d092e96745a358b7"},
{file = "pyasn1-0.4.8-py3.3.egg", hash = "sha256:03840c999ba71680a131cfaee6fab142e1ed9bbd9c693e285cc6aca0d555e576"},
{file = "pyasn1-0.4.8-py3.4.egg", hash = "sha256:7ab8a544af125fb704feadb008c99a88805126fb525280b2270bb25cc1d78a12"},
{file = "pyasn1-0.4.8-py3.5.egg", hash = "sha256:e89bf84b5437b532b0803ba5c9a5e054d21fec423a89952a74f87fa2c9b7bce2"},
{file = "pyasn1-0.4.8-py3.6.egg", hash = "sha256:014c0e9976956a08139dc0712ae195324a75e142284d5f87f1a87ee1b068a359"},
{file = "pyasn1-0.4.8-py3.7.egg", hash = "sha256:99fcc3c8d804d1bc6d9a099921e39d827026409a58f2a720dcdb89374ea0c776"},
{file = "pyasn1-0.4.8.tar.gz", hash = "sha256:aef77c9fb94a3ac588e87841208bdec464471d9871bd5050a287cc9a475cd0ba"},
]
pyasn1-modules = [
{file = "pyasn1-modules-0.2.8.tar.gz", hash = "sha256:905f84c712230b2c592c19470d3ca8d552de726050d1d1716282a1f6146be65e"},
{file = "pyasn1_modules-0.2.8-py2.4.egg", hash = "sha256:0fe1b68d1e486a1ed5473f1302bd991c1611d319bba158e98b106ff86e1d7199"},
{file = "pyasn1_modules-0.2.8-py2.5.egg", hash = "sha256:fe0644d9ab041506b62782e92b06b8c68cca799e1a9636ec398675459e031405"},
{file = "pyasn1_modules-0.2.8-py2.6.egg", hash = "sha256:a99324196732f53093a84c4369c996713eb8c89d360a496b599fb1a9c47fc3eb"},
{file = "pyasn1_modules-0.2.8-py2.7.egg", hash = "sha256:0845a5582f6a02bb3e1bde9ecfc4bfcae6ec3210dd270522fee602365430c3f8"},
{file = "pyasn1_modules-0.2.8-py2.py3-none-any.whl", hash = "sha256:a50b808ffeb97cb3601dd25981f6b016cbb3d31fbf57a8b8a87428e6158d0c74"},
{file = "pyasn1_modules-0.2.8-py3.1.egg", hash = "sha256:f39edd8c4ecaa4556e989147ebf219227e2cd2e8a43c7e7fcb1f1c18c5fd6a3d"},
{file = "pyasn1_modules-0.2.8-py3.2.egg", hash = "sha256:b80486a6c77252ea3a3e9b1e360bc9cf28eaac41263d173c032581ad2f20fe45"},
{file = "pyasn1_modules-0.2.8-py3.3.egg", hash = "sha256:65cebbaffc913f4fe9e4808735c95ea22d7a7775646ab690518c056784bc21b4"},
{file = "pyasn1_modules-0.2.8-py3.4.egg", hash = "sha256:15b7c67fabc7fc240d87fb9aabf999cf82311a6d6fb2c70d00d3d0604878c811"},
{file = "pyasn1_modules-0.2.8-py3.5.egg", hash = "sha256:426edb7a5e8879f1ec54a1864f16b882c2837bfd06eee62f2c982315ee2473ed"},
{file = "pyasn1_modules-0.2.8-py3.6.egg", hash = "sha256:cbac4bc38d117f2a49aeedec4407d23e8866ea4ac27ff2cf7fb3e5b570df19e0"},
{file = "pyasn1_modules-0.2.8-py3.7.egg", hash = "sha256:c29a5e5cc7a3f05926aff34e097e84f8589cd790ce0ed41b67aed6857b26aafd"},
]
pycodestyle = [
{file = "pycodestyle-2.8.0-py2.py3-none-any.whl", hash = "sha256:720f8b39dde8b293825e7ff02c475f3077124006db4f440dcbc9a20b76548a20"},
{file = "pycodestyle-2.8.0.tar.gz", hash = "sha256:eddd5847ef438ea1c7870ca7eb78a9d47ce0cdb4851a5523949f2601d0cbbe7f"},
]
pycparser = [
{file = "pycparser-2.21-py2.py3-none-any.whl", hash = "sha256:8ee45429555515e1f6b185e78100aea234072576aa43ab53aefcae078162fca9"},
{file = "pycparser-2.21.tar.gz", hash = "sha256:e644fdec12f7872f86c58ff790da456218b10f863970249516d60a5eaca77206"},
]
pydata-sphinx-theme = [
{file = "pydata_sphinx_theme-0.9.0-py3-none-any.whl", hash = "sha256:b22b442a6d6437e5eaf0a1f057169ffcb31eaa9f10be7d5481a125e735c71c12"},
{file = "pydata_sphinx_theme-0.9.0.tar.gz", hash = "sha256:03598a86915b596f4bf80bef79a4d33276a83e670bf360def699dbb9f99dc57a"},
]
pydot = [
{file = "pydot-1.4.2-py2.py3-none-any.whl", hash = "sha256:66c98190c65b8d2e2382a441b4c0edfdb4f4c025ef9cb9874de478fb0793a451"},
{file = "pydot-1.4.2.tar.gz", hash = "sha256:248081a39bcb56784deb018977e428605c1c758f10897a339fce1dd728ff007d"},
]
pydotplus = [
{file = "pydotplus-2.0.2.tar.gz", hash = "sha256:91e85e9ee9b85d2391ead7d635e3d9c7f5f44fd60a60e59b13e2403fa66505c4"},
]
pyflakes = [
{file = "pyflakes-2.4.0-py2.py3-none-any.whl", hash = "sha256:3bb3a3f256f4b7968c9c788781e4ff07dce46bdf12339dcda61053375426ee2e"},
{file = "pyflakes-2.4.0.tar.gz", hash = "sha256:05a85c2872edf37a4ed30b0cce2f6093e1d0581f8c19d7393122da7e25b2b24c"},
]
pygam = [
{file = "pygam-0.8.0-py2.py3-none-any.whl", hash = "sha256:198bd478700520b7c399cc4bcbc011e46850969c32fb09ef0b7a4bbb14e842a5"},
{file = "pygam-0.8.0.tar.gz", hash = "sha256:5cae01aea8b2fede72a6da0aba1490213af54b3476745666af26bbe700479166"},
]
pygments = [
{file = "Pygments-2.13.0-py3-none-any.whl", hash = "sha256:f643f331ab57ba3c9d89212ee4a2dabc6e94f117cf4eefde99a0574720d14c42"},
{file = "Pygments-2.13.0.tar.gz", hash = "sha256:56a8508ae95f98e2b9bdf93a6be5ae3f7d8af858b43e02c5a2ff083726be40c1"},
]
pygraphviz = [
{file = "pygraphviz-1.10.zip", hash = "sha256:457e093a888128903251a266a8cc16b4ba93f3f6334b3ebfed92c7471a74d867"},
]
pyparsing = [
{file = "pyparsing-3.0.9-py3-none-any.whl", hash = "sha256:5026bae9a10eeaefb61dab2f09052b9f4307d44aee4eda64b309723d8d206bbc"},
{file = "pyparsing-3.0.9.tar.gz", hash = "sha256:2b020ecf7d21b687f219b71ecad3631f644a47f01403fa1d1036b0c6416d70fb"},
]
pyro-api = [
{file = "pyro-api-0.1.2.tar.gz", hash = "sha256:a1b900d9580aa1c2fab3b123ab7ff33413744da7c5f440bd4aadc4d40d14d920"},
{file = "pyro_api-0.1.2-py3-none-any.whl", hash = "sha256:10e0e42e9e4401ce464dab79c870e50dfb4f413d326fa777f3582928ef9caf8f"},
]
pyro-ppl = [
{file = "pyro-ppl-1.8.1.tar.gz", hash = "sha256:d7c049eb2e7485a612b4dd99c24c309cc860c7cbc6b1973387034f5436d1c8d6"},
{file = "pyro_ppl-1.8.1-py3-none-any.whl", hash = "sha256:ca01ab4565eb9a1af4a60dbc481da5cb6f5fe5a72efa19e83638e03683efbca6"},
]
pyrsistent = [
{file = "pyrsistent-0.18.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:df46c854f490f81210870e509818b729db4488e1f30f2a1ce1698b2295a878d1"},
{file = "pyrsistent-0.18.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d45866ececf4a5fff8742c25722da6d4c9e180daa7b405dc0a2a2790d668c26"},
{file = "pyrsistent-0.18.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4ed6784ceac462a7d6fcb7e9b663e93b9a6fb373b7f43594f9ff68875788e01e"},
{file = "pyrsistent-0.18.1-cp310-cp310-win32.whl", hash = "sha256:e4f3149fd5eb9b285d6bfb54d2e5173f6a116fe19172686797c056672689daf6"},
{file = "pyrsistent-0.18.1-cp310-cp310-win_amd64.whl", hash = "sha256:636ce2dc235046ccd3d8c56a7ad54e99d5c1cd0ef07d9ae847306c91d11b5fec"},
{file = "pyrsistent-0.18.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:e92a52c166426efbe0d1ec1332ee9119b6d32fc1f0bbfd55d5c1088070e7fc1b"},
{file = "pyrsistent-0.18.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d7a096646eab884bf8bed965bad63ea327e0d0c38989fc83c5ea7b8a87037bfc"},
{file = "pyrsistent-0.18.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cdfd2c361b8a8e5d9499b9082b501c452ade8bbf42aef97ea04854f4a3f43b22"},
{file = "pyrsistent-0.18.1-cp37-cp37m-win32.whl", hash = "sha256:7ec335fc998faa4febe75cc5268a9eac0478b3f681602c1f27befaf2a1abe1d8"},
{file = "pyrsistent-0.18.1-cp37-cp37m-win_amd64.whl", hash = "sha256:6455fc599df93d1f60e1c5c4fe471499f08d190d57eca040c0ea182301321286"},
{file = "pyrsistent-0.18.1-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:fd8da6d0124efa2f67d86fa70c851022f87c98e205f0594e1fae044e7119a5a6"},
{file = "pyrsistent-0.18.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7bfe2388663fd18bd8ce7db2c91c7400bf3e1a9e8bd7d63bf7e77d39051b85ec"},
{file = "pyrsistent-0.18.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0e3e1fcc45199df76053026a51cc59ab2ea3fc7c094c6627e93b7b44cdae2c8c"},
{file = "pyrsistent-0.18.1-cp38-cp38-win32.whl", hash = "sha256:b568f35ad53a7b07ed9b1b2bae09eb15cdd671a5ba5d2c66caee40dbf91c68ca"},
{file = "pyrsistent-0.18.1-cp38-cp38-win_amd64.whl", hash = "sha256:d1b96547410f76078eaf66d282ddca2e4baae8964364abb4f4dcdde855cd123a"},
{file = "pyrsistent-0.18.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:f87cc2863ef33c709e237d4b5f4502a62a00fab450c9e020892e8e2ede5847f5"},
{file = "pyrsistent-0.18.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6bc66318fb7ee012071b2792024564973ecc80e9522842eb4e17743604b5e045"},
{file = "pyrsistent-0.18.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:914474c9f1d93080338ace89cb2acee74f4f666fb0424896fcfb8d86058bf17c"},
{file = "pyrsistent-0.18.1-cp39-cp39-win32.whl", hash = "sha256:1b34eedd6812bf4d33814fca1b66005805d3640ce53140ab8bbb1e2651b0d9bc"},
{file = "pyrsistent-0.18.1-cp39-cp39-win_amd64.whl", hash = "sha256:e24a828f57e0c337c8d8bb9f6b12f09dfdf0273da25fda9e314f0b684b415a07"},
{file = "pyrsistent-0.18.1.tar.gz", hash = "sha256:d4d61f8b993a7255ba714df3aca52700f8125289f84f704cf80916517c46eb96"},
]
pytest = [
{file = "pytest-7.1.2-py3-none-any.whl", hash = "sha256:13d0e3ccfc2b6e26be000cb6568c832ba67ba32e719443bfe725814d3c42433c"},
{file = "pytest-7.1.2.tar.gz", hash = "sha256:a06a0425453864a270bc45e71f783330a7428defb4230fb5e6a731fde06ecd45"},
]
pytest-cov = [
{file = "pytest-cov-3.0.0.tar.gz", hash = "sha256:e7f0f5b1617d2210a2cabc266dfe2f4c75a8d32fb89eafb7ad9d06f6d076d470"},
{file = "pytest_cov-3.0.0-py3-none-any.whl", hash = "sha256:578d5d15ac4a25e5f961c938b85a05b09fdaae9deef3bb6de9a6e766622ca7a6"},
]
pytest-split = [
{file = "pytest-split-0.8.0.tar.gz", hash = "sha256:8571a3f60ca8656c698ed86b0a3212bb9e79586ecb201daef9988c336ff0e6ff"},
{file = "pytest_split-0.8.0-py3-none-any.whl", hash = "sha256:2e06b8b1ab7ceb19d0b001548271abaf91d12415a8687086cf40581c555d309f"},
]
python-dateutil = [
{file = "python-dateutil-2.8.2.tar.gz", hash = "sha256:0123cacc1627ae19ddf3c27a5de5bd67ee4586fbdd6440d9748f8abb483d3e86"},
{file = "python_dateutil-2.8.2-py2.py3-none-any.whl", hash = "sha256:961d03dc3453ebbc59dbdea9e4e11c5651520a876d0f4db161e8674aae935da9"},
]
python-utils = [
{file = "python-utils-3.3.3.tar.gz", hash = "sha256:3b1c8b706e40e91280eec5fa72ea730880a166cee99afa5555b863d55664478d"},
{file = "python_utils-3.3.3-py2.py3-none-any.whl", hash = "sha256:5cb9cf295018202fb4d6abdd694a33a7f08bc0ce1bf1eab8cce80ab2cde35c07"},
]
pytz = [
{file = "pytz-2022.2.1-py2.py3-none-any.whl", hash = "sha256:220f481bdafa09c3955dfbdddb7b57780e9a94f5127e35456a48589b9e0c0197"},
{file = "pytz-2022.2.1.tar.gz", hash = "sha256:cea221417204f2d1a2aa03ddae3e867921971d0d76f14d87abb4414415bbdcf5"},
]
pytz-deprecation-shim = [
{file = "pytz_deprecation_shim-0.1.0.post0-py2.py3-none-any.whl", hash = "sha256:8314c9692a636c8eb3bda879b9f119e350e93223ae83e70e80c31675a0fdc1a6"},
{file = "pytz_deprecation_shim-0.1.0.post0.tar.gz", hash = "sha256:af097bae1b616dde5c5744441e2ddc69e74dfdcb0c263129610d85b87445a59d"},
]
pywin32 = [
{file = "pywin32-304-cp310-cp310-win32.whl", hash = "sha256:3c7bacf5e24298c86314f03fa20e16558a4e4138fc34615d7de4070c23e65af3"},
{file = "pywin32-304-cp310-cp310-win_amd64.whl", hash = "sha256:4f32145913a2447736dad62495199a8e280a77a0ca662daa2332acf849f0be48"},
{file = "pywin32-304-cp310-cp310-win_arm64.whl", hash = "sha256:d3ee45adff48e0551d1aa60d2ec066fec006083b791f5c3527c40cd8aefac71f"},
{file = "pywin32-304-cp311-cp311-win32.whl", hash = "sha256:30c53d6ce44c12a316a06c153ea74152d3b1342610f1b99d40ba2795e5af0269"},
{file = "pywin32-304-cp311-cp311-win_amd64.whl", hash = "sha256:7ffa0c0fa4ae4077e8b8aa73800540ef8c24530057768c3ac57c609f99a14fd4"},
{file = "pywin32-304-cp311-cp311-win_arm64.whl", hash = "sha256:cbbe34dad39bdbaa2889a424d28752f1b4971939b14b1bb48cbf0182a3bcfc43"},
{file = "pywin32-304-cp36-cp36m-win32.whl", hash = "sha256:be253e7b14bc601718f014d2832e4c18a5b023cbe72db826da63df76b77507a1"},
{file = "pywin32-304-cp36-cp36m-win_amd64.whl", hash = "sha256:de9827c23321dcf43d2f288f09f3b6d772fee11e809015bdae9e69fe13213988"},
{file = "pywin32-304-cp37-cp37m-win32.whl", hash = "sha256:f64c0377cf01b61bd5e76c25e1480ca8ab3b73f0c4add50538d332afdf8f69c5"},
{file = "pywin32-304-cp37-cp37m-win_amd64.whl", hash = "sha256:bb2ea2aa81e96eee6a6b79d87e1d1648d3f8b87f9a64499e0b92b30d141e76df"},
{file = "pywin32-304-cp38-cp38-win32.whl", hash = "sha256:94037b5259701988954931333aafd39cf897e990852115656b014ce72e052e96"},
{file = "pywin32-304-cp38-cp38-win_amd64.whl", hash = "sha256:ead865a2e179b30fb717831f73cf4373401fc62fbc3455a0889a7ddac848f83e"},
{file = "pywin32-304-cp39-cp39-win32.whl", hash = "sha256:25746d841201fd9f96b648a248f731c1dec851c9a08b8e33da8b56148e4c65cc"},
{file = "pywin32-304-cp39-cp39-win_amd64.whl", hash = "sha256:d24a3382f013b21aa24a5cfbfad5a2cd9926610c0affde3e8ab5b3d7dbcf4ac9"},
]
pywin32-ctypes = [
{file = "pywin32-ctypes-0.2.0.tar.gz", hash = "sha256:24ffc3b341d457d48e8922352130cf2644024a4ff09762a2261fd34c36ee5942"},
{file = "pywin32_ctypes-0.2.0-py2.py3-none-any.whl", hash = "sha256:9dc2d991b3479cc2df15930958b674a48a227d5361d413827a4cfd0b5876fc98"},
]
pywinpty = [
{file = "pywinpty-2.0.7-cp310-none-win_amd64.whl", hash = "sha256:d56361ed2bd3395347882a7a4e6246359e745a233e89c91786ab3d9421323c17"},
{file = "pywinpty-2.0.7-cp37-none-win_amd64.whl", hash = "sha256:2d62ede3ed10feb0901b3b4667201766a741b6a2c69f27be623ba9fe9348447b"},
{file = "pywinpty-2.0.7-cp38-none-win_amd64.whl", hash = "sha256:c3b7e6a2f0e5f86e0dc5cb5e4fec7de19adacc6900232e4a48a2ecf04bae447f"},
{file = "pywinpty-2.0.7-cp39-none-win_amd64.whl", hash = "sha256:80a6713a586401c2a19efd2969ffd019eb85f18442611a3880e3d618887d2f84"},
{file = "pywinpty-2.0.7.tar.gz", hash = "sha256:f52b2e51c46dac40708ede1d42577f3ddb9d7cf8acaa36c8e27b3d3b975f4c95"},
]
pyzmq = [
{file = "pyzmq-23.2.1-cp310-cp310-macosx_10_15_universal2.whl", hash = "sha256:a3fd44b5046d247e7f0f1660bcafe7b5fb0db55d0934c05dd57dda9e1f823ce7"},
{file = "pyzmq-23.2.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:2141e6798d5981be04c08996d27962086a1aa3ea536fe9cf7e89817fd4523f86"},
{file = "pyzmq-23.2.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9a39ddb0431a68954bd318b923230fa5b649c9c62b0e8340388820c5f1b15bd2"},
{file = "pyzmq-23.2.1-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e06747014a5ad1b28cebf5bc1ddcdaccfb44e9b441d35e6feb1286c8a72e54be"},
{file = "pyzmq-23.2.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7e0113d70b095339e99bb522fe7294f5ae6a7f3b2b8f52f659469a74b5cc7661"},
{file = "pyzmq-23.2.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:71b32a1e827bdcbf73750e60370d3b07685816ff3d8695f450f0f8c3226503f8"},
{file = "pyzmq-23.2.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:55568a020ad2cae9ae36da6058e7ca332a56df968f601cbdb7cf6efb2a77579a"},
{file = "pyzmq-23.2.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:8c02a0cd39dc01659b3d6cb70bb3a41aebd9885fd78239acdd8d9c91351c4568"},
{file = "pyzmq-23.2.1-cp310-cp310-win32.whl", hash = "sha256:e1fe30bcd5aea5948c42685fad910cd285eacb2518ea4dc6c170d6b535bee95d"},
{file = "pyzmq-23.2.1-cp310-cp310-win_amd64.whl", hash = "sha256:650389bbfca73955b262b2230423d89992f38ec48033307ae80e700eaa2fbb63"},
{file = "pyzmq-23.2.1-cp311-cp311-macosx_10_15_universal2.whl", hash = "sha256:e753eee6d3b93c5354e8ba0a1d62956ee49355f0a36e00570823ef64e66183f5"},
{file = "pyzmq-23.2.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:f07016e3cf088dbfc6e7c5a7b3f540db5c23b0190d539e4fd3e2b5e6beffa4b5"},
{file = "pyzmq-23.2.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4805af9614b0b41b7e57d17673459facf85604dac502a5a9244f6e8c9a4de658"},
{file = "pyzmq-23.2.1-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:39dd252b683816935702825e5bf775df16090619ced9bb4ba68c2d0b6f0c9b18"},
{file = "pyzmq-23.2.1-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:84678153432241bcdca2210cf4ff83560b200556867aea913ffbb960f5d5f340"},
{file = "pyzmq-23.2.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:90d88f9d9a2ae6cfb1dc4ea2d1710cdf6456bc1b9a06dd1bb485c5d298f2517e"},
{file = "pyzmq-23.2.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:794871988c34727c7f79bdfe2546e6854ae1fa2e1feb382784f23a9c6c63ecb3"},
{file = "pyzmq-23.2.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:c56b1a62a1fb87565343c57b6743fd5da6e138b8c6562361d7d9b5ce4acf399a"},
{file = "pyzmq-23.2.1-cp311-cp311-win32.whl", hash = "sha256:c3ebf1668664d20c8f7d468955f18379b7d1f7bc8946b13243d050fa3888c7ff"},
{file = "pyzmq-23.2.1-cp311-cp311-win_amd64.whl", hash = "sha256:ec9803aca9491fd6f0d853d2a6147f19f8deaaa23b1b713d05c5d09e56ea7142"},
{file = "pyzmq-23.2.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:385609812eafd9970c3752c51f2f6c4f224807e3e441bcfd8c8273877d00c8a8"},
{file = "pyzmq-23.2.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b861db65f6b8906c8d6db51dde2448f266f0c66bf28db2c37aea50f58a849859"},
{file = "pyzmq-23.2.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:6b1e79bba24f6df1712e3188d5c32c480d8eda03e8ecff44dc8ecb0805fa62f3"},
{file = "pyzmq-23.2.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:8dc66f109a245653b19df0f44a5af7a3f14cb8ad6c780ead506158a057bd36ce"},
{file = "pyzmq-23.2.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:b815991c7d024bf461f358ad871f2be1135576274caed5749c4828859e40354e"},
{file = "pyzmq-23.2.1-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:29b74774a0bfd3c4d98ac853f0bdca55bd9ec89d5b0def5486407cca54472ef8"},
{file = "pyzmq-23.2.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:4bb798bef181648827019001f6be43e1c48b34b477763b37a8d27d8c06d197b8"},
{file = "pyzmq-23.2.1-cp36-cp36m-win32.whl", hash = "sha256:565bd5ab81f6964fc4067ccf2e00877ad0fa917308975694bbb54378389215f8"},
{file = "pyzmq-23.2.1-cp36-cp36m-win_amd64.whl", hash = "sha256:1f368a82b29f80071781b20663c0fc0c8f6b13273f9f5abe1526af939534f90f"},
{file = "pyzmq-23.2.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:c9cfaf530e6a7ff65f0afe275e99f983f68b54dfb23ea401f0bc297a632766b6"},
{file = "pyzmq-23.2.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5c558b50402fca1acc94329c5d8f12aa429738904a5cfb32b9ed3c61235221bb"},
{file = "pyzmq-23.2.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:20bafc4095eab00f41a510579363a3f5e1f5c69d7ee10f1d88895c4df0259183"},
{file = "pyzmq-23.2.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:f619fd38fc2641abfb53cca719c165182500600b82c695cc548a0f05f764be05"},
{file = "pyzmq-23.2.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:044447ae4b2016a6b8697571fd633f799f860b19b76c4a2fd9b1140d52ee6745"},
{file = "pyzmq-23.2.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:49d30ba7074f469e8167917abf9eb854c6503ae10153034a6d4df33618f1db5f"},
{file = "pyzmq-23.2.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:48400b96788cdaca647021bf19a9cd668384f46e4d9c55cf045bdd17f65299c8"},
{file = "pyzmq-23.2.1-cp37-cp37m-win32.whl", hash = "sha256:8a68f57b7a3f7b6b52ada79876be1efb97c8c0952423436e84d70cc139f16f0d"},
{file = "pyzmq-23.2.1-cp37-cp37m-win_amd64.whl", hash = "sha256:9e5bf6e7239fc9687239de7a283aa8b801ab85371116045b33ae20132a1325d6"},
{file = "pyzmq-23.2.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:ffc6b1623d0f9affb351db4ca61f432dca3628a5ee015f9bf2bfbe9c6836881c"},
{file = "pyzmq-23.2.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:4d6f110c56f7d5b4d64dde3a382ae61b6d48174e30742859d8e971b18b6c9e5c"},
{file = "pyzmq-23.2.1-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:9269fbfe3a4eb2009199120861c4571ef1655fdf6951c3e7f233567c94e8c602"},
{file = "pyzmq-23.2.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:12e62ff0d5223ec09b597ab6d73858b9f64a51221399f3cb08aa495e1dff7935"},
{file = "pyzmq-23.2.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:6fd5d0d50cbcf4bc376861529a907bed026a4cbe8c22a500ff8243231ef02433"},
{file = "pyzmq-23.2.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:9d0ab2936085c85a1fc6f9fd8f89d5235ae99b051e90ec5baa5e73ad44346e1f"},
{file = "pyzmq-23.2.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:022cf5ea7bcaa8a06a03c2706e0ae66904b6138b2155577cd34c64bc7cc637ab"},
{file = "pyzmq-23.2.1-cp38-cp38-win32.whl", hash = "sha256:28dbdb90b2f6b131f8f10e6081012e4e25234213433420e67e0c1162de537113"},
{file = "pyzmq-23.2.1-cp38-cp38-win_amd64.whl", hash = "sha256:10d1910ec381b851aeb024a042a13db178cb1edf125e76a4e9d2548ad103aadb"},
{file = "pyzmq-23.2.1-cp39-cp39-macosx_10_15_universal2.whl", hash = "sha256:99a5a77a10863493a1ee8dece02578c6b32025fb3afff91b40476bc489e81648"},
{file = "pyzmq-23.2.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:aecd6ceaccc4b594e0092d6513ef3f1c0fa678dd89f86bb8ff1a47014b8fca35"},
{file = "pyzmq-23.2.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:415ff62ac525d9add1e3550430a09b9928d2d24a20cc4ce809e67caac41219ab"},
{file = "pyzmq-23.2.1-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:67975a9e1237b9ccc78f457bef17691bbdd2055a9d26e81ee914ba376846d0ce"},
{file = "pyzmq-23.2.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:38e106b64bad744fe469dc3dd864f2764d66399178c1bf39d45294cc7980f14f"},
{file = "pyzmq-23.2.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:8c842109d31a9281d678f668629241c405928afbebd913c48a5a8e7aee61f63d"},
{file = "pyzmq-23.2.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:fefdf9b685fda4141b95ebec975946076a5e0723ff70b037032b2085c5317684"},
{file = "pyzmq-23.2.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:79a87831b47a9f6161ad23fa5e89d5469dc585abc49f90b9b07fea8905ae1234"},
{file = "pyzmq-23.2.1-cp39-cp39-win32.whl", hash = "sha256:342ca3077f47ec2ee41b9825142b614e03e026347167cbc72a59b618c4f6106c"},
{file = "pyzmq-23.2.1-cp39-cp39-win_amd64.whl", hash = "sha256:5e05492be125dce279721d6b54fd1b956546ecc4bcdfcf8e7b4c413bc0874c10"},
{file = "pyzmq-23.2.1-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:07ed8aaf7ffe150af873269690cc654ffeca7491f62aae0f3821baa181f8d5fe"},
{file = "pyzmq-23.2.1-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:ad28ddb40db8e450d7d4bf8a1d765d3f87b63b10e7e9a825a3c130c6371a8c03"},
{file = "pyzmq-23.2.1-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:2f67b63f53c6994d601404fd1a329e6d940ac3dd1d92946a93b2b9c70df67b9f"},
{file = "pyzmq-23.2.1-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c890309296f53f9aa32ffcfc51d805705e1982bffd27c9692a8f1e1b8de279f4"},
{file = "pyzmq-23.2.1-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:624fd38071a817644acdae075b92a23ea0bdd126a58148288e8284d23ec361ce"},
{file = "pyzmq-23.2.1-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:a114992a193577cb62233abf8cb2832970f9975805a64740e325d2f895e7f85a"},
{file = "pyzmq-23.2.1-pp38-pypy38_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:c780acddd2934c6831ff832ecbf78a45a7b62d4eb216480f863854a8b7d54fa7"},
{file = "pyzmq-23.2.1-pp38-pypy38_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:d904f6595acfaaf99a1a61881fea068500c40374d263e5e073aa4005e5f9c28a"},
{file = "pyzmq-23.2.1-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:929d548b74c0f82f7f95b54e4a43f9e4ce2523cfb8a54d3f7141e45652304b2a"},
{file = "pyzmq-23.2.1-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:f392cbea531b7142d1958c0d4a0c9c8d760dc451e5848d8dd3387804d3e3e62c"},
{file = "pyzmq-23.2.1-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:a0f09d85c45f58aa8e715b42f8b26beba68b3b63a8f7049113478aca26efbc30"},
{file = "pyzmq-23.2.1-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:23e708fbfdf4ee3107422b69ca65da1b9f056b431fc0888096a8c1d6cd908e8f"},
{file = "pyzmq-23.2.1-pp39-pypy39_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:35e635343ff367f697d00fa1484262bb68e36bc74c9b80737eac5a1e04c4e1b1"},
{file = "pyzmq-23.2.1-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:efb9e38b2a590282704269585de7eb33bf43dc294cad092e1b172e23d4c217e5"},
{file = "pyzmq-23.2.1-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:407f909c4e8fde62fbdad9ebd448319792258cc0550c2815567a4d9d8d9e6d18"},
{file = "pyzmq-23.2.1.tar.gz", hash = "sha256:2b381aa867ece7d0a82f30a0c7f3d4387b7cf2e0697e33efaa5bed6c5784abcd"},
]
qtconsole = [
{file = "qtconsole-5.3.1-py3-none-any.whl", hash = "sha256:d364592d7ede3257f1e17fcdbfd339c26e2cc638ca4fa4ee56a724e26ea13c81"},
{file = "qtconsole-5.3.1.tar.gz", hash = "sha256:b73723fac43938b684dcb237a88510dc7721c43a726cea8ade179a2927c0a2f3"},
]
qtpy = [
{file = "QtPy-2.2.0-py3-none-any.whl", hash = "sha256:d283cfba378b0dbe36a55b68aea8ee2f86cd6ccf06c023af25bbe705ffbb29e5"},
{file = "QtPy-2.2.0.tar.gz", hash = "sha256:d85f1b121f24a41ad26c55c446e66abdb7c528839f8c4f11f156ec4541903914"},
]
readme-renderer = [
{file = "readme_renderer-36.0-py3-none-any.whl", hash = "sha256:2c37e472ca96755caba6cc58bcbf673a5574bc033385a2ac91d85dfef2799876"},
{file = "readme_renderer-36.0.tar.gz", hash = "sha256:f71aeef9a588fcbed1f4cc001ba611370e94a0cd27c75b1140537618ec78f0a2"},
]
requests = [
{file = "requests-2.28.1-py3-none-any.whl", hash = "sha256:8fefa2a1a1365bf5520aac41836fbee479da67864514bdb821f31ce07ce65349"},
{file = "requests-2.28.1.tar.gz", hash = "sha256:7c5599b102feddaa661c826c56ab4fee28bfd17f5abca1ebbe3e7f19d7c97983"},
]
requests-oauthlib = [
{file = "requests-oauthlib-1.3.1.tar.gz", hash = "sha256:75beac4a47881eeb94d5ea5d6ad31ef88856affe2332b9aafb52c6452ccf0d7a"},
{file = "requests_oauthlib-1.3.1-py2.py3-none-any.whl", hash = "sha256:2577c501a2fb8d05a304c09d090d6e47c306fef15809d102b327cf8364bddab5"},
]
requests-toolbelt = [
{file = "requests-toolbelt-0.9.1.tar.gz", hash = "sha256:968089d4584ad4ad7c171454f0a5c6dac23971e9472521ea3b6d49d610aa6fc0"},
{file = "requests_toolbelt-0.9.1-py2.py3-none-any.whl", hash = "sha256:380606e1d10dc85c3bd47bf5a6095f815ec007be7a8b69c878507068df059e6f"},
]
rfc3986 = [
{file = "rfc3986-2.0.0-py2.py3-none-any.whl", hash = "sha256:50b1502b60e289cb37883f3dfd34532b8873c7de9f49bb546641ce9cbd256ebd"},
{file = "rfc3986-2.0.0.tar.gz", hash = "sha256:97aacf9dbd4bfd829baad6e6309fa6573aaf1be3f6fa735c8ab05e46cecb261c"},
]
rich = [
{file = "rich-12.5.1-py3-none-any.whl", hash = "sha256:2eb4e6894cde1e017976d2975ac210ef515d7548bc595ba20e195fb9628acdeb"},
{file = "rich-12.5.1.tar.gz", hash = "sha256:63a5c5ce3673d3d5fbbf23cd87e11ab84b6b451436f1b7f19ec54b6bc36ed7ca"},
]
rpy2 = [
{file = "rpy2-3.5.3-cp310-cp310-macosx_10_15_x86_64.whl", hash = "sha256:9b553e92db1b6ffa51ea13f7b1b509816fdec2fc833ebe0b402ee81c8a56e491"},
{file = "rpy2-3.5.3-cp37-cp37m-macosx_10_15_x86_64.whl", hash = "sha256:9fb8f0914458c212dd2dd231ba0b7e849ca2b473f53da48553bc7e90976a9717"},
{file = "rpy2-3.5.3-cp38-cp38-macosx_10_15_x86_64.whl", hash = "sha256:b9c8275d7720c312c387d61a38afe22ef3165053ec707a5cb3a06b7ace5bb10c"},
{file = "rpy2-3.5.3-cp39-cp39-macosx_10_15_x86_64.whl", hash = "sha256:e1fda1489dd297fb8f44d33e6042e56a4517109b5606b2ac59bed9a2b7e29850"},
{file = "rpy2-3.5.3.tar.gz", hash = "sha256:53a092d48b44f46428fb30cb3155664d6d2f7af08ebc4c45df98df4c45a42ccb"},
]
rsa = [
{file = "rsa-4.9-py3-none-any.whl", hash = "sha256:90260d9058e514786967344d0ef75fa8727eed8a7d2e43ce9f4bcf1b536174f7"},
{file = "rsa-4.9.tar.gz", hash = "sha256:e38464a49c6c85d7f1351b0126661487a7e0a14a50f1675ec50eb34d4f20ef21"},
]
scikit-learn = [
{file = "scikit-learn-1.0.2.tar.gz", hash = "sha256:b5870959a5484b614f26d31ca4c17524b1b0317522199dc985c3b4256e030767"},
{file = "scikit_learn-1.0.2-cp310-cp310-macosx_10_13_x86_64.whl", hash = "sha256:da3c84694ff693b5b3194d8752ccf935a665b8b5edc33a283122f4273ca3e687"},
{file = "scikit_learn-1.0.2-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:75307d9ea39236cad7eea87143155eea24d48f93f3a2f9389c817f7019f00705"},
{file = "scikit_learn-1.0.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f14517e174bd7332f1cca2c959e704696a5e0ba246eb8763e6c24876d8710049"},
{file = "scikit_learn-1.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d9aac97e57c196206179f674f09bc6bffcd0284e2ba95b7fe0b402ac3f986023"},
{file = "scikit_learn-1.0.2-cp310-cp310-win_amd64.whl", hash = "sha256:d93d4c28370aea8a7cbf6015e8a669cd5d69f856cc2aa44e7a590fb805bb5583"},
{file = "scikit_learn-1.0.2-cp37-cp37m-macosx_10_13_x86_64.whl", hash = "sha256:85260fb430b795d806251dd3bb05e6f48cdc777ac31f2bcf2bc8bbed3270a8f5"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:a053a6a527c87c5c4fa7bf1ab2556fa16d8345cf99b6c5a19030a4a7cd8fd2c0"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:245c9b5a67445f6f044411e16a93a554edc1efdcce94d3fc0bc6a4b9ac30b752"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:158faf30684c92a78e12da19c73feff9641a928a8024b4fa5ec11d583f3d8a87"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:08ef968f6b72033c16c479c966bf37ccd49b06ea91b765e1cc27afefe723920b"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:16455ace947d8d9e5391435c2977178d0ff03a261571e67f627c8fee0f9d431a"},
{file = "scikit_learn-1.0.2-cp37-cp37m-win32.whl", hash = "sha256:2f3b453e0b149898577e301d27e098dfe1a36943f7bb0ad704d1e548efc3b448"},
{file = "scikit_learn-1.0.2-cp37-cp37m-win_amd64.whl", hash = "sha256:46f431ec59dead665e1370314dbebc99ead05e1c0a9df42f22d6a0e00044820f"},
{file = "scikit_learn-1.0.2-cp38-cp38-macosx_10_13_x86_64.whl", hash = "sha256:ff3fa8ea0e09e38677762afc6e14cad77b5e125b0ea70c9bba1992f02c93b028"},
{file = "scikit_learn-1.0.2-cp38-cp38-macosx_12_0_arm64.whl", hash = "sha256:9369b030e155f8188743eb4893ac17a27f81d28a884af460870c7c072f114243"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:7d6b2475f1c23a698b48515217eb26b45a6598c7b1840ba23b3c5acece658dbb"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:285db0352e635b9e3392b0b426bc48c3b485512d3b4ac3c7a44ec2a2ba061e66"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5cb33fe1dc6f73dc19e67b264dbb5dde2a0539b986435fdd78ed978c14654830"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b1391d1a6e2268485a63c3073111fe3ba6ec5145fc957481cfd0652be571226d"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bc3744dabc56b50bec73624aeca02e0def06b03cb287de26836e730659c5d29c"},
{file = "scikit_learn-1.0.2-cp38-cp38-win32.whl", hash = "sha256:a999c9f02ff9570c783069f1074f06fe7386ec65b84c983db5aeb8144356a355"},
{file = "scikit_learn-1.0.2-cp38-cp38-win_amd64.whl", hash = "sha256:7626a34eabbf370a638f32d1a3ad50526844ba58d63e3ab81ba91e2a7c6d037e"},
{file = "scikit_learn-1.0.2-cp39-cp39-macosx_10_13_x86_64.whl", hash = "sha256:a90b60048f9ffdd962d2ad2fb16367a87ac34d76e02550968719eb7b5716fd10"},
{file = "scikit_learn-1.0.2-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:7a93c1292799620df90348800d5ac06f3794c1316ca247525fa31169f6d25855"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:eabceab574f471de0b0eb3f2ecf2eee9f10b3106570481d007ed1c84ebf6d6a1"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:55f2f3a8414e14fbee03782f9fe16cca0f141d639d2b1c1a36779fa069e1db57"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:80095a1e4b93bd33261ef03b9bc86d6db649f988ea4dbcf7110d0cded8d7213d"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fa38a1b9b38ae1fad2863eff5e0d69608567453fdfc850c992e6e47eb764e846"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ff746a69ff2ef25f62b36338c615dd15954ddc3ab8e73530237dd73235e76d62"},
{file = "scikit_learn-1.0.2-cp39-cp39-win32.whl", hash = "sha256:e174242caecb11e4abf169342641778f68e1bfaba80cd18acd6bc84286b9a534"},
{file = "scikit_learn-1.0.2-cp39-cp39-win_amd64.whl", hash = "sha256:b54a62c6e318ddbfa7d22c383466d38d2ee770ebdb5ddb668d56a099f6eaf75f"},
]
scipy = [
{file = "scipy-1.9.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:0424d1bbbfa51d5ddaa16d067fd593863c9f2fb7c6840c32f8a08a8832f8e7a4"},
{file = "scipy-1.9.0-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:8f2232c9d9119ec356240255a715a289b3a33be828c3e4abac11fd052ce15b1e"},
{file = "scipy-1.9.0-cp310-cp310-macosx_12_0_universal2.macosx_10_9_x86_64.whl", hash = "sha256:e2004d2a3c397b26ca78e67c9d320153a1a9b71ae713ad33f4a3a3ab3d79cc65"},
{file = "scipy-1.9.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:45f0d6c0d6e55582d3b8f5c58ad4ca4259a02affb190f89f06c8cc02e21bba81"},
{file = "scipy-1.9.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:79dd7876614fc2869bf5d311ef33962d2066ea888bc66c80fd4fa80f8772e5a9"},
{file = "scipy-1.9.0-cp310-cp310-win_amd64.whl", hash = "sha256:10417935486b320d98536d732a58362e3d37e84add98c251e070c59a6bfe0863"},
{file = "scipy-1.9.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:adb6c438c6ef550e2bb83968e772b9690cb421f2c6073f9c2cb6af15ee538bc9"},
{file = "scipy-1.9.0-cp38-cp38-macosx_12_0_arm64.whl", hash = "sha256:8d541db2d441ef87afb60c4a2addb00c3af281633602a4967e733ef4b7050504"},
{file = "scipy-1.9.0-cp38-cp38-macosx_12_0_universal2.macosx_10_9_x86_64.whl", hash = "sha256:97a1f1e51ea30782d7baa8d0c52f72c3f9f05cb609cf1b990664231c5102bccd"},
{file = "scipy-1.9.0-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:16207622570af10f9e6a2cdc7da7a9660678852477adbcd056b6d1057a036fef"},
{file = "scipy-1.9.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bb687d245b6963673c639f318eea7e875d1ba147a67925586abed3d6f39bb7d8"},
{file = "scipy-1.9.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:73b704c5eea9be811919cae4caacf3180dd9212d9aed08477c1d2ba14900a9de"},
{file = "scipy-1.9.0-cp38-cp38-win32.whl", hash = "sha256:12005d30894e4fe7b247f7233ba0801a341f887b62e2eb99034dd6f2a8a33ad6"},
{file = "scipy-1.9.0-cp38-cp38-win_amd64.whl", hash = "sha256:fc58c3fcb8a724b703ffbc126afdca5a8353d4d5945d5c92db85617e165299e7"},
{file = "scipy-1.9.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:01c2015e132774feefe059d5354055fec6b751d7a7d70ad2cf5ce314e7426e2a"},
{file = "scipy-1.9.0-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:f7c3c578ff556333f3890c2df6c056955d53537bb176698359088108af73a58f"},
{file = "scipy-1.9.0-cp39-cp39-macosx_12_0_universal2.macosx_10_9_x86_64.whl", hash = "sha256:e2ac088ea4aa61115b96b47f5f3d94b3fa29554340b6629cd2bfe6b0521ee33b"},
{file = "scipy-1.9.0-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:5d1b9cf3771fd921f7213b4b886ab2606010343bb36259b544a816044576d69e"},
{file = "scipy-1.9.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d3a326673ac5afa9ef5613a61626b9ec15c8f7222b4ecd1ce0fd8fcba7b83c59"},
{file = "scipy-1.9.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:693b3fe2e7736ce0dbc72b4d933798eb6ca8ce51b8b934e3f547cc06f48b2afb"},
{file = "scipy-1.9.0-cp39-cp39-win32.whl", hash = "sha256:7bad16b91918bf3288089a78a4157e04892ea6475fb7a1d9bcdf32c30c8a3dba"},
{file = "scipy-1.9.0-cp39-cp39-win_amd64.whl", hash = "sha256:bd490f77f35800d5620f4d9af669e372d9a88db1f76ef219e1609cc4ecdd1a24"},
{file = "scipy-1.9.0.tar.gz", hash = "sha256:c0dfd7d2429452e7e94904c6a3af63cbaa3cf51b348bd9d35b42db7e9ad42791"},
]
seaborn = [
{file = "seaborn-0.11.2-py3-none-any.whl", hash = "sha256:85a6baa9b55f81a0623abddc4a26b334653ff4c6b18c418361de19dbba0ef283"},
{file = "seaborn-0.11.2.tar.gz", hash = "sha256:cf45e9286d40826864be0e3c066f98536982baf701a7caa386511792d61ff4f6"},
]
secretstorage = [
{file = "SecretStorage-3.3.3-py3-none-any.whl", hash = "sha256:f356e6628222568e3af06f2eba8df495efa13b3b63081dafd4f7d9a7b7bc9f99"},
{file = "SecretStorage-3.3.3.tar.gz", hash = "sha256:2403533ef369eca6d2ba81718576c5e0f564d5cca1b58f73a8b23e7d4eeebd77"},
]
send2trash = [
{file = "Send2Trash-1.8.0-py3-none-any.whl", hash = "sha256:f20eaadfdb517eaca5ce077640cb261c7d2698385a6a0f072a4a5447fd49fa08"},
{file = "Send2Trash-1.8.0.tar.gz", hash = "sha256:d2c24762fd3759860a0aff155e45871447ea58d2be6bdd39b5c8f966a0c99c2d"},
]
setuptools-scm = [
{file = "setuptools_scm-6.4.2-py3-none-any.whl", hash = "sha256:acea13255093849de7ccb11af9e1fb8bde7067783450cee9ef7a93139bddf6d4"},
{file = "setuptools_scm-6.4.2.tar.gz", hash = "sha256:6833ac65c6ed9711a4d5d2266f8024cfa07c533a0e55f4c12f6eff280a5a9e30"},
]
shap = [
{file = "shap-0.39.0-cp36-cp36m-win_amd64.whl", hash = "sha256:bf9af9b089ef95cb1ac0df80a43f8144aa9095d10f282cb5c19643ff88a6a79d"},
{file = "shap-0.39.0-cp37-cp37m-win_amd64.whl", hash = "sha256:b44f9fbb7349f5406b98b4ec24c672f8fe932606bb7574a8aae2238410c55289"},
{file = "shap-0.39.0-cp38-cp38-win_amd64.whl", hash = "sha256:c0d51b44c15eae1c12e51ed498f898cfc5e12d6be7e0d4f733ce6453f6ec85a4"},
{file = "shap-0.39.0.tar.gz", hash = "sha256:0196a6c12cc98f8b48ce9c5968550902432b80290da6fa7be8655441a1c6251a"},
]
six = [
{file = "six-1.16.0-py2.py3-none-any.whl", hash = "sha256:8abb2f1d86890a2dfb989f9a77cfcfd3e47c2a354b01111771326f8aa26e0254"},
{file = "six-1.16.0.tar.gz", hash = "sha256:1e61c37477a1626458e36f7b1d82aa5c9b094fa4802892072e49de9c60c4c926"},
]
slicer = [
{file = "slicer-0.0.7-py3-none-any.whl", hash = "sha256:0b94faa5251c0f23782c03f7b7eedda91d80144059645f452c4bc80fab875976"},
{file = "slicer-0.0.7.tar.gz", hash = "sha256:f5d5f7b45f98d155b9c0ba6554fa9770c6b26d5793a3e77a1030fb56910ebeec"},
]
snowballstemmer = [
{file = "snowballstemmer-2.2.0-py2.py3-none-any.whl", hash = "sha256:c8e1716e83cc398ae16824e5572ae04e0d9fc2c6b985fb0f900f5f0c96ecba1a"},
{file = "snowballstemmer-2.2.0.tar.gz", hash = "sha256:09b16deb8547d3412ad7b590689584cd0fe25ec8db3be37788be3810cbf19cb1"},
]
soupsieve = [
{file = "soupsieve-2.3.2.post1-py3-none-any.whl", hash = "sha256:3b2503d3c7084a42b1ebd08116e5f81aadfaea95863628c80a3b774a11b7c759"},
{file = "soupsieve-2.3.2.post1.tar.gz", hash = "sha256:fc53893b3da2c33de295667a0e19f078c14bf86544af307354de5fcf12a3f30d"},
]
sparse = [
{file = "sparse-0.13.0-py2.py3-none-any.whl", hash = "sha256:95ed0b649a0663b1488756ad4cf242b0a9bb2c9a25bc752a7c6ca9fbe8258966"},
{file = "sparse-0.13.0.tar.gz", hash = "sha256:685dc994aa770ee1b23f2d5392819c8429f27958771f8dceb2c4fb80210d5915"},
]
sphinx = [
{file = "Sphinx-5.1.1-py3-none-any.whl", hash = "sha256:309a8da80cb6da9f4713438e5b55861877d5d7976b69d87e336733637ea12693"},
{file = "Sphinx-5.1.1.tar.gz", hash = "sha256:ba3224a4e206e1fbdecf98a4fae4992ef9b24b85ebf7b584bb340156eaf08d89"},
]
sphinx-multiversion = []
sphinx-rtd-theme = [
{file = "sphinx_rtd_theme-1.0.0-py2.py3-none-any.whl", hash = "sha256:4d35a56f4508cfee4c4fb604373ede6feae2a306731d533f409ef5c3496fdbd8"},
{file = "sphinx_rtd_theme-1.0.0.tar.gz", hash = "sha256:eec6d497e4c2195fa0e8b2016b337532b8a699a68bcb22a512870e16925c6a5c"},
]
sphinxcontrib-applehelp = [
{file = "sphinxcontrib-applehelp-1.0.2.tar.gz", hash = "sha256:a072735ec80e7675e3f432fcae8610ecf509c5f1869d17e2eecff44389cdbc58"},
{file = "sphinxcontrib_applehelp-1.0.2-py2.py3-none-any.whl", hash = "sha256:806111e5e962be97c29ec4c1e7fe277bfd19e9652fb1a4392105b43e01af885a"},
]
sphinxcontrib-devhelp = [
{file = "sphinxcontrib-devhelp-1.0.2.tar.gz", hash = "sha256:ff7f1afa7b9642e7060379360a67e9c41e8f3121f2ce9164266f61b9f4b338e4"},
{file = "sphinxcontrib_devhelp-1.0.2-py2.py3-none-any.whl", hash = "sha256:8165223f9a335cc1af7ffe1ed31d2871f325254c0423bc0c4c7cd1c1e4734a2e"},
]
sphinxcontrib-googleanalytics = []
sphinxcontrib-htmlhelp = [
{file = "sphinxcontrib-htmlhelp-2.0.0.tar.gz", hash = "sha256:f5f8bb2d0d629f398bf47d0d69c07bc13b65f75a81ad9e2f71a63d4b7a2f6db2"},
{file = "sphinxcontrib_htmlhelp-2.0.0-py2.py3-none-any.whl", hash = "sha256:d412243dfb797ae3ec2b59eca0e52dac12e75a241bf0e4eb861e450d06c6ed07"},
]
sphinxcontrib-jsmath = [
{file = "sphinxcontrib-jsmath-1.0.1.tar.gz", hash = "sha256:a9925e4a4587247ed2191a22df5f6970656cb8ca2bd6284309578f2153e0c4b8"},
{file = "sphinxcontrib_jsmath-1.0.1-py2.py3-none-any.whl", hash = "sha256:2ec2eaebfb78f3f2078e73666b1415417a116cc848b72e5172e596c871103178"},
]
sphinxcontrib-qthelp = [
{file = "sphinxcontrib-qthelp-1.0.3.tar.gz", hash = "sha256:4c33767ee058b70dba89a6fc5c1892c0d57a54be67ddd3e7875a18d14cba5a72"},
{file = "sphinxcontrib_qthelp-1.0.3-py2.py3-none-any.whl", hash = "sha256:bd9fc24bcb748a8d51fd4ecaade681350aa63009a347a8c14e637895444dfab6"},
]
sphinxcontrib-serializinghtml = [
{file = "sphinxcontrib-serializinghtml-1.1.5.tar.gz", hash = "sha256:aa5f6de5dfdf809ef505c4895e51ef5c9eac17d0f287933eb49ec495280b6952"},
{file = "sphinxcontrib_serializinghtml-1.1.5-py2.py3-none-any.whl", hash = "sha256:352a9a00ae864471d3a7ead8d7d79f5fc0b57e8b3f95e9867eb9eb28999b92fd"},
]
stack-data = [
{file = "stack_data-0.4.0-py3-none-any.whl", hash = "sha256:b94fed36d725cfabc6d09ed5886913e35eed9009766a1af1d5941b9da3a94aaa"},
{file = "stack_data-0.4.0.tar.gz", hash = "sha256:a90ae7e260f7d15aefeceb46f0a028d4ccb9eb8856475c53e341945342d41ea7"},
]
statsmodels = [
{file = "statsmodels-0.13.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:3e7ca5b7e678c0bb7a24f5c735d58ac104a50eb61b17c484cce0e221a095560f"},
{file = "statsmodels-0.13.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:066a75d5585378b2df972f81a90b9a3da5e567b7d4833300c1597438c1a35e29"},
{file = "statsmodels-0.13.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f15f38dfc9c5c091662cb619e12322047368c67aef449c7554d9b324a15f7a94"},
{file = "statsmodels-0.13.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5c4ccc6b4744613367e8a233bd952c8a838db8f528f9fe033bda25aa13fc7d08"},
{file = "statsmodels-0.13.2-cp310-cp310-win_amd64.whl", hash = "sha256:855b1cc2a91ab140b9bcf304b1731705805ce73223bf500b988804968554c0ed"},
{file = "statsmodels-0.13.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:b69c9af7606325095f7c40c581957bad9f28775653d41537c1ec4cd1b185ff5b"},
{file = "statsmodels-0.13.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ab31bac0f72b83bca1f217a12ec6f309a56485a50c4a705fbdd63112213d4da4"},
{file = "statsmodels-0.13.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d680b910b57fc0aa87472662cdfe09aae0e21db4bdf19ccd6420fd4dffda892"},
{file = "statsmodels-0.13.2-cp37-cp37m-win32.whl", hash = "sha256:9e9a3f661d372431850d55157d049e079493c97fc06f550d23d8c8c70805cc48"},
{file = "statsmodels-0.13.2-cp37-cp37m-win_amd64.whl", hash = "sha256:c9f6326870c095ef688f072cd476b932aff0906d60193eaa08e93ec23b29ca83"},
{file = "statsmodels-0.13.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:5bc050f25f1ba1221efef9ea01b751c60935ad787fcd4259f4ece986f2da9141"},
{file = "statsmodels-0.13.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:426b1c8ea3918d3d27dbfa38f2bee36cabf41d32163e2cbb3adfb0178b24626a"},
{file = "statsmodels-0.13.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:45b80fac4a63308b1e93fa9dc27a8598930fd5dfd77c850ca077bb850254c6d7"},
{file = "statsmodels-0.13.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:78ee69ec0e0f79f627245c65f8a495b8581c2ea19084aac63941815feb15dcf3"},
{file = "statsmodels-0.13.2-cp38-cp38-win32.whl", hash = "sha256:20483cc30e11aa072b30d307bb80470f86a23ae8fffa51439ca54509d7aa9b05"},
{file = "statsmodels-0.13.2-cp38-cp38-win_amd64.whl", hash = "sha256:bf43051a92231ccb9de95e4b6d22d3b15e499ee5ee9bff0a20e6b6ad293e34cb"},
{file = "statsmodels-0.13.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6bf0dfed5f5edb59b5922b295392cd276463b10a5e730f7e57ee4ff2d8e9a87e"},
{file = "statsmodels-0.13.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:a403b559c5586dab7ac0fc9e754c737b017c96cce0ddd66ff9094764cdaf293d"},
{file = "statsmodels-0.13.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9f23554dd025ea354ce072ba32bfaa840d2b856372e5734290e181d27a1f9e0c"},
{file = "statsmodels-0.13.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:815f4df713e3eb6f40ae175c71f2a70d32f9219b5b4d23d4e0faab1171ba93ba"},
{file = "statsmodels-0.13.2-cp39-cp39-win32.whl", hash = "sha256:461c82ab2265fa8457b96afc23ef3ca19f42eb070436e0241b57e58a38863901"},
{file = "statsmodels-0.13.2-cp39-cp39-win_amd64.whl", hash = "sha256:39daab5a8a9332c8ea83d6464d065080c9ba65f236daf6a64aa18f64ef776fad"},
{file = "statsmodels-0.13.2.tar.gz", hash = "sha256:77dc292c9939c036a476f1770f9d08976b05437daa229928da73231147cde7d4"},
]
sympy = [
{file = "sympy-1.10.1-py3-none-any.whl", hash = "sha256:df75d738930f6fe9ebe7034e59d56698f29e85f443f743e51e47df0caccc2130"},
{file = "sympy-1.10.1.tar.gz", hash = "sha256:5939eeffdf9e152172601463626c022a2c27e75cf6278de8d401d50c9d58787b"},
]
tensorboard = [
{file = "tensorboard-2.9.1-py3-none-any.whl", hash = "sha256:baa727f791776f9e5841d347127720ceed4bbd59c36b40604b95fb2ae6029276"},
]
tensorboard-data-server = [
{file = "tensorboard_data_server-0.6.1-py3-none-any.whl", hash = "sha256:809fe9887682d35c1f7d1f54f0f40f98bb1f771b14265b453ca051e2ce58fca7"},
{file = "tensorboard_data_server-0.6.1-py3-none-macosx_10_9_x86_64.whl", hash = "sha256:fa8cef9be4fcae2f2363c88176638baf2da19c5ec90addb49b1cde05c95c88ee"},
{file = "tensorboard_data_server-0.6.1-py3-none-manylinux2010_x86_64.whl", hash = "sha256:d8237580755e58eff68d1f3abefb5b1e39ae5c8b127cc40920f9c4fb33f4b98a"},
]
tensorboard-plugin-wit = [
{file = "tensorboard_plugin_wit-1.8.1-py3-none-any.whl", hash = "sha256:ff26bdd583d155aa951ee3b152b3d0cffae8005dc697f72b44a8e8c2a77a8cbe"},
]
tensorflow = [
{file = "tensorflow-2.9.1-cp310-cp310-macosx_10_14_x86_64.whl", hash = "sha256:2c77edeb4a9d542032ddac93f732e3dd3d1c4d15c8795e4e43166ed46503582d"},
{file = "tensorflow-2.9.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:add2f4e55fd7209a5cb01d815364adc9827184d74a83c2f1616c72be2e85c8b7"},
{file = "tensorflow-2.9.1-cp310-cp310-win_amd64.whl", hash = "sha256:8be7d7f6091ac5af09c19eb178bbbd51346794684851a7bd57826d2d1cfa220e"},
{file = "tensorflow-2.9.1-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:b1f7674220b19ab5d39d9313f9c60278cef56052614600cba192700c658e502c"},
{file = "tensorflow-2.9.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:488a7a964298b92278cca593b7f687abd1c8f51982a08dc6ded5efd9457bf477"},
{file = "tensorflow-2.9.1-cp37-cp37m-win_amd64.whl", hash = "sha256:abbdcfa7c14974f9bafa8f51214c037639a8991eef3dbcafc506e608a673c54c"},
{file = "tensorflow-2.9.1-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:df048d216f90c9fcbda22bdd03420b0e0e6e7ee98b8b91f221afef46b7664635"},
{file = "tensorflow-2.9.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:adc1b80ec32337ba1720fadbf64242aae06a7ee1c7aea6a1a0cffe1968c89b25"},
{file = "tensorflow-2.9.1-cp38-cp38-win_amd64.whl", hash = "sha256:a27bffb91f0aef6ba2a30f6507932d9c6801af56cfee8442767f9f3f69bb37a6"},
{file = "tensorflow-2.9.1-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:81ad045d39cf4360d5e6183cdda0990661302cca502917a12e54177989b340b9"},
{file = "tensorflow-2.9.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:212a4d97fb6dedff257703d7a2ce50b0b8ec23d955938083363aa7bc526918cc"},
{file = "tensorflow-2.9.1-cp39-cp39-win_amd64.whl", hash = "sha256:1c2c6557f6bd7e0d3f1063e27346e4f1c4353db2b7751ac7451033b335fa4b7c"},
]
tensorflow-estimator = [
{file = "tensorflow_estimator-2.9.0-py2.py3-none-any.whl", hash = "sha256:e9762bb302f51bc1eb2f35d19f0190a6a2d809d754d5def788c4328fe3746744"},
]
tensorflow-io-gcs-filesystem = [
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp310-cp310-macosx_10_14_x86_64.whl", hash = "sha256:4222a9d0c0ddeca2fd2bfd70f5ed149346f5ba12ffe65d817d8e18393341d8e2"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:5457eeef1f0f5f294225808b2290a251a2e4639ec66db9d32aa4ae62e807d7e8"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4c71cebb26ce10e6e48dc46e6fc0acef5329b01f75a5e76c7defb77175bf97f7"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp310-cp310-win_amd64.whl", hash = "sha256:1c165595c7a67668b44c7ffb9746ffb351c630940d9cca7f2b31f8adf7a36b94"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:c20e1f95b904f43ac86fdb251f222be2c3e7026e9ddbde2a3b6a456f26a83944"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:1cccdc12ec304a7ab3e6f85919ba5a77c2bf751b3d0f9e62196ee7df11a8136a"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:94645cac4449dd2ccc40327c23d0256cf4e96597e5a55116a91076e9dc96023e"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp37-cp37m-win_amd64.whl", hash = "sha256:ce0d7eaaebfcb5fdcff161af0e8a4b94d5dc346299111c08373d66058011a16d"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:e9569dadd79b2d4b28dbe5be47c378a884414a85c89eaeae6115bcba4f3cbb96"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:84a463e228cde296fc63672902a2eceac9fec5f8ae7605e9f18824db591e7f5c"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:531214e48ef64a96f565550b283e75cf0119abff14048a11a25453b47ec5b61c"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp38-cp38-win_amd64.whl", hash = "sha256:44b28c9c6a9e25774a53ec2e85ed4d0b5c4db3a7d3a4011ade94fa9ee636393c"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:09f9df13737e2b4d92b73653509281d77732ef9a90a1ebef824511ce5431eb0a"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:c71438e6459f52462b95f98ab17b20cd1a269a1efe837e4df426a0b79359f3b7"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bd285595afe03740553710ccdbd1397d69a8e48d758c731c0de1f1c5a71a9fe5"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp39-cp39-win_amd64.whl", hash = "sha256:2940b4ab6848ef5ec34dc3c140b5ae9eba0da13453da839c30ebe3461a6eb51d"},
]
termcolor = [
{file = "termcolor-1.1.0.tar.gz", hash = "sha256:1d6d69ce66211143803fbc56652b41d73b4a400a2891d7bf7a1cdf4c02de613b"},
]
terminado = [
{file = "terminado-0.15.0-py3-none-any.whl", hash = "sha256:0d5f126fbfdb5887b25ae7d9d07b0d716b1cc0ccaacc71c1f3c14d228e065197"},
{file = "terminado-0.15.0.tar.gz", hash = "sha256:ab4eeedccfcc1e6134bfee86106af90852c69d602884ea3a1e8ca6d4486e9bfe"},
]
threadpoolctl = [
{file = "threadpoolctl-3.1.0-py3-none-any.whl", hash = "sha256:8b99adda265feb6773280df41eece7b2e6561b772d21ffd52e372f999024907b"},
{file = "threadpoolctl-3.1.0.tar.gz", hash = "sha256:a335baacfaa4400ae1f0d8e3a58d6674d2f8828e3716bb2802c44955ad391380"},
]
tinycss2 = [
{file = "tinycss2-1.1.1-py3-none-any.whl", hash = "sha256:fe794ceaadfe3cf3e686b22155d0da5780dd0e273471a51846d0a02bc204fec8"},
{file = "tinycss2-1.1.1.tar.gz", hash = "sha256:b2e44dd8883c360c35dd0d1b5aad0b610e5156c2cb3b33434634e539ead9d8bf"},
]
tokenize-rt = [
{file = "tokenize_rt-4.2.1-py2.py3-none-any.whl", hash = "sha256:08a27fa032a81cf45e8858d0ac706004fcd523e8463415ddf1442be38e204ea8"},
{file = "tokenize_rt-4.2.1.tar.gz", hash = "sha256:0d4f69026fed520f8a1e0103aa36c406ef4661417f20ca643f913e33531b3b94"},
]
tomli = [
{file = "tomli-2.0.1-py3-none-any.whl", hash = "sha256:939de3e7a6161af0c887ef91b7d41a53e7c5a1ca976325f429cb46ea9bc30ecc"},
{file = "tomli-2.0.1.tar.gz", hash = "sha256:de526c12914f0c550d15924c62d72abc48d6fe7364aa87328337a31007fe8a4f"},
]
torch = [
{file = "torch-1.12.1-cp310-cp310-manylinux1_x86_64.whl", hash = "sha256:9c038662db894a23e49e385df13d47b2a777ffd56d9bcd5b832593fab0a7e286"},
{file = "torch-1.12.1-cp310-cp310-manylinux2014_aarch64.whl", hash = "sha256:4e1b9c14cf13fd2ab8d769529050629a0e68a6fc5cb8e84b4a3cc1dd8c4fe541"},
{file = "torch-1.12.1-cp310-cp310-win_amd64.whl", hash = "sha256:e9c8f4a311ac29fc7e8e955cfb7733deb5dbe1bdaabf5d4af2765695824b7e0d"},
{file = "torch-1.12.1-cp310-none-macosx_10_9_x86_64.whl", hash = "sha256:976c3f997cea38ee91a0dd3c3a42322785414748d1761ef926b789dfa97c6134"},
{file = "torch-1.12.1-cp310-none-macosx_11_0_arm64.whl", hash = "sha256:68104e4715a55c4bb29a85c6a8d57d820e0757da363be1ba680fa8cc5be17b52"},
{file = "torch-1.12.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:743784ccea0dc8f2a3fe6a536bec8c4763bd82c1352f314937cb4008d4805de1"},
{file = "torch-1.12.1-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:b5dbcca369800ce99ba7ae6dee3466607a66958afca3b740690d88168752abcf"},
{file = "torch-1.12.1-cp37-cp37m-win_amd64.whl", hash = "sha256:f3b52a634e62821e747e872084ab32fbcb01b7fa7dbb7471b6218279f02a178a"},
{file = "torch-1.12.1-cp37-none-macosx_10_9_x86_64.whl", hash = "sha256:8a34a2fbbaa07c921e1b203f59d3d6e00ed379f2b384445773bd14e328a5b6c8"},
{file = "torch-1.12.1-cp37-none-macosx_11_0_arm64.whl", hash = "sha256:42f639501928caabb9d1d55ddd17f07cd694de146686c24489ab8c615c2871f2"},
{file = "torch-1.12.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:0b44601ec56f7dd44ad8afc00846051162ef9c26a8579dda0a02194327f2d55e"},
{file = "torch-1.12.1-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:cd26d8c5640c3a28c526d41ccdca14cf1cbca0d0f2e14e8263a7ac17194ab1d2"},
{file = "torch-1.12.1-cp38-cp38-win_amd64.whl", hash = "sha256:42e115dab26f60c29e298559dbec88444175528b729ae994ec4c65d56fe267dd"},
{file = "torch-1.12.1-cp38-none-macosx_10_9_x86_64.whl", hash = "sha256:a8320ba9ad87e80ca5a6a016e46ada4d1ba0c54626e135d99b2129a4541c509d"},
{file = "torch-1.12.1-cp38-none-macosx_11_0_arm64.whl", hash = "sha256:03e31c37711db2cd201e02de5826de875529e45a55631d317aadce2f1ed45aa8"},
{file = "torch-1.12.1-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:9b356aea223772cd754edb4d9ecf2a025909b8615a7668ac7d5130f86e7ec421"},
{file = "torch-1.12.1-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:6cf6f54b43c0c30335428195589bd00e764a6d27f3b9ba637aaa8c11aaf93073"},
{file = "torch-1.12.1-cp39-cp39-win_amd64.whl", hash = "sha256:f00c721f489089dc6364a01fd84906348fe02243d0af737f944fddb36003400d"},
{file = "torch-1.12.1-cp39-none-macosx_10_9_x86_64.whl", hash = "sha256:bfec2843daa654f04fda23ba823af03e7b6f7650a873cdb726752d0e3718dada"},
{file = "torch-1.12.1-cp39-none-macosx_11_0_arm64.whl", hash = "sha256:69fe2cae7c39ccadd65a123793d30e0db881f1c1927945519c5c17323131437e"},
]
tornado = [
{file = "tornado-6.2-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:20f638fd8cc85f3cbae3c732326e96addff0a15e22d80f049e00121651e82e72"},
{file = "tornado-6.2-cp37-abi3-macosx_10_9_x86_64.whl", hash = "sha256:87dcafae3e884462f90c90ecc200defe5e580a7fbbb4365eda7c7c1eb809ebc9"},
{file = "tornado-6.2-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ba09ef14ca9893954244fd872798b4ccb2367c165946ce2dd7376aebdde8e3ac"},
{file = "tornado-6.2-cp37-abi3-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b8150f721c101abdef99073bf66d3903e292d851bee51910839831caba341a75"},
{file = "tornado-6.2-cp37-abi3-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d3a2f5999215a3a06a4fc218026cd84c61b8b2b40ac5296a6db1f1451ef04c1e"},
{file = "tornado-6.2-cp37-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:5f8c52d219d4995388119af7ccaa0bcec289535747620116a58d830e7c25d8a8"},
{file = "tornado-6.2-cp37-abi3-musllinux_1_1_i686.whl", hash = "sha256:6fdfabffd8dfcb6cf887428849d30cf19a3ea34c2c248461e1f7d718ad30b66b"},
{file = "tornado-6.2-cp37-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:1d54d13ab8414ed44de07efecb97d4ef7c39f7438cf5e976ccd356bebb1b5fca"},
{file = "tornado-6.2-cp37-abi3-win32.whl", hash = "sha256:5c87076709343557ef8032934ce5f637dbb552efa7b21d08e89ae7619ed0eb23"},
{file = "tornado-6.2-cp37-abi3-win_amd64.whl", hash = "sha256:e5f923aa6a47e133d1cf87d60700889d7eae68988704e20c75fb2d65677a8e4b"},
{file = "tornado-6.2.tar.gz", hash = "sha256:9b630419bde84ec666bfd7ea0a4cb2a8a651c2d5cccdbdd1972a0c859dfc3c13"},
]
tqdm = [
{file = "tqdm-4.64.0-py2.py3-none-any.whl", hash = "sha256:74a2cdefe14d11442cedf3ba4e21a3b84ff9a2dbdc6cfae2c34addb2a14a5ea6"},
{file = "tqdm-4.64.0.tar.gz", hash = "sha256:40be55d30e200777a307a7585aee69e4eabb46b4ec6a4b4a5f2d9f11e7d5408d"},
]
traitlets = [
{file = "traitlets-5.3.0-py3-none-any.whl", hash = "sha256:65fa18961659635933100db8ca120ef6220555286949774b9cfc106f941d1c7a"},
{file = "traitlets-5.3.0.tar.gz", hash = "sha256:0bb9f1f9f017aa8ec187d8b1b2a7a6626a2a1d877116baba52a129bfa124f8e2"},
]
twine = [
{file = "twine-4.0.1-py3-none-any.whl", hash = "sha256:42026c18e394eac3e06693ee52010baa5313e4811d5a11050e7d48436cf41b9e"},
{file = "twine-4.0.1.tar.gz", hash = "sha256:96b1cf12f7ae611a4a40b6ae8e9570215daff0611828f5fe1f37a16255ab24a0"},
]
typing-extensions = [
{file = "typing_extensions-4.3.0-py3-none-any.whl", hash = "sha256:25642c956049920a5aa49edcdd6ab1e06d7e5d467fc00e0506c44ac86fbfca02"},
{file = "typing_extensions-4.3.0.tar.gz", hash = "sha256:e6d2677a32f47fc7eb2795db1dd15c1f34eff616bcaf2cfb5e997f854fa1c4a6"},
]
tzdata = [
{file = "tzdata-2022.2-py2.py3-none-any.whl", hash = "sha256:c3119520447d68ef3eb8187a55a4f44fa455f30eb1b4238fa5691ba094f2b05b"},
{file = "tzdata-2022.2.tar.gz", hash = "sha256:21f4f0d7241572efa7f7a4fdabb052e61b55dc48274e6842697ccdf5253e5451"},
]
tzlocal = [
{file = "tzlocal-4.2-py3-none-any.whl", hash = "sha256:89885494684c929d9191c57aa27502afc87a579be5cdd3225c77c463ea043745"},
{file = "tzlocal-4.2.tar.gz", hash = "sha256:ee5842fa3a795f023514ac2d801c4a81d1743bbe642e3940143326b3a00addd7"},
]
urllib3 = [
{file = "urllib3-1.26.11-py2.py3-none-any.whl", hash = "sha256:c33ccba33c819596124764c23a97d25f32b28433ba0dedeb77d873a38722c9bc"},
{file = "urllib3-1.26.11.tar.gz", hash = "sha256:ea6e8fb210b19d950fab93b60c9009226c63a28808bc8386e05301e25883ac0a"},
]
wcwidth = [
{file = "wcwidth-0.2.5-py2.py3-none-any.whl", hash = "sha256:beb4802a9cebb9144e99086eff703a642a13d6a0052920003a230f3294bbe784"},
{file = "wcwidth-0.2.5.tar.gz", hash = "sha256:c4d647b99872929fdb7bdcaa4fbe7f01413ed3d98077df798530e5b04f116c83"},
]
webencodings = [
{file = "webencodings-0.5.1-py2.py3-none-any.whl", hash = "sha256:a0af1213f3c2226497a97e2b3aa01a7e4bee4f403f95be16fc9acd2947514a78"},
{file = "webencodings-0.5.1.tar.gz", hash = "sha256:b36a1c245f2d304965eb4e0a82848379241dc04b865afcc4aab16748587e1923"},
]
werkzeug = [
{file = "Werkzeug-2.2.2-py3-none-any.whl", hash = "sha256:f979ab81f58d7318e064e99c4506445d60135ac5cd2e177a2de0089bfd4c9bd5"},
{file = "Werkzeug-2.2.2.tar.gz", hash = "sha256:7ea2d48322cc7c0f8b3a215ed73eabd7b5d75d0b50e31ab006286ccff9e00b8f"},
]
widgetsnbextension = [
{file = "widgetsnbextension-4.0.2-py3-none-any.whl", hash = "sha256:966bd61443926b6adcc0abef9f499c48bdeda181c333b0f49842d7385d440579"},
{file = "widgetsnbextension-4.0.2.tar.gz", hash = "sha256:07f0e8582f920b24316cef16490f1aeb498f2c875d48980540e5c5dbf0ff5e2d"},
]
wrapt = [
{file = "wrapt-1.14.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:1b376b3f4896e7930f1f772ac4b064ac12598d1c38d04907e696cc4d794b43d3"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:903500616422a40a98a5a3c4ff4ed9d0066f3b4c951fa286018ecdf0750194ef"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:5a9a0d155deafd9448baff28c08e150d9b24ff010e899311ddd63c45c2445e28"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:ddaea91abf8b0d13443f6dac52e89051a5063c7d014710dcb4d4abb2ff811a59"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:36f582d0c6bc99d5f39cd3ac2a9062e57f3cf606ade29a0a0d6b323462f4dd87"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:7ef58fb89674095bfc57c4069e95d7a31cfdc0939e2a579882ac7d55aadfd2a1"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:e2f83e18fe2f4c9e7db597e988f72712c0c3676d337d8b101f6758107c42425b"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:ee2b1b1769f6707a8a445162ea16dddf74285c3964f605877a20e38545c3c462"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:833b58d5d0b7e5b9832869f039203389ac7cbf01765639c7309fd50ef619e0b1"},
{file = "wrapt-1.14.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:80bb5c256f1415f747011dc3604b59bc1f91c6e7150bd7db03b19170ee06b320"},
{file = "wrapt-1.14.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:07f7a7d0f388028b2df1d916e94bbb40624c59b48ecc6cbc232546706fac74c2"},
{file = "wrapt-1.14.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:02b41b633c6261feff8ddd8d11c711df6842aba629fdd3da10249a53211a72c4"},
{file = "wrapt-1.14.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2fe803deacd09a233e4762a1adcea5db5d31e6be577a43352936179d14d90069"},
{file = "wrapt-1.14.1-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:257fd78c513e0fb5cdbe058c27a0624c9884e735bbd131935fd49e9fe719d310"},
{file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:4fcc4649dc762cddacd193e6b55bc02edca674067f5f98166d7713b193932b7f"},
{file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:11871514607b15cfeb87c547a49bca19fde402f32e2b1c24a632506c0a756656"},
{file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:8ad85f7f4e20964db4daadcab70b47ab05c7c1cf2a7c1e51087bfaa83831854c"},
{file = "wrapt-1.14.1-cp310-cp310-win32.whl", hash = "sha256:a9a52172be0b5aae932bef82a79ec0a0ce87288c7d132946d645eba03f0ad8a8"},
{file = "wrapt-1.14.1-cp310-cp310-win_amd64.whl", hash = "sha256:6d323e1554b3d22cfc03cd3243b5bb815a51f5249fdcbb86fda4bf62bab9e164"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:43ca3bbbe97af00f49efb06e352eae40434ca9d915906f77def219b88e85d907"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:6b1a564e6cb69922c7fe3a678b9f9a3c54e72b469875aa8018f18b4d1dd1adf3"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux2010_i686.whl", hash = "sha256:00b6d4ea20a906c0ca56d84f93065b398ab74b927a7a3dbd470f6fc503f95dc3"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux2010_x86_64.whl", hash = "sha256:a85d2b46be66a71bedde836d9e41859879cc54a2a04fad1191eb50c2066f6e9d"},
{file = "wrapt-1.14.1-cp35-cp35m-win32.whl", hash = "sha256:dbcda74c67263139358f4d188ae5faae95c30929281bc6866d00573783c422b7"},
{file = "wrapt-1.14.1-cp35-cp35m-win_amd64.whl", hash = "sha256:b21bb4c09ffabfa0e85e3a6b623e19b80e7acd709b9f91452b8297ace2a8ab00"},
{file = "wrapt-1.14.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:9e0fd32e0148dd5dea6af5fee42beb949098564cc23211a88d799e434255a1f4"},
{file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9736af4641846491aedb3c3f56b9bc5568d92b0692303b5a305301a95dfd38b1"},
{file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5b02d65b9ccf0ef6c34cba6cf5bf2aab1bb2f49c6090bafeecc9cd81ad4ea1c1"},
{file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:21ac0156c4b089b330b7666db40feee30a5d52634cc4560e1905d6529a3897ff"},
{file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:9f3e6f9e05148ff90002b884fbc2a86bd303ae847e472f44ecc06c2cd2fcdb2d"},
{file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:6e743de5e9c3d1b7185870f480587b75b1cb604832e380d64f9504a0535912d1"},
{file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:d79d7d5dc8a32b7093e81e97dad755127ff77bcc899e845f41bf71747af0c569"},
{file = "wrapt-1.14.1-cp36-cp36m-win32.whl", hash = "sha256:81b19725065dcb43df02b37e03278c011a09e49757287dca60c5aecdd5a0b8ed"},
{file = "wrapt-1.14.1-cp36-cp36m-win_amd64.whl", hash = "sha256:b014c23646a467558be7da3d6b9fa409b2c567d2110599b7cf9a0c5992b3b471"},
{file = "wrapt-1.14.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:88bd7b6bd70a5b6803c1abf6bca012f7ed963e58c68d76ee20b9d751c74a3248"},
{file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b5901a312f4d14c59918c221323068fad0540e34324925c8475263841dbdfe68"},
{file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d77c85fedff92cf788face9bfa3ebaa364448ebb1d765302e9af11bf449ca36d"},
{file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8d649d616e5c6a678b26d15ece345354f7c2286acd6db868e65fcc5ff7c24a77"},
{file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:7d2872609603cb35ca513d7404a94d6d608fc13211563571117046c9d2bcc3d7"},
{file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:ee6acae74a2b91865910eef5e7de37dc6895ad96fa23603d1d27ea69df545015"},
{file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:2b39d38039a1fdad98c87279b48bc5dce2c0ca0d73483b12cb72aa9609278e8a"},
{file = "wrapt-1.14.1-cp37-cp37m-win32.whl", hash = "sha256:60db23fa423575eeb65ea430cee741acb7c26a1365d103f7b0f6ec412b893853"},
{file = "wrapt-1.14.1-cp37-cp37m-win_amd64.whl", hash = "sha256:709fe01086a55cf79d20f741f39325018f4df051ef39fe921b1ebe780a66184c"},
{file = "wrapt-1.14.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:8c0ce1e99116d5ab21355d8ebe53d9460366704ea38ae4d9f6933188f327b456"},
{file = "wrapt-1.14.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:e3fb1677c720409d5f671e39bac6c9e0e422584e5f518bfd50aa4cbbea02433f"},
{file = "wrapt-1.14.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:642c2e7a804fcf18c222e1060df25fc210b9c58db7c91416fb055897fc27e8cc"},
{file = "wrapt-1.14.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7b7c050ae976e286906dd3f26009e117eb000fb2cf3533398c5ad9ccc86867b1"},
{file = "wrapt-1.14.1-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ef3f72c9666bba2bab70d2a8b79f2c6d2c1a42a7f7e2b0ec83bb2f9e383950af"},
{file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:01c205616a89d09827986bc4e859bcabd64f5a0662a7fe95e0d359424e0e071b"},
{file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:5a0f54ce2c092aaf439813735584b9537cad479575a09892b8352fea5e988dc0"},
{file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:2cf71233a0ed05ccdabe209c606fe0bac7379fdcf687f39b944420d2a09fdb57"},
{file = "wrapt-1.14.1-cp38-cp38-win32.whl", hash = "sha256:aa31fdcc33fef9eb2552cbcbfee7773d5a6792c137b359e82879c101e98584c5"},
{file = "wrapt-1.14.1-cp38-cp38-win_amd64.whl", hash = "sha256:d1967f46ea8f2db647c786e78d8cc7e4313dbd1b0aca360592d8027b8508e24d"},
{file = "wrapt-1.14.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:3232822c7d98d23895ccc443bbdf57c7412c5a65996c30442ebe6ed3df335383"},
{file = "wrapt-1.14.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:988635d122aaf2bdcef9e795435662bcd65b02f4f4c1ae37fbee7401c440b3a7"},
{file = "wrapt-1.14.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9cca3c2cdadb362116235fdbd411735de4328c61425b0aa9f872fd76d02c4e86"},
{file = "wrapt-1.14.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d52a25136894c63de15a35bc0bdc5adb4b0e173b9c0d07a2be9d3ca64a332735"},
{file = "wrapt-1.14.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:40e7bc81c9e2b2734ea4bc1aceb8a8f0ceaac7c5299bc5d69e37c44d9081d43b"},
{file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:b9b7a708dd92306328117d8c4b62e2194d00c365f18eff11a9b53c6f923b01e3"},
{file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:6a9a25751acb379b466ff6be78a315e2b439d4c94c1e99cb7266d40a537995d3"},
{file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:34aa51c45f28ba7f12accd624225e2b1e5a3a45206aa191f6f9aac931d9d56fe"},
{file = "wrapt-1.14.1-cp39-cp39-win32.whl", hash = "sha256:dee0ce50c6a2dd9056c20db781e9c1cfd33e77d2d569f5d1d9321c641bb903d5"},
{file = "wrapt-1.14.1-cp39-cp39-win_amd64.whl", hash = "sha256:dee60e1de1898bde3b238f18340eec6148986da0455d8ba7848d50470a7a32fb"},
{file = "wrapt-1.14.1.tar.gz", hash = "sha256:380a85cf89e0e69b7cfbe2ea9f765f004ff419f34194018a6827ac0e3edfed4d"},
]
xgboost = [
{file = "xgboost-1.6.1-py3-none-macosx_10_15_x86_64.macosx_11_0_x86_64.macosx_12_0_x86_64.whl", hash = "sha256:2b3d4ee105f8434873b40edc511330b8276bf3a8d9d42fb0319973079df30b07"},
{file = "xgboost-1.6.1-py3-none-macosx_12_0_arm64.whl", hash = "sha256:bd3e59a5490e010004106d8ea1d07aa8e048be51a0974fca6b4f00988f087ab8"},
{file = "xgboost-1.6.1-py3-none-manylinux2014_aarch64.whl", hash = "sha256:bbf16af8bf72e8761fcf69fdb5798bd5add6ecb48049198551b13c1d7abeabb5"},
{file = "xgboost-1.6.1-py3-none-manylinux2014_x86_64.whl", hash = "sha256:6207c77f611b54d9f056edede819ead03f0235615675f88030ff9fe10d359551"},
{file = "xgboost-1.6.1-py3-none-win_amd64.whl", hash = "sha256:3adcb7e4ccf774d5e0128c01e5c381303c3799910ab0f2e996160fe3cd23b7fc"},
{file = "xgboost-1.6.1.tar.gz", hash = "sha256:24072028656f3428e7b8aabf77340ece057f273e41f7f85d67ccaefb7454bb18"},
]
zipp = [
{file = "zipp-3.8.1-py3-none-any.whl", hash = "sha256:47c40d7fe183a6f21403a199b3e4192cca5774656965b0a4988ad2f8feb5f009"},
{file = "zipp-3.8.1.tar.gz", hash = "sha256:05b45f1ee8f807d0cc928485ca40a07cb491cf092ff587c0df9cb1fd154848d2"},
]
| darthtrevino | ead8d47102f0ac6db51d84432874c331fb84f3cb | 2f2463be424093d720ebfa9cc625a06160a44cfe | Are the changes to lib versions in poetry.lock due to splitting the tests into groups, or are they unrelated changes? | emrekiciman | 335 |
py-why/dowhy | 615 | CI: Split tests into groups to speed them up | * Mark some slow tests as advanced
* Split tests into 4 shards in CI | null | 2022-08-25 22:09:18+00:00 | 2022-09-01 22:51:03+00:00 | poetry.lock | [[package]]
name = "absl-py"
version = "1.2.0"
description = "Abseil Python Common Libraries, see https://github.com/abseil/abseil-py."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "alabaster"
version = "0.7.12"
description = "A configurable sidebar-enabled Sphinx theme"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "appnope"
version = "0.1.3"
description = "Disable App Nap on macOS >= 10.9"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "argon2-cffi"
version = "21.3.0"
description = "The secure Argon2 password hashing algorithm."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
argon2-cffi-bindings = "*"
[package.extras]
dev = ["pre-commit", "cogapp", "tomli", "coverage[toml] (>=5.0.2)", "hypothesis", "pytest", "sphinx", "sphinx-notfound-page", "furo"]
docs = ["sphinx", "sphinx-notfound-page", "furo"]
tests = ["coverage[toml] (>=5.0.2)", "hypothesis", "pytest"]
[[package]]
name = "argon2-cffi-bindings"
version = "21.2.0"
description = "Low-level CFFI bindings for Argon2"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cffi = ">=1.0.1"
[package.extras]
dev = ["pytest", "cogapp", "pre-commit", "wheel"]
tests = ["pytest"]
[[package]]
name = "asttokens"
version = "2.0.8"
description = "Annotate AST trees with source code positions"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
six = "*"
[package.extras]
test = ["pytest", "astroid (<=2.5.3)"]
[[package]]
name = "astunparse"
version = "1.6.3"
description = "An AST unparser for Python"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
six = ">=1.6.1,<2.0"
[[package]]
name = "atomicwrites"
version = "1.4.1"
description = "Atomic file writes."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "attrs"
version = "22.1.0"
description = "Classes Without Boilerplate"
category = "dev"
optional = false
python-versions = ">=3.5"
[package.extras]
dev = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "mypy (>=0.900,!=0.940)", "pytest-mypy-plugins", "zope.interface", "furo", "sphinx", "sphinx-notfound-page", "pre-commit", "cloudpickle"]
docs = ["furo", "sphinx", "zope.interface", "sphinx-notfound-page"]
tests = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "mypy (>=0.900,!=0.940)", "pytest-mypy-plugins", "zope.interface", "cloudpickle"]
tests_no_zope = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "mypy (>=0.900,!=0.940)", "pytest-mypy-plugins", "cloudpickle"]
[[package]]
name = "babel"
version = "2.10.3"
description = "Internationalization utilities"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
pytz = ">=2015.7"
[[package]]
name = "backcall"
version = "0.2.0"
description = "Specifications for callback functions passed in to an API"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "backports.zoneinfo"
version = "0.2.1"
description = "Backport of the standard library zoneinfo module"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
tzdata = ["tzdata"]
[[package]]
name = "beautifulsoup4"
version = "4.11.1"
description = "Screen-scraping library"
category = "dev"
optional = false
python-versions = ">=3.6.0"
[package.dependencies]
soupsieve = ">1.2"
[package.extras]
html5lib = ["html5lib"]
lxml = ["lxml"]
[[package]]
name = "black"
version = "22.6.0"
description = "The uncompromising code formatter."
category = "dev"
optional = false
python-versions = ">=3.6.2"
[package.dependencies]
click = ">=8.0.0"
ipython = {version = ">=7.8.0", optional = true, markers = "extra == \"jupyter\""}
mypy-extensions = ">=0.4.3"
pathspec = ">=0.9.0"
platformdirs = ">=2"
tokenize-rt = {version = ">=3.2.0", optional = true, markers = "extra == \"jupyter\""}
tomli = {version = ">=1.1.0", markers = "python_full_version < \"3.11.0a7\""}
typing-extensions = {version = ">=3.10.0.0", markers = "python_version < \"3.10\""}
[package.extras]
colorama = ["colorama (>=0.4.3)"]
d = ["aiohttp (>=3.7.4)"]
jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"]
uvloop = ["uvloop (>=0.15.2)"]
[[package]]
name = "bleach"
version = "5.0.1"
description = "An easy safelist-based HTML-sanitizing tool."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
six = ">=1.9.0"
webencodings = "*"
[package.extras]
css = ["tinycss2 (>=1.1.0,<1.2)"]
dev = ["build (==0.8.0)", "flake8 (==4.0.1)", "hashin (==0.17.0)", "pip-tools (==6.6.2)", "pytest (==7.1.2)", "Sphinx (==4.3.2)", "tox (==3.25.0)", "twine (==4.0.1)", "wheel (==0.37.1)", "black (==22.3.0)", "mypy (==0.961)"]
[[package]]
name = "cachetools"
version = "5.2.0"
description = "Extensible memoizing collections and decorators"
category = "dev"
optional = false
python-versions = "~=3.7"
[[package]]
name = "causalml"
version = "0.12.3"
description = "Python Package for Uplift Modeling and Causal Inference with Machine Learning Algorithms"
category = "main"
optional = true
python-versions = ">=3.6"
[package.dependencies]
Cython = ">=0.28.0"
dill = "*"
graphviz = "*"
lightgbm = "*"
matplotlib = "*"
numpy = ">=1.18.5"
packaging = "*"
pandas = ">=0.24.1"
pydotplus = "*"
pygam = "*"
pyro-ppl = "*"
scikit-learn = ">=0.22.0"
scipy = ">=1.4.1"
seaborn = "*"
shap = "*"
statsmodels = ">=0.9.0"
torch = "*"
tqdm = "*"
xgboost = "*"
[package.extras]
tf = ["tensorflow (>=2.4.0)"]
[[package]]
name = "certifi"
version = "2022.6.15"
description = "Python package for providing Mozilla's CA Bundle."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "cffi"
version = "1.15.1"
description = "Foreign Function Interface for Python calling C code."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
pycparser = "*"
[[package]]
name = "charset-normalizer"
version = "2.1.0"
description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet."
category = "dev"
optional = false
python-versions = ">=3.6.0"
[package.extras]
unicode_backport = ["unicodedata2"]
[[package]]
name = "click"
version = "8.1.3"
description = "Composable command line interface toolkit"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
colorama = {version = "*", markers = "platform_system == \"Windows\""}
[[package]]
name = "cloudpickle"
version = "2.1.0"
description = "Extended pickling support for Python objects"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "colorama"
version = "0.4.5"
description = "Cross-platform colored terminal text."
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "commonmark"
version = "0.9.1"
description = "Python parser for the CommonMark Markdown spec"
category = "dev"
optional = false
python-versions = "*"
[package.extras]
test = ["hypothesis (==3.55.3)", "flake8 (==3.7.8)"]
[[package]]
name = "cryptography"
version = "37.0.4"
description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cffi = ">=1.12"
[package.extras]
docs = ["sphinx (>=1.6.5,!=1.8.0,!=3.1.0,!=3.1.1)", "sphinx-rtd-theme"]
docstest = ["pyenchant (>=1.6.11)", "twine (>=1.12.0)", "sphinxcontrib-spelling (>=4.0.1)"]
pep8test = ["black", "flake8", "flake8-import-order", "pep8-naming"]
sdist = ["setuptools_rust (>=0.11.4)"]
ssh = ["bcrypt (>=3.1.5)"]
test = ["pytest (>=6.2.0)", "pytest-benchmark", "pytest-cov", "pytest-subtests", "pytest-xdist", "pretend", "iso8601", "pytz", "hypothesis (>=1.11.4,!=3.79.2)"]
[[package]]
name = "cycler"
version = "0.11.0"
description = "Composable style cycles"
category = "main"
optional = true
python-versions = ">=3.6"
[[package]]
name = "cython"
version = "0.29.32"
description = "The Cython compiler for writing C extensions for the Python language."
category = "main"
optional = true
python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "debugpy"
version = "1.6.3"
description = "An implementation of the Debug Adapter Protocol for Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "decorator"
version = "5.1.1"
description = "Decorators for Humans"
category = "dev"
optional = false
python-versions = ">=3.5"
[[package]]
name = "defusedxml"
version = "0.7.1"
description = "XML bomb protection for Python stdlib modules"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "dill"
version = "0.3.5.1"
description = "serialize all of python"
category = "main"
optional = true
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*, !=3.6.*"
[package.extras]
graph = ["objgraph (>=1.7.2)"]
[[package]]
name = "docutils"
version = "0.17.1"
description = "Docutils -- Python Documentation Utilities"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "econml"
version = "0.12.0"
description = "This package contains several methods for calculating Conditional Average Treatment Effects"
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
dowhy = "*"
joblib = ">=0.13.0"
lightgbm = "*"
numba = "!=0.42.1"
numpy = "*"
pandas = "*"
scikit-learn = ">0.22.0"
scipy = ">1.4.0"
shap = ">=0.38.1,<0.40.0"
sparse = "*"
statsmodels = ">=0.10"
[package.extras]
all = ["azure-cli", "keras (<2.4)", "tensorflow (>1.10,<2.3)", "matplotlib"]
automl = ["azure-cli"]
plt = ["graphviz", "matplotlib"]
tf = ["keras (<2.4)", "tensorflow (>1.10,<2.3)"]
[[package]]
name = "entrypoints"
version = "0.4"
description = "Discover and load entry points from installed packages."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "executing"
version = "0.10.0"
description = "Get the currently executing AST node of a frame, and other information"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "fastjsonschema"
version = "2.16.1"
description = "Fastest Python implementation of JSON schema"
category = "dev"
optional = false
python-versions = "*"
[package.extras]
devel = ["colorama", "jsonschema", "json-spec", "pylint", "pytest", "pytest-benchmark", "pytest-cache", "validictory"]
[[package]]
name = "flake8"
version = "4.0.1"
description = "the modular source code checker: pep8 pyflakes and co"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
mccabe = ">=0.6.0,<0.7.0"
pycodestyle = ">=2.8.0,<2.9.0"
pyflakes = ">=2.4.0,<2.5.0"
[[package]]
name = "flaky"
version = "3.7.0"
description = "Plugin for nose or pytest that automatically reruns flaky tests."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "flatbuffers"
version = "1.12"
description = "The FlatBuffers serialization format for Python"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "fonttools"
version = "4.36.0"
description = "Tools to manipulate font files"
category = "main"
optional = true
python-versions = ">=3.7"
[package.extras]
all = ["fs (>=2.2.0,<3)", "lxml (>=4.0,<5)", "zopfli (>=0.1.4)", "lz4 (>=1.7.4.2)", "matplotlib", "sympy", "skia-pathops (>=0.5.0)", "uharfbuzz (>=0.23.0)", "brotlicffi (>=0.8.0)", "scipy", "brotli (>=1.0.1)", "munkres", "unicodedata2 (>=14.0.0)", "xattr"]
graphite = ["lz4 (>=1.7.4.2)"]
interpolatable = ["scipy", "munkres"]
lxml = ["lxml (>=4.0,<5)"]
pathops = ["skia-pathops (>=0.5.0)"]
plot = ["matplotlib"]
repacker = ["uharfbuzz (>=0.23.0)"]
symfont = ["sympy"]
type1 = ["xattr"]
ufo = ["fs (>=2.2.0,<3)"]
unicode = ["unicodedata2 (>=14.0.0)"]
woff = ["zopfli (>=0.1.4)", "brotlicffi (>=0.8.0)", "brotli (>=1.0.1)"]
[[package]]
name = "future"
version = "0.18.2"
description = "Clean single-source support for Python 3 and 2"
category = "main"
optional = true
python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "gast"
version = "0.4.0"
description = "Python AST that abstracts the underlying Python version"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "google-auth"
version = "2.10.0"
description = "Google Authentication Library"
category = "dev"
optional = false
python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*"
[package.dependencies]
cachetools = ">=2.0.0,<6.0"
pyasn1-modules = ">=0.2.1"
rsa = {version = ">=3.1.4,<5", markers = "python_version >= \"3.6\""}
six = ">=1.9.0"
[package.extras]
reauth = ["pyu2f (>=0.1.5)"]
pyopenssl = ["pyopenssl (>=20.0.0)"]
enterprise_cert = ["pyopenssl (==22.0.0)", "cryptography (==36.0.2)"]
aiohttp = ["aiohttp (>=3.6.2,<4.0.0dev)", "requests (>=2.20.0,<3.0.0dev)"]
[[package]]
name = "google-auth-oauthlib"
version = "0.4.6"
description = "Google Authentication Library"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
google-auth = ">=1.0.0"
requests-oauthlib = ">=0.7.0"
[package.extras]
tool = ["click (>=6.0.0)"]
[[package]]
name = "google-pasta"
version = "0.2.0"
description = "pasta is an AST-based Python refactoring library"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
six = "*"
[[package]]
name = "graphviz"
version = "0.20.1"
description = "Simple Python interface for Graphviz"
category = "main"
optional = true
python-versions = ">=3.7"
[package.extras]
dev = ["tox (>=3)", "flake8", "pep8-naming", "wheel", "twine"]
docs = ["sphinx (>=5)", "sphinx-autodoc-typehints", "sphinx-rtd-theme"]
test = ["pytest (>=7)", "pytest-mock (>=3)", "mock (>=4)", "pytest-cov", "coverage"]
[[package]]
name = "grpcio"
version = "1.48.0"
description = "HTTP/2-based RPC framework"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
six = ">=1.5.2"
[package.extras]
protobuf = ["grpcio-tools (>=1.48.0)"]
[[package]]
name = "h5py"
version = "3.7.0"
description = "Read and write HDF5 files from Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
numpy = ">=1.14.5"
[[package]]
name = "idna"
version = "3.3"
description = "Internationalized Domain Names in Applications (IDNA)"
category = "dev"
optional = false
python-versions = ">=3.5"
[[package]]
name = "imagesize"
version = "1.4.1"
description = "Getting image size from png/jpeg/jpeg2000/gif file"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "importlib-metadata"
version = "4.12.0"
description = "Read metadata from Python packages"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
zipp = ">=0.5"
[package.extras]
docs = ["sphinx", "jaraco.packaging (>=9)", "rst.linker (>=1.9)"]
perf = ["ipython"]
testing = ["pytest (>=6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest-cov", "pytest-enabler (>=1.3)", "packaging", "pyfakefs", "flufl.flake8", "pytest-perf (>=0.9.2)", "pytest-black (>=0.3.7)", "pytest-mypy (>=0.9.1)", "importlib-resources (>=1.3)"]
[[package]]
name = "importlib-resources"
version = "5.9.0"
description = "Read resources from Python packages"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
zipp = {version = ">=3.1.0", markers = "python_version < \"3.10\""}
[package.extras]
docs = ["sphinx", "jaraco.packaging (>=9)", "rst.linker (>=1.9)", "jaraco.tidelift (>=1.4)"]
testing = ["pytest (>=6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-black (>=0.3.7)", "pytest-mypy (>=0.9.1)"]
[[package]]
name = "iniconfig"
version = "1.1.1"
description = "iniconfig: brain-dead simple config-ini parsing"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "ipykernel"
version = "6.15.1"
description = "IPython Kernel for Jupyter"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
appnope = {version = "*", markers = "platform_system == \"Darwin\""}
debugpy = ">=1.0"
ipython = ">=7.23.1"
jupyter-client = ">=6.1.12"
matplotlib-inline = ">=0.1"
nest-asyncio = "*"
packaging = "*"
psutil = "*"
pyzmq = ">=17"
tornado = ">=6.1"
traitlets = ">=5.1.0"
[package.extras]
test = ["flaky", "ipyparallel", "pre-commit", "pytest-cov", "pytest-timeout", "pytest (>=6.0)"]
[[package]]
name = "ipython"
version = "8.4.0"
description = "IPython: Productive Interactive Computing"
category = "dev"
optional = false
python-versions = ">=3.8"
[package.dependencies]
appnope = {version = "*", markers = "sys_platform == \"darwin\""}
backcall = "*"
colorama = {version = "*", markers = "sys_platform == \"win32\""}
decorator = "*"
jedi = ">=0.16"
matplotlib-inline = "*"
pexpect = {version = ">4.3", markers = "sys_platform != \"win32\""}
pickleshare = "*"
prompt-toolkit = ">=2.0.0,<3.0.0 || >3.0.0,<3.0.1 || >3.0.1,<3.1.0"
pygments = ">=2.4.0"
stack-data = "*"
traitlets = ">=5"
[package.extras]
all = ["black", "Sphinx (>=1.3)", "ipykernel", "nbconvert", "nbformat", "ipywidgets", "notebook", "ipyparallel", "qtconsole", "pytest (<7.1)", "pytest-asyncio", "testpath", "curio", "matplotlib (!=3.2.0)", "numpy (>=1.19)", "pandas", "trio"]
black = ["black"]
doc = ["Sphinx (>=1.3)"]
kernel = ["ipykernel"]
nbconvert = ["nbconvert"]
nbformat = ["nbformat"]
notebook = ["ipywidgets", "notebook"]
parallel = ["ipyparallel"]
qtconsole = ["qtconsole"]
test = ["pytest (<7.1)", "pytest-asyncio", "testpath"]
test_extra = ["pytest (<7.1)", "pytest-asyncio", "testpath", "curio", "matplotlib (!=3.2.0)", "nbformat", "numpy (>=1.19)", "pandas", "trio"]
[[package]]
name = "ipython-genutils"
version = "0.2.0"
description = "Vestigial utilities from IPython"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "ipywidgets"
version = "8.0.1"
description = "Jupyter interactive widgets"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
ipykernel = ">=4.5.1"
ipython = ">=6.1.0"
jupyterlab-widgets = ">=3.0,<4.0"
traitlets = ">=4.3.1"
widgetsnbextension = ">=4.0,<5.0"
[package.extras]
test = ["jsonschema", "pytest (>=3.6.0)", "pytest-cov", "pytz"]
[[package]]
name = "isort"
version = "5.10.1"
description = "A Python utility / library to sort Python imports."
category = "dev"
optional = false
python-versions = ">=3.6.1,<4.0"
[package.extras]
pipfile_deprecated_finder = ["pipreqs", "requirementslib"]
requirements_deprecated_finder = ["pipreqs", "pip-api"]
colors = ["colorama (>=0.4.3,<0.5.0)"]
plugins = ["setuptools"]
[[package]]
name = "jedi"
version = "0.18.1"
description = "An autocompletion tool for Python that can be used for text editors."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
parso = ">=0.8.0,<0.9.0"
[package.extras]
qa = ["flake8 (==3.8.3)", "mypy (==0.782)"]
testing = ["Django (<3.1)", "colorama", "docopt", "pytest (<7.0.0)"]
[[package]]
name = "jeepney"
version = "0.8.0"
description = "Low-level, pure Python DBus protocol wrapper."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
trio = ["async-generator", "trio"]
test = ["async-timeout", "trio", "testpath", "pytest-asyncio (>=0.17)", "pytest-trio", "pytest"]
[[package]]
name = "jinja2"
version = "3.1.2"
description = "A very fast and expressive template engine."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
MarkupSafe = ">=2.0"
[package.extras]
i18n = ["Babel (>=2.7)"]
[[package]]
name = "joblib"
version = "1.1.0"
description = "Lightweight pipelining with Python functions"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "jsonschema"
version = "4.13.0"
description = "An implementation of JSON Schema validation for Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
attrs = ">=17.4.0"
importlib-resources = {version = ">=1.4.0", markers = "python_version < \"3.9\""}
pkgutil-resolve-name = {version = ">=1.3.10", markers = "python_version < \"3.9\""}
pyrsistent = ">=0.14.0,<0.17.0 || >0.17.0,<0.17.1 || >0.17.1,<0.17.2 || >0.17.2"
[package.extras]
format-nongpl = ["webcolors (>=1.11)", "uri-template", "rfc3986-validator (>0.1.0)", "rfc3339-validator", "jsonpointer (>1.13)", "isoduration", "idna", "fqdn"]
format = ["webcolors (>=1.11)", "uri-template", "rfc3987", "rfc3339-validator", "jsonpointer (>1.13)", "isoduration", "idna", "fqdn"]
[[package]]
name = "jupyter"
version = "1.0.0"
description = "Jupyter metapackage. Install all the Jupyter components in one go."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
ipykernel = "*"
ipywidgets = "*"
jupyter-console = "*"
nbconvert = "*"
notebook = "*"
qtconsole = "*"
[[package]]
name = "jupyter-client"
version = "7.3.4"
description = "Jupyter protocol implementation and client libraries"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
entrypoints = "*"
jupyter-core = ">=4.9.2"
nest-asyncio = ">=1.5.4"
python-dateutil = ">=2.8.2"
pyzmq = ">=23.0"
tornado = ">=6.0"
traitlets = "*"
[package.extras]
doc = ["ipykernel", "myst-parser", "sphinx-rtd-theme", "sphinx (>=1.3.6)", "sphinxcontrib-github-alt"]
test = ["codecov", "coverage", "ipykernel (>=6.5)", "ipython", "mypy", "pre-commit", "pytest", "pytest-asyncio (>=0.18)", "pytest-cov", "pytest-timeout"]
[[package]]
name = "jupyter-console"
version = "6.4.4"
description = "Jupyter terminal console"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
ipykernel = "*"
ipython = "*"
jupyter-client = ">=7.0.0"
prompt-toolkit = ">=2.0.0,<3.0.0 || >3.0.0,<3.0.1 || >3.0.1,<3.1.0"
pygments = "*"
[package.extras]
test = ["pexpect"]
[[package]]
name = "jupyter-core"
version = "4.11.1"
description = "Jupyter core package. A base package on which Jupyter projects rely."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
pywin32 = {version = ">=1.0", markers = "sys_platform == \"win32\" and platform_python_implementation != \"PyPy\""}
traitlets = "*"
[package.extras]
test = ["ipykernel", "pre-commit", "pytest", "pytest-cov", "pytest-timeout"]
[[package]]
name = "jupyterlab-pygments"
version = "0.2.2"
description = "Pygments theme using JupyterLab CSS variables"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "jupyterlab-widgets"
version = "3.0.2"
description = "Jupyter interactive widgets for JupyterLab"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "keras"
version = "2.9.0"
description = "Deep learning for humans."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "keras-preprocessing"
version = "1.1.2"
description = "Easy data preprocessing and data augmentation for deep learning models"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
numpy = ">=1.9.1"
six = ">=1.9.0"
[package.extras]
tests = ["pytest-cov", "pytest-xdist", "pytest", "keras", "tensorflow", "pillow", "pandas"]
pep8 = ["flake8"]
image = ["Pillow (>=5.2.0)", "scipy (>=0.14)"]
[[package]]
name = "keyring"
version = "23.8.2"
description = "Store and access your passwords safely."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
importlib-metadata = {version = ">=3.6", markers = "python_version < \"3.10\""}
jeepney = {version = ">=0.4.2", markers = "sys_platform == \"linux\""}
pywin32-ctypes = {version = "<0.1.0 || >0.1.0,<0.1.1 || >0.1.1", markers = "sys_platform == \"win32\""}
SecretStorage = {version = ">=3.2", markers = "sys_platform == \"linux\""}
[package.extras]
testing = ["pytest-mypy (>=0.9.1)", "pytest-black (>=0.3.7)", "pytest-enabler (>=1.3)", "pytest-cov", "flake8 (<5)", "pytest-flake8", "pytest-checkdocs (>=2.4)", "pytest (>=6)"]
docs = ["jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "jaraco.packaging (>=9)", "sphinx"]
[[package]]
name = "kiwisolver"
version = "1.4.4"
description = "A fast implementation of the Cassowary constraint solver"
category = "main"
optional = true
python-versions = ">=3.7"
[[package]]
name = "libclang"
version = "14.0.6"
description = "Clang Python Bindings, mirrored from the official LLVM repo: https://github.com/llvm/llvm-project/tree/main/clang/bindings/python, to make the installation process easier."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "lightgbm"
version = "3.3.2"
description = "LightGBM Python Package"
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
numpy = "*"
scikit-learn = "!=0.22.0"
scipy = "*"
[package.extras]
dask = ["dask[array] (>=2.0.0)", "dask[dataframe] (>=2.0.0)", "dask[distributed] (>=2.0.0)", "pandas"]
[[package]]
name = "llvmlite"
version = "0.36.0"
description = "lightweight wrapper around basic LLVM functionality"
category = "main"
optional = false
python-versions = ">=3.6,<3.10"
[[package]]
name = "markdown"
version = "3.4.1"
description = "Python implementation of Markdown."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
importlib-metadata = {version = ">=4.4", markers = "python_version < \"3.10\""}
[package.extras]
testing = ["coverage", "pyyaml"]
[[package]]
name = "markupsafe"
version = "2.1.1"
description = "Safely add untrusted strings to HTML/XML markup."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "matplotlib"
version = "3.5.3"
description = "Python plotting package"
category = "main"
optional = true
python-versions = ">=3.7"
[package.dependencies]
cycler = ">=0.10"
fonttools = ">=4.22.0"
kiwisolver = ">=1.0.1"
numpy = ">=1.17"
packaging = ">=20.0"
pillow = ">=6.2.0"
pyparsing = ">=2.2.1"
python-dateutil = ">=2.7"
setuptools_scm = ">=4,<7"
[[package]]
name = "matplotlib-inline"
version = "0.1.6"
description = "Inline Matplotlib backend for Jupyter"
category = "dev"
optional = false
python-versions = ">=3.5"
[package.dependencies]
traitlets = "*"
[[package]]
name = "mccabe"
version = "0.6.1"
description = "McCabe checker, plugin for flake8"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "mistune"
version = "2.0.4"
description = "A sane Markdown parser with useful plugins and renderers"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "mpmath"
version = "1.2.1"
description = "Python library for arbitrary-precision floating-point arithmetic"
category = "main"
optional = false
python-versions = "*"
[package.extras]
tests = ["pytest (>=4.6)"]
develop = ["wheel", "codecov", "pytest-cov", "pycodestyle", "pytest (>=4.6)"]
[[package]]
name = "mypy"
version = "0.971"
description = "Optional static typing for Python"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
mypy-extensions = ">=0.4.3"
tomli = {version = ">=1.1.0", markers = "python_version < \"3.11\""}
typing-extensions = ">=3.10"
[package.extras]
dmypy = ["psutil (>=4.0)"]
python2 = ["typed-ast (>=1.4.0,<2)"]
reports = ["lxml"]
[[package]]
name = "mypy-extensions"
version = "0.4.3"
description = "Experimental type system extensions for programs checked with the mypy typechecker."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "nbclient"
version = "0.6.6"
description = "A client library for executing notebooks. Formerly nbconvert's ExecutePreprocessor."
category = "dev"
optional = false
python-versions = ">=3.7.0"
[package.dependencies]
jupyter-client = ">=6.1.5"
nbformat = ">=5.0"
nest-asyncio = "*"
traitlets = ">=5.2.2"
[package.extras]
sphinx = ["autodoc-traits", "mock", "moto", "myst-parser", "Sphinx (>=1.7)", "sphinx-book-theme"]
test = ["black", "check-manifest", "flake8", "ipykernel", "ipython (<8.0.0)", "ipywidgets (<8.0.0)", "mypy", "pip (>=18.1)", "pre-commit", "pytest (>=4.1)", "pytest-asyncio", "pytest-cov (>=2.6.1)", "setuptools (>=60.0)", "testpath", "twine (>=1.11.0)", "xmltodict"]
[[package]]
name = "nbconvert"
version = "7.0.0rc3"
description = "Converting Jupyter Notebooks"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
beautifulsoup4 = "*"
bleach = "*"
defusedxml = "*"
importlib-metadata = {version = ">=3.6", markers = "python_version < \"3.10\""}
jinja2 = ">=3.0"
jupyter-core = ">=4.7"
jupyterlab-pygments = "*"
markupsafe = ">=2.0"
mistune = ">=2.0.2,<3"
nbclient = ">=0.5.0"
nbformat = ">=5.1"
packaging = "*"
pandocfilters = ">=1.4.1"
pygments = ">=2.4.1"
tinycss2 = "*"
traitlets = ">=5.0"
[package.extras]
all = ["ipykernel", "ipython", "ipywidgets (>=7)", "nbsphinx (>=0.2.12)", "pre-commit", "pyppeteer (>=1,<1.1)", "pytest", "pytest-cov", "pytest-dependency", "sphinx-rtd-theme", "sphinx (>=1.5.1)", "tornado (>=6.1)"]
docs = ["ipython", "nbsphinx (>=0.2.12)", "sphinx-rtd-theme", "sphinx (>=1.5.1)"]
serve = ["tornado (>=6.1)"]
test = ["ipykernel", "ipywidgets (>=7)", "pre-commit", "pyppeteer (>=1,<1.1)", "pytest", "pytest-cov", "pytest-dependency"]
webpdf = ["pyppeteer (>=1,<1.1)"]
[[package]]
name = "nbformat"
version = "5.4.0"
description = "The Jupyter Notebook format"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
fastjsonschema = "*"
jsonschema = ">=2.6"
jupyter-core = "*"
traitlets = ">=5.1"
[package.extras]
test = ["check-manifest", "testpath", "pytest", "pre-commit"]
[[package]]
name = "nbsphinx"
version = "0.8.9"
description = "Jupyter Notebook Tools for Sphinx"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
docutils = "*"
jinja2 = "*"
nbconvert = "!=5.4"
nbformat = "*"
sphinx = ">=1.8"
traitlets = ">=5"
[[package]]
name = "nest-asyncio"
version = "1.5.5"
description = "Patch asyncio to allow nested event loops"
category = "dev"
optional = false
python-versions = ">=3.5"
[[package]]
name = "networkx"
version = "2.8.5"
description = "Python package for creating and manipulating graphs and networks"
category = "main"
optional = false
python-versions = ">=3.8"
[package.extras]
default = ["numpy (>=1.19)", "scipy (>=1.8)", "matplotlib (>=3.4)", "pandas (>=1.3)"]
developer = ["pre-commit (>=2.19)", "mypy (>=0.960)"]
doc = ["sphinx (>=5)", "pydata-sphinx-theme (>=0.9)", "sphinx-gallery (>=0.10)", "numpydoc (>=1.4)", "pillow (>=9.1)", "nb2plots (>=0.6)", "texext (>=0.6.6)"]
extra = ["lxml (>=4.6)", "pygraphviz (>=1.9)", "pydot (>=1.4.2)", "sympy (>=1.10)"]
test = ["pytest (>=7.1)", "pytest-cov (>=3.0)", "codecov (>=2.1)"]
[[package]]
name = "notebook"
version = "6.4.12"
description = "A web-based notebook environment for interactive computing"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
argon2-cffi = "*"
ipykernel = "*"
ipython-genutils = "*"
jinja2 = "*"
jupyter-client = ">=5.3.4"
jupyter-core = ">=4.6.1"
nbconvert = ">=5"
nbformat = "*"
nest-asyncio = ">=1.5"
prometheus-client = "*"
pyzmq = ">=17"
Send2Trash = ">=1.8.0"
terminado = ">=0.8.3"
tornado = ">=6.1"
traitlets = ">=4.2.1"
[package.extras]
docs = ["sphinx", "nbsphinx", "sphinxcontrib-github-alt", "sphinx-rtd-theme", "myst-parser"]
json-logging = ["json-logging"]
test = ["pytest", "coverage", "requests", "testpath", "nbval", "selenium", "pytest-cov", "requests-unixsocket"]
[[package]]
name = "numba"
version = "0.53.1"
description = "compiling Python code using LLVM"
category = "main"
optional = false
python-versions = ">=3.6,<3.10"
[package.dependencies]
llvmlite = ">=0.36.0rc1,<0.37"
numpy = ">=1.15"
[[package]]
name = "numpy"
version = "1.23.2"
description = "NumPy is the fundamental package for array computing with Python."
category = "main"
optional = false
python-versions = ">=3.8"
[[package]]
name = "oauthlib"
version = "3.2.0"
description = "A generic, spec-compliant, thorough implementation of the OAuth request-signing logic"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
rsa = ["cryptography (>=3.0.0)"]
signals = ["blinker (>=1.4.0)"]
signedtoken = ["cryptography (>=3.0.0)", "pyjwt (>=2.0.0,<3)"]
[[package]]
name = "opt-einsum"
version = "3.3.0"
description = "Optimizing numpys einsum function"
category = "main"
optional = false
python-versions = ">=3.5"
[package.dependencies]
numpy = ">=1.7"
[package.extras]
docs = ["sphinx (==1.2.3)", "sphinxcontrib-napoleon", "sphinx-rtd-theme", "numpydoc"]
tests = ["pytest", "pytest-cov", "pytest-pep8"]
[[package]]
name = "packaging"
version = "21.3"
description = "Core utilities for Python packages"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
pyparsing = ">=2.0.2,<3.0.5 || >3.0.5"
[[package]]
name = "pandas"
version = "1.4.3"
description = "Powerful data structures for data analysis, time series, and statistics"
category = "main"
optional = false
python-versions = ">=3.8"
[package.dependencies]
numpy = [
{version = ">=1.18.5", markers = "platform_machine != \"aarch64\" and platform_machine != \"arm64\" and python_version < \"3.10\""},
{version = ">=1.19.2", markers = "platform_machine == \"aarch64\" and python_version < \"3.10\""},
{version = ">=1.20.0", markers = "platform_machine == \"arm64\" and python_version < \"3.10\""},
]
python-dateutil = ">=2.8.1"
pytz = ">=2020.1"
[package.extras]
test = ["hypothesis (>=5.5.3)", "pytest (>=6.0)", "pytest-xdist (>=1.31)"]
[[package]]
name = "pandoc"
version = "2.2"
description = "Pandoc Documents for Python"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
plumbum = "*"
ply = "*"
[[package]]
name = "pandocfilters"
version = "1.5.0"
description = "Utilities for writing pandoc filters in python"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "parso"
version = "0.8.3"
description = "A Python Parser"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
qa = ["flake8 (==3.8.3)", "mypy (==0.782)"]
testing = ["docopt", "pytest (<6.0.0)"]
[[package]]
name = "pastel"
version = "0.2.1"
description = "Bring colors to your terminal."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pathspec"
version = "0.9.0"
description = "Utility library for gitignore style pattern matching of file paths."
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7"
[[package]]
name = "patsy"
version = "0.5.2"
description = "A Python package for describing statistical models and for building design matrices."
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
numpy = ">=1.4"
six = "*"
[package.extras]
test = ["scipy", "pytest-cov", "pytest"]
[[package]]
name = "pexpect"
version = "4.8.0"
description = "Pexpect allows easy control of interactive console applications."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
ptyprocess = ">=0.5"
[[package]]
name = "pickleshare"
version = "0.7.5"
description = "Tiny 'shelve'-like database with concurrency support"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pillow"
version = "9.2.0"
description = "Python Imaging Library (Fork)"
category = "main"
optional = true
python-versions = ">=3.7"
[package.extras]
docs = ["furo", "olefile", "sphinx (>=2.4)", "sphinx-copybutton", "sphinx-issues (>=3.0.1)", "sphinx-removed-in", "sphinxext-opengraph"]
tests = ["check-manifest", "coverage", "defusedxml", "markdown2", "olefile", "packaging", "pyroma", "pytest", "pytest-cov", "pytest-timeout"]
[[package]]
name = "pkginfo"
version = "1.8.3"
description = "Query metadatdata from sdists / bdists / installed packages."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*"
[package.extras]
testing = ["nose", "coverage"]
[[package]]
name = "pkgutil-resolve-name"
version = "1.3.10"
description = "Resolve a name to an object."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "platformdirs"
version = "2.5.2"
description = "A small Python module for determining appropriate platform-specific dirs, e.g. a \"user data dir\"."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
docs = ["furo (>=2021.7.5b38)", "proselint (>=0.10.2)", "sphinx-autodoc-typehints (>=1.12)", "sphinx (>=4)"]
test = ["appdirs (==1.4.4)", "pytest-cov (>=2.7)", "pytest-mock (>=3.6)", "pytest (>=6)"]
[[package]]
name = "pluggy"
version = "1.0.0"
description = "plugin and hook calling mechanisms for python"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
testing = ["pytest-benchmark", "pytest"]
dev = ["tox", "pre-commit"]
[[package]]
name = "plumbum"
version = "1.7.2"
description = "Plumbum: shell combinators library"
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7"
[package.dependencies]
pywin32 = {version = "*", markers = "platform_system == \"Windows\" and platform_python_implementation != \"PyPy\""}
[package.extras]
dev = ["paramiko", "psutil", "pytest", "pytest-cov", "pytest-mock", "pytest-timeout"]
docs = ["Sphinx (>=3.0.0)", "sphinx-rtd-theme (>=0.5.0)"]
ssh = ["paramiko"]
[[package]]
name = "ply"
version = "3.11"
description = "Python Lex & Yacc"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "poethepoet"
version = "0.16.0"
description = "A task runner that works well with poetry."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
pastel = ">=0.2.1,<0.3.0"
tomli = ">=1.2.2"
[package.extras]
poetry_plugin = ["poetry (>=1.0,<2.0)"]
[[package]]
name = "progressbar2"
version = "4.0.0"
description = "A Python Progressbar library to provide visual (yet text based) progress to long running operations."
category = "main"
optional = true
python-versions = ">=3.7.0"
[package.dependencies]
python-utils = ">=3.0.0"
[package.extras]
docs = ["sphinx (>=1.8.5)"]
tests = ["flake8 (>=3.7.7)", "pytest (>=4.6.9)", "pytest-cov (>=2.6.1)", "pytest-mypy", "freezegun (>=0.3.11)", "sphinx (>=1.8.5)"]
[[package]]
name = "prometheus-client"
version = "0.14.1"
description = "Python client for the Prometheus monitoring system."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
twisted = ["twisted"]
[[package]]
name = "prompt-toolkit"
version = "3.0.30"
description = "Library for building powerful interactive command lines in Python"
category = "dev"
optional = false
python-versions = ">=3.6.2"
[package.dependencies]
wcwidth = "*"
[[package]]
name = "protobuf"
version = "3.19.4"
description = "Protocol Buffers"
category = "dev"
optional = false
python-versions = ">=3.5"
[[package]]
name = "psutil"
version = "5.9.1"
description = "Cross-platform lib for process and system monitoring in Python."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.extras]
test = ["ipaddress", "mock", "enum34", "pywin32", "wmi"]
[[package]]
name = "ptyprocess"
version = "0.7.0"
description = "Run a subprocess in a pseudo terminal"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pure-eval"
version = "0.2.2"
description = "Safely evaluate AST nodes without side effects"
category = "dev"
optional = false
python-versions = "*"
[package.extras]
tests = ["pytest"]
[[package]]
name = "py"
version = "1.11.0"
description = "library with cross-python path, ini-parsing, io, code, log facilities"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "pyasn1"
version = "0.4.8"
description = "ASN.1 types and codecs"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pyasn1-modules"
version = "0.2.8"
description = "A collection of ASN.1-based protocols modules."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
pyasn1 = ">=0.4.6,<0.5.0"
[[package]]
name = "pycodestyle"
version = "2.8.0"
description = "Python style guide checker"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "pycparser"
version = "2.21"
description = "C parser in Python"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pydata-sphinx-theme"
version = "0.9.0"
description = "Bootstrap-based Sphinx theme from the PyData community"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
beautifulsoup4 = "*"
docutils = "!=0.17.0"
packaging = "*"
sphinx = ">=4.0.2"
[package.extras]
dev = ["pydata-sphinx-theme", "nox", "pre-commit", "pyyaml"]
coverage = ["pydata-sphinx-theme", "codecov", "pytest-cov"]
test = ["pydata-sphinx-theme", "pytest"]
doc = ["sphinx-design", "xarray", "numpy", "plotly", "jupyter-sphinx", "sphinx-sitemap", "sphinxext-rediraffe", "pytest-regressions", "pytest", "pandas", "myst-parser", "numpydoc"]
[[package]]
name = "pydot"
version = "1.4.2"
description = "Python interface to Graphviz's Dot"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.dependencies]
pyparsing = ">=2.1.4"
[[package]]
name = "pydotplus"
version = "2.0.2"
description = "Python interface to Graphviz's Dot language"
category = "main"
optional = true
python-versions = "*"
[package.dependencies]
pyparsing = ">=2.0.1"
[[package]]
name = "pyflakes"
version = "2.4.0"
description = "passive checker of Python programs"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pygam"
version = "0.8.0"
description = "GAM toolkit"
category = "main"
optional = true
python-versions = "*"
[package.dependencies]
future = "*"
numpy = "*"
progressbar2 = "*"
scipy = "*"
[[package]]
name = "pygments"
version = "2.13.0"
description = "Pygments is a syntax highlighting package written in Python."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
plugins = ["importlib-metadata"]
[[package]]
name = "pygraphviz"
version = "1.10"
description = "Python interface to Graphviz"
category = "main"
optional = false
python-versions = ">=3.8"
[[package]]
name = "pyparsing"
version = "3.0.9"
description = "pyparsing module - Classes and methods to define and execute parsing grammars"
category = "main"
optional = false
python-versions = ">=3.6.8"
[package.extras]
diagrams = ["railroad-diagrams", "jinja2"]
[[package]]
name = "pyro-api"
version = "0.1.2"
description = "Generic API for dispatch to Pyro backends."
category = "main"
optional = true
python-versions = "*"
[package.extras]
test = ["pytest (>=5.0)", "flake8"]
dev = ["ipython", "sphinx-rtd-theme", "sphinx (>=2.0)"]
[[package]]
name = "pyro-ppl"
version = "1.8.1"
description = "A Python library for probabilistic modeling and inference"
category = "main"
optional = true
python-versions = ">=3.7"
[package.dependencies]
numpy = ">=1.7"
opt-einsum = ">=2.3.2"
pyro-api = ">=0.1.1"
torch = ">=1.11.0"
tqdm = ">=4.36"
[package.extras]
dev = ["jupyter (>=1.0.0)", "graphviz (>=0.8)", "matplotlib (>=1.3)", "torchvision (>=0.12.0)", "visdom (>=0.1.4)", "pandas", "pillow (==8.2.0)", "scikit-learn", "seaborn", "wget", "lap", "black (>=21.4b0)", "flake8", "isort (>=5.0)", "mypy (>=0.812)", "nbformat", "nbsphinx (>=0.3.2)", "nbstripout", "nbval", "ninja", "pypandoc", "pytest (>=5.0)", "pytest-xdist", "scipy (>=1.1)", "sphinx", "sphinx-rtd-theme", "yapf"]
extras = ["jupyter (>=1.0.0)", "graphviz (>=0.8)", "matplotlib (>=1.3)", "torchvision (>=0.12.0)", "visdom (>=0.1.4)", "pandas", "pillow (==8.2.0)", "scikit-learn", "seaborn", "wget", "lap"]
funsor = ["funsor[torch] (==0.4.3)"]
horovod = ["horovod[pytorch] (>=0.19)"]
profile = ["prettytable", "pytest-benchmark", "snakeviz"]
test = ["jupyter (>=1.0.0)", "graphviz (>=0.8)", "matplotlib (>=1.3)", "torchvision (>=0.12.0)", "visdom (>=0.1.4)", "pandas", "pillow (==8.2.0)", "scikit-learn", "seaborn", "wget", "lap", "black (>=21.4b0)", "flake8", "nbval", "pytest (>=5.0)", "pytest-cov", "scipy (>=1.1)"]
[[package]]
name = "pyrsistent"
version = "0.18.1"
description = "Persistent/Functional/Immutable data structures"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "pytest"
version = "7.1.2"
description = "pytest: simple powerful testing with Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
atomicwrites = {version = ">=1.0", markers = "sys_platform == \"win32\""}
attrs = ">=19.2.0"
colorama = {version = "*", markers = "sys_platform == \"win32\""}
iniconfig = "*"
packaging = "*"
pluggy = ">=0.12,<2.0"
py = ">=1.8.2"
tomli = ">=1.0.0"
[package.extras]
testing = ["argcomplete", "hypothesis (>=3.56)", "mock", "nose", "pygments (>=2.7.2)", "requests", "xmlschema"]
[[package]]
name = "python-dateutil"
version = "2.8.2"
description = "Extensions to the standard Python datetime module"
category = "main"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7"
[package.dependencies]
six = ">=1.5"
[[package]]
name = "python-utils"
version = "3.3.3"
description = "Python Utils is a module with some convenient utilities not included with the standard Python install"
category = "main"
optional = true
python-versions = ">3.6.0"
[package.extras]
docs = ["mock", "sphinx", "python-utils"]
loguru = ["loguru"]
tests = ["flake8", "pytest", "pytest-cov", "pytest-mypy", "pytest-asyncio", "sphinx", "types-setuptools", "loguru"]
[[package]]
name = "pytz"
version = "2022.2.1"
description = "World timezone definitions, modern and historical"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "pytz-deprecation-shim"
version = "0.1.0.post0"
description = "Shims to make deprecation of pytz easier"
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.7"
[package.dependencies]
"backports.zoneinfo" = {version = "*", markers = "python_version >= \"3.6\" and python_version < \"3.9\""}
tzdata = {version = "*", markers = "python_version >= \"3.6\""}
[[package]]
name = "pywin32"
version = "304"
description = "Python for Window Extensions"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pywin32-ctypes"
version = "0.2.0"
description = ""
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pywinpty"
version = "2.0.7"
description = "Pseudo terminal support for Windows from Python."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "pyzmq"
version = "23.2.1"
description = "Python bindings for 0MQ"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cffi = {version = "*", markers = "implementation_name == \"pypy\""}
py = {version = "*", markers = "implementation_name == \"pypy\""}
[[package]]
name = "qtconsole"
version = "5.3.1"
description = "Jupyter Qt console"
category = "dev"
optional = false
python-versions = ">= 3.7"
[package.dependencies]
ipykernel = ">=4.1"
ipython-genutils = "*"
jupyter-client = ">=4.1"
jupyter-core = "*"
pygments = "*"
pyzmq = ">=17.1"
qtpy = ">=2.0.1"
traitlets = "<5.2.1 || >5.2.1,<5.2.2 || >5.2.2"
[package.extras]
doc = ["Sphinx (>=1.3)"]
test = ["flaky", "pytest", "pytest-qt"]
[[package]]
name = "qtpy"
version = "2.2.0"
description = "Provides an abstraction layer on top of the various Qt bindings (PyQt5/6 and PySide2/6)."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
packaging = "*"
[package.extras]
test = ["pytest-qt", "pytest-cov (>=3.0.0)", "pytest (>=6,!=7.0.0,!=7.0.1)"]
[[package]]
name = "readme-renderer"
version = "36.0"
description = "readme_renderer is a library for rendering \"readme\" descriptions for Warehouse"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
bleach = ">=2.1.0"
docutils = ">=0.13.1"
Pygments = ">=2.5.1"
[package.extras]
md = ["cmarkgfm (>=0.8.0)"]
[[package]]
name = "requests"
version = "2.28.1"
description = "Python HTTP for Humans."
category = "dev"
optional = false
python-versions = ">=3.7, <4"
[package.dependencies]
certifi = ">=2017.4.17"
charset-normalizer = ">=2,<3"
idna = ">=2.5,<4"
urllib3 = ">=1.21.1,<1.27"
[package.extras]
socks = ["PySocks (>=1.5.6,!=1.5.7)"]
use_chardet_on_py3 = ["chardet (>=3.0.2,<6)"]
[[package]]
name = "requests-oauthlib"
version = "1.3.1"
description = "OAuthlib authentication support for Requests."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.dependencies]
oauthlib = ">=3.0.0"
requests = ">=2.0.0"
[package.extras]
rsa = ["oauthlib[signedtoken] (>=3.0.0)"]
[[package]]
name = "requests-toolbelt"
version = "0.9.1"
description = "A utility belt for advanced users of python-requests"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
requests = ">=2.0.1,<3.0.0"
[[package]]
name = "rfc3986"
version = "2.0.0"
description = "Validating URI References per RFC 3986"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
idna2008 = ["idna"]
[[package]]
name = "rich"
version = "12.5.1"
description = "Render rich text, tables, progress bars, syntax highlighting, markdown and more to the terminal"
category = "dev"
optional = false
python-versions = ">=3.6.3,<4.0.0"
[package.dependencies]
commonmark = ">=0.9.0,<0.10.0"
pygments = ">=2.6.0,<3.0.0"
typing-extensions = {version = ">=4.0.0,<5.0", markers = "python_version < \"3.9\""}
[package.extras]
jupyter = ["ipywidgets (>=7.5.1,<8.0.0)"]
[[package]]
name = "rpy2"
version = "3.5.3"
description = "Python interface to the R language (embedded R)"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
cffi = ">=1.10.0"
jinja2 = "*"
packaging = {version = "*", markers = "platform_system == \"Windows\""}
pytz = "*"
tzlocal = "*"
[package.extras]
all = ["numpy", "pytest", "setuptools", "pandas"]
numpy = ["pandas"]
pandas = ["numpy", "pandas"]
setup = ["setuptools"]
test = ["pytest"]
[[package]]
name = "rsa"
version = "4.9"
description = "Pure-Python RSA implementation"
category = "dev"
optional = false
python-versions = ">=3.6,<4"
[package.dependencies]
pyasn1 = ">=0.1.3"
[[package]]
name = "scikit-learn"
version = "1.0.2"
description = "A set of python modules for machine learning and data mining"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
joblib = ">=0.11"
numpy = ">=1.14.6"
scipy = ">=1.1.0"
threadpoolctl = ">=2.0.0"
[package.extras]
tests = ["pyamg (>=4.0.0)", "mypy (>=0.770)", "black (>=21.6b0)", "flake8 (>=3.8.2)", "pytest-cov (>=2.9.0)", "pytest (>=5.0.1)", "pandas (>=0.25.0)", "scikit-image (>=0.14.5)", "matplotlib (>=2.2.3)"]
examples = ["seaborn (>=0.9.0)", "pandas (>=0.25.0)", "scikit-image (>=0.14.5)", "matplotlib (>=2.2.3)"]
docs = ["sphinxext-opengraph (>=0.4.2)", "sphinx-prompt (>=1.3.0)", "Pillow (>=7.1.2)", "numpydoc (>=1.0.0)", "sphinx-gallery (>=0.7.0)", "sphinx (>=4.0.1)", "memory-profiler (>=0.57.0)", "seaborn (>=0.9.0)", "pandas (>=0.25.0)", "scikit-image (>=0.14.5)", "matplotlib (>=2.2.3)"]
benchmark = ["memory-profiler (>=0.57.0)", "pandas (>=0.25.0)", "matplotlib (>=2.2.3)"]
[[package]]
name = "scipy"
version = "1.9.0"
description = "SciPy: Scientific Library for Python"
category = "main"
optional = false
python-versions = ">=3.8,<3.12"
[package.dependencies]
numpy = ">=1.18.5,<1.25.0"
[[package]]
name = "seaborn"
version = "0.11.2"
description = "seaborn: statistical data visualization"
category = "main"
optional = true
python-versions = ">=3.6"
[package.dependencies]
matplotlib = ">=2.2"
numpy = ">=1.15"
pandas = ">=0.23"
scipy = ">=1.0"
[[package]]
name = "secretstorage"
version = "3.3.3"
description = "Python bindings to FreeDesktop.org Secret Service API"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cryptography = ">=2.0"
jeepney = ">=0.6"
[[package]]
name = "send2trash"
version = "1.8.0"
description = "Send file to trash natively under Mac OS X, Windows and Linux."
category = "dev"
optional = false
python-versions = "*"
[package.extras]
win32 = ["pywin32"]
objc = ["pyobjc-framework-cocoa"]
nativelib = ["pywin32", "pyobjc-framework-cocoa"]
[[package]]
name = "setuptools-scm"
version = "6.4.2"
description = "the blessed package to manage your versions by scm tags"
category = "main"
optional = true
python-versions = ">=3.6"
[package.dependencies]
packaging = ">=20.0"
tomli = ">=1.0.0"
[package.extras]
toml = ["setuptools (>=42)"]
test = ["virtualenv (>20)", "pytest (>=6.2)"]
[[package]]
name = "shap"
version = "0.39.0"
description = "A unified approach to explain the output of any machine learning model."
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
cloudpickle = "*"
numba = "*"
numpy = "*"
pandas = "*"
scikit-learn = "*"
scipy = "*"
slicer = "0.0.7"
tqdm = ">4.25.0"
[package.extras]
all = ["pyod", "pytest", "sphinx-rtd-theme", "sentencepiece", "transformers", "lightgbm", "torch", "numpydoc", "nbsphinx", "opencv-python", "ipython", "pytest-mpl", "matplotlib", "xgboost", "catboost", "sphinx", "pytest-cov", "pyspark", "lime"]
docs = ["matplotlib", "ipython", "numpydoc", "sphinx-rtd-theme", "sphinx", "nbsphinx"]
others = ["lime"]
plots = ["matplotlib", "ipython"]
test = ["pytest", "pytest-mpl", "pytest-cov", "xgboost", "lightgbm", "catboost", "pyspark", "pyod", "transformers", "torch", "sentencepiece", "opencv-python"]
[[package]]
name = "six"
version = "1.16.0"
description = "Python 2 and 3 compatibility utilities"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "slicer"
version = "0.0.7"
description = "A small package for big slicing."
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "snowballstemmer"
version = "2.2.0"
description = "This package provides 29 stemmers for 28 languages generated from Snowball algorithms."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "soupsieve"
version = "2.3.2.post1"
description = "A modern CSS selector implementation for Beautiful Soup."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "sparse"
version = "0.13.0"
description = "Sparse n-dimensional arrays"
category = "main"
optional = false
python-versions = ">=3.6, <4"
[package.dependencies]
numba = ">=0.49"
numpy = ">=1.17"
scipy = ">=0.19"
[package.extras]
all = ["dask", "pytest (>=3.5)", "pytest-black", "pytest-cov", "tox", "sphinx", "sphinx-rtd-theme"]
docs = ["sphinx", "sphinx-rtd-theme"]
tests = ["dask", "pytest (>=3.5)", "pytest-black", "pytest-cov"]
tox = ["dask", "pytest (>=3.5)", "pytest-black", "pytest-cov", "tox"]
[[package]]
name = "sphinx"
version = "5.1.1"
description = "Python documentation generator"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
alabaster = ">=0.7,<0.8"
babel = ">=1.3"
colorama = {version = ">=0.3.5", markers = "sys_platform == \"win32\""}
docutils = ">=0.14,<0.20"
imagesize = "*"
importlib-metadata = {version = ">=4.4", markers = "python_version < \"3.10\""}
Jinja2 = ">=2.3"
packaging = "*"
Pygments = ">=2.0"
requests = ">=2.5.0"
snowballstemmer = ">=1.1"
sphinxcontrib-applehelp = "*"
sphinxcontrib-devhelp = "*"
sphinxcontrib-htmlhelp = ">=2.0.0"
sphinxcontrib-jsmath = "*"
sphinxcontrib-qthelp = "*"
sphinxcontrib-serializinghtml = ">=1.1.5"
[package.extras]
docs = ["sphinxcontrib-websupport"]
lint = ["flake8 (>=3.5.0)", "flake8-comprehensions", "flake8-bugbear", "isort", "mypy (>=0.971)", "sphinx-lint", "docutils-stubs", "types-typed-ast", "types-requests"]
test = ["pytest (>=4.6)", "html5lib", "cython", "typed-ast"]
[[package]]
name = "sphinx-multiversion"
version = "0.2.4"
description = ""
category = "dev"
optional = false
python-versions = "*"
develop = false
[package.dependencies]
sphinx = ">=2.1"
[package.source]
type = "git"
url = "https://github.com/petergtz/sphinx-multiversion.git"
reference = "override-version-with-refname"
resolved_reference = "cd3a67e7998c5205e2456e7cb1026bd3b9f36253"
[[package]]
name = "sphinx-rtd-theme"
version = "1.0.0"
description = "Read the Docs theme for Sphinx"
category = "dev"
optional = false
python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*"
[package.dependencies]
docutils = "<0.18"
sphinx = ">=1.6"
[package.extras]
dev = ["bump2version", "sphinxcontrib-httpdomain", "transifex-client"]
[[package]]
name = "sphinxcontrib-applehelp"
version = "1.0.2"
description = "sphinxcontrib-applehelp is a sphinx extension which outputs Apple help books"
category = "dev"
optional = false
python-versions = ">=3.5"
[package.extras]
test = ["pytest"]
lint = ["docutils-stubs", "mypy", "flake8"]
[[package]]
name = "sphinxcontrib-devhelp"
version = "1.0.2"
description = "sphinxcontrib-devhelp is a sphinx extension which outputs Devhelp document."
category = "dev"
optional = false
python-versions = ">=3.5"
[package.extras]
test = ["pytest"]
lint = ["docutils-stubs", "mypy", "flake8"]
[[package]]
name = "sphinxcontrib-googleanalytics"
version = "0.2"
description = ""
category = "dev"
optional = false
python-versions = "*"
develop = false
[package.dependencies]
Sphinx = ">=0.6"
[package.source]
type = "git"
url = "https://github.com/petergtz/googleanalytics.git"
reference = "master"
resolved_reference = "f32bcb8c0fb74bf284652b9a5c154337e20058b9"
[[package]]
name = "sphinxcontrib-htmlhelp"
version = "2.0.0"
description = "sphinxcontrib-htmlhelp is a sphinx extension which renders HTML help files"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
test = ["html5lib", "pytest"]
lint = ["docutils-stubs", "mypy", "flake8"]
[[package]]
name = "sphinxcontrib-jsmath"
version = "1.0.1"
description = "A sphinx extension which renders display math in HTML via JavaScript"
category = "dev"
optional = false
python-versions = ">=3.5"
[package.extras]
test = ["mypy", "flake8", "pytest"]
[[package]]
name = "sphinxcontrib-qthelp"
version = "1.0.3"
description = "sphinxcontrib-qthelp is a sphinx extension which outputs QtHelp document."
category = "dev"
optional = false
python-versions = ">=3.5"
[package.extras]
test = ["pytest"]
lint = ["docutils-stubs", "mypy", "flake8"]
[[package]]
name = "sphinxcontrib-serializinghtml"
version = "1.1.5"
description = "sphinxcontrib-serializinghtml is a sphinx extension which outputs \"serialized\" HTML files (json and pickle)."
category = "dev"
optional = false
python-versions = ">=3.5"
[package.extras]
test = ["pytest"]
lint = ["docutils-stubs", "mypy", "flake8"]
[[package]]
name = "stack-data"
version = "0.4.0"
description = "Extract data from python stack frames and tracebacks for informative displays"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
asttokens = "*"
executing = "*"
pure-eval = "*"
[package.extras]
tests = ["cython", "littleutils", "pygments", "typeguard", "pytest"]
[[package]]
name = "statsmodels"
version = "0.13.2"
description = "Statistical computations and models for Python"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
numpy = ">=1.17"
packaging = ">=21.3"
pandas = ">=0.25"
patsy = ">=0.5.2"
scipy = ">=1.3"
[package.extras]
build = ["cython (>=0.29.26)"]
develop = ["cython (>=0.29.26)"]
docs = ["sphinx", "nbconvert", "jupyter-client", "ipykernel", "matplotlib", "nbformat", "numpydoc", "pandas-datareader"]
[[package]]
name = "sympy"
version = "1.10.1"
description = "Computer algebra system (CAS) in Python"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
mpmath = ">=0.19"
[[package]]
name = "tensorboard"
version = "2.9.1"
description = "TensorBoard lets you watch Tensors Flow"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
absl-py = ">=0.4"
google-auth = ">=1.6.3,<3"
google-auth-oauthlib = ">=0.4.1,<0.5"
grpcio = ">=1.24.3"
markdown = ">=2.6.8"
numpy = ">=1.12.0"
protobuf = ">=3.9.2,<3.20"
requests = ">=2.21.0,<3"
tensorboard-data-server = ">=0.6.0,<0.7.0"
tensorboard-plugin-wit = ">=1.6.0"
werkzeug = ">=1.0.1"
[[package]]
name = "tensorboard-data-server"
version = "0.6.1"
description = "Fast data loading for TensorBoard"
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "tensorboard-plugin-wit"
version = "1.8.1"
description = "What-If Tool TensorBoard plugin."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "tensorflow"
version = "2.9.1"
description = "TensorFlow is an open source machine learning framework for everyone."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
absl-py = ">=1.0.0"
astunparse = ">=1.6.0"
flatbuffers = ">=1.12,<2"
gast = ">=0.2.1,<=0.4.0"
google-pasta = ">=0.1.1"
grpcio = ">=1.24.3,<2.0"
h5py = ">=2.9.0"
keras = ">=2.9.0rc0,<2.10.0"
keras-preprocessing = ">=1.1.1"
libclang = ">=13.0.0"
numpy = ">=1.20"
opt-einsum = ">=2.3.2"
packaging = "*"
protobuf = ">=3.9.2,<3.20"
six = ">=1.12.0"
tensorboard = ">=2.9,<2.10"
tensorflow-estimator = ">=2.9.0rc0,<2.10.0"
tensorflow-io-gcs-filesystem = ">=0.23.1"
termcolor = ">=1.1.0"
typing-extensions = ">=3.6.6"
wrapt = ">=1.11.0"
[[package]]
name = "tensorflow-estimator"
version = "2.9.0"
description = "TensorFlow Estimator."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "tensorflow-io-gcs-filesystem"
version = "0.26.0"
description = "TensorFlow IO"
category = "dev"
optional = false
python-versions = ">=3.7, <3.11"
[package.extras]
tensorflow = ["tensorflow (>=2.9.0,<2.10.0)"]
tensorflow-aarch64 = ["tensorflow-aarch64 (>=2.9.0,<2.10.0)"]
tensorflow-cpu = ["tensorflow-cpu (>=2.9.0,<2.10.0)"]
tensorflow-gpu = ["tensorflow-gpu (>=2.9.0,<2.10.0)"]
tensorflow-rocm = ["tensorflow-rocm (>=2.9.0,<2.10.0)"]
[[package]]
name = "termcolor"
version = "1.1.0"
description = "ANSII Color formatting for output in terminal."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "terminado"
version = "0.15.0"
description = "Tornado websocket backend for the Xterm.js Javascript terminal emulator library."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
ptyprocess = {version = "*", markers = "os_name != \"nt\""}
pywinpty = {version = ">=1.1.0", markers = "os_name == \"nt\""}
tornado = ">=6.1.0"
[package.extras]
test = ["pre-commit", "pytest-timeout", "pytest (>=6.0)"]
[[package]]
name = "threadpoolctl"
version = "3.1.0"
description = "threadpoolctl"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "tinycss2"
version = "1.1.1"
description = "A tiny CSS parser"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
webencodings = ">=0.4"
[package.extras]
test = ["coverage", "pytest-isort", "pytest-flake8", "pytest-cov", "pytest"]
doc = ["sphinx-rtd-theme", "sphinx"]
[[package]]
name = "tokenize-rt"
version = "4.2.1"
description = "A wrapper around the stdlib `tokenize` which roundtrips."
category = "dev"
optional = false
python-versions = ">=3.6.1"
[[package]]
name = "tomli"
version = "2.0.1"
description = "A lil' TOML parser"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "torch"
version = "1.12.1"
description = "Tensors and Dynamic neural networks in Python with strong GPU acceleration"
category = "main"
optional = true
python-versions = ">=3.7.0"
[package.dependencies]
typing-extensions = "*"
[[package]]
name = "tornado"
version = "6.2"
description = "Tornado is a Python web framework and asynchronous networking library, originally developed at FriendFeed."
category = "dev"
optional = false
python-versions = ">= 3.7"
[[package]]
name = "tqdm"
version = "4.64.0"
description = "Fast, Extensible Progress Meter"
category = "main"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,>=2.7"
[package.dependencies]
colorama = {version = "*", markers = "platform_system == \"Windows\""}
[package.extras]
dev = ["py-make (>=0.1.0)", "twine", "wheel"]
notebook = ["ipywidgets (>=6)"]
slack = ["slack-sdk"]
telegram = ["requests"]
[[package]]
name = "traitlets"
version = "5.3.0"
description = ""
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
test = ["pre-commit", "pytest"]
[[package]]
name = "twine"
version = "4.0.1"
description = "Collection of utilities for publishing packages on PyPI"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
importlib-metadata = ">=3.6"
keyring = ">=15.1"
pkginfo = ">=1.8.1"
readme-renderer = ">=35.0"
requests = ">=2.20"
requests-toolbelt = ">=0.8.0,<0.9.0 || >0.9.0"
rfc3986 = ">=1.4.0"
rich = ">=12.0.0"
urllib3 = ">=1.26.0"
[[package]]
name = "typing-extensions"
version = "4.3.0"
description = "Backported and Experimental Type Hints for Python 3.7+"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "tzdata"
version = "2022.2"
description = "Provider of IANA time zone data"
category = "dev"
optional = false
python-versions = ">=2"
[[package]]
name = "tzlocal"
version = "4.2"
description = "tzinfo object for the local timezone"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
"backports.zoneinfo" = {version = "*", markers = "python_version < \"3.9\""}
pytz-deprecation-shim = "*"
tzdata = {version = "*", markers = "platform_system == \"Windows\""}
[package.extras]
devenv = ["black", "pyroma", "pytest-cov", "zest.releaser"]
test = ["pytest-mock (>=3.3)", "pytest (>=4.3)"]
[[package]]
name = "urllib3"
version = "1.26.11"
description = "HTTP library with thread-safe connection pooling, file post, and more."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*, <4"
[package.extras]
brotli = ["brotlicffi (>=0.8.0)", "brotli (>=1.0.9)", "brotlipy (>=0.6.0)"]
secure = ["pyOpenSSL (>=0.14)", "cryptography (>=1.3.4)", "idna (>=2.0.0)", "certifi", "ipaddress"]
socks = ["PySocks (>=1.5.6,!=1.5.7,<2.0)"]
[[package]]
name = "wcwidth"
version = "0.2.5"
description = "Measures the displayed width of unicode strings in a terminal"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "webencodings"
version = "0.5.1"
description = "Character encoding aliases for legacy web content"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "werkzeug"
version = "2.2.2"
description = "The comprehensive WSGI web application library."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
MarkupSafe = ">=2.1.1"
[package.extras]
watchdog = ["watchdog"]
[[package]]
name = "widgetsnbextension"
version = "4.0.2"
description = "Jupyter interactive widgets for Jupyter Notebook"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "wrapt"
version = "1.14.1"
description = "Module for decorators, wrappers and monkey patching."
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7"
[[package]]
name = "xgboost"
version = "1.6.1"
description = "XGBoost Python Package"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
numpy = "*"
scipy = "*"
[package.extras]
dask = ["dask", "pandas", "distributed"]
datatable = ["datatable"]
pandas = ["pandas"]
plotting = ["graphviz", "matplotlib"]
scikit-learn = ["scikit-learn"]
[[package]]
name = "zipp"
version = "3.8.1"
description = "Backport of pathlib-compatible object wrapper for zip files"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
docs = ["sphinx", "jaraco.packaging (>=9)", "rst.linker (>=1.9)", "jaraco.tidelift (>=1.4)"]
testing = ["pytest (>=6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest-cov", "pytest-enabler (>=1.3)", "jaraco.itertools", "func-timeout", "pytest-black (>=0.3.7)", "pytest-mypy (>=0.9.1)"]
[extras]
causalml = ["causalml", "llvmlite"]
plotting = ["matplotlib"]
[metadata]
lock-version = "1.1"
python-versions = ">=3.8,<3.10"
content-hash = "cfddbd7903c034dac2384b07f8c0cd40912973cd783f0425cc2eb898e444e01f"
[metadata.files]
absl-py = [
{file = "absl-py-1.2.0.tar.gz", hash = "sha256:f568809938c49abbda89826223c992b630afd23c638160ad7840cfe347710d97"},
{file = "absl_py-1.2.0-py3-none-any.whl", hash = "sha256:5d15f85b8cc859c6245bc9886ba664460ed96a6fee895416caa37d669ee74a9a"},
]
alabaster = [
{file = "alabaster-0.7.12-py2.py3-none-any.whl", hash = "sha256:446438bdcca0e05bd45ea2de1668c1d9b032e1a9154c2c259092d77031ddd359"},
{file = "alabaster-0.7.12.tar.gz", hash = "sha256:a661d72d58e6ea8a57f7a86e37d86716863ee5e92788398526d58b26a4e4dc02"},
]
appnope = [
{file = "appnope-0.1.3-py2.py3-none-any.whl", hash = "sha256:265a455292d0bd8a72453494fa24df5a11eb18373a60c7c0430889f22548605e"},
{file = "appnope-0.1.3.tar.gz", hash = "sha256:02bd91c4de869fbb1e1c50aafc4098827a7a54ab2f39d9dcba6c9547ed920e24"},
]
argon2-cffi = [
{file = "argon2-cffi-21.3.0.tar.gz", hash = "sha256:d384164d944190a7dd7ef22c6aa3ff197da12962bd04b17f64d4e93d934dba5b"},
{file = "argon2_cffi-21.3.0-py3-none-any.whl", hash = "sha256:8c976986f2c5c0e5000919e6de187906cfd81fb1c72bf9d88c01177e77da7f80"},
]
argon2-cffi-bindings = [
{file = "argon2-cffi-bindings-21.2.0.tar.gz", hash = "sha256:bb89ceffa6c791807d1305ceb77dbfacc5aa499891d2c55661c6459651fc39e3"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-macosx_10_9_x86_64.whl", hash = "sha256:ccb949252cb2ab3a08c02024acb77cfb179492d5701c7cbdbfd776124d4d2367"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9524464572e12979364b7d600abf96181d3541da11e23ddf565a32e70bd4dc0d"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b746dba803a79238e925d9046a63aa26bf86ab2a2fe74ce6b009a1c3f5c8f2ae"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:58ed19212051f49a523abb1dbe954337dc82d947fb6e5a0da60f7c8471a8476c"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:bd46088725ef7f58b5a1ef7ca06647ebaf0eb4baff7d1d0d177c6cc8744abd86"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_i686.whl", hash = "sha256:8cd69c07dd875537a824deec19f978e0f2078fdda07fd5c42ac29668dda5f40f"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:f1152ac548bd5b8bcecfb0b0371f082037e47128653df2e8ba6e914d384f3c3e"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-win32.whl", hash = "sha256:603ca0aba86b1349b147cab91ae970c63118a0f30444d4bc80355937c950c082"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-win_amd64.whl", hash = "sha256:b2ef1c30440dbbcba7a5dc3e319408b59676e2e039e2ae11a8775ecf482b192f"},
{file = "argon2_cffi_bindings-21.2.0-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:e415e3f62c8d124ee16018e491a009937f8cf7ebf5eb430ffc5de21b900dad93"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:3e385d1c39c520c08b53d63300c3ecc28622f076f4c2b0e6d7e796e9f6502194"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2c3e3cc67fdb7d82c4718f19b4e7a87123caf8a93fde7e23cf66ac0337d3cb3f"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6a22ad9800121b71099d0fb0a65323810a15f2e292f2ba450810a7316e128ee5"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f9f8b450ed0547e3d473fdc8612083fd08dd2120d6ac8f73828df9b7d45bb351"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:93f9bf70084f97245ba10ee36575f0c3f1e7d7724d67d8e5b08e61787c320ed7"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:3b9ef65804859d335dc6b31582cad2c5166f0c3e7975f324d9ffaa34ee7e6583"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d4966ef5848d820776f5f562a7d45fdd70c2f330c961d0d745b784034bd9f48d"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:20ef543a89dee4db46a1a6e206cd015360e5a75822f76df533845c3cbaf72670"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ed2937d286e2ad0cc79a7087d3c272832865f779430e0cc2b4f3718d3159b0cb"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:5e00316dabdaea0b2dd82d141cc66889ced0cdcbfa599e8b471cf22c620c329a"},
]
asttokens = [
{file = "asttokens-2.0.8-py2.py3-none-any.whl", hash = "sha256:e3305297c744ae53ffa032c45dc347286165e4ffce6875dc662b205db0623d86"},
{file = "asttokens-2.0.8.tar.gz", hash = "sha256:c61e16246ecfb2cde2958406b4c8ebc043c9e6d73aaa83c941673b35e5d3a76b"},
]
astunparse = [
{file = "astunparse-1.6.3-py2.py3-none-any.whl", hash = "sha256:c2652417f2c8b5bb325c885ae329bdf3f86424075c4fd1a128674bc6fba4b8e8"},
{file = "astunparse-1.6.3.tar.gz", hash = "sha256:5ad93a8456f0d084c3456d059fd9a92cce667963232cbf763eac3bc5b7940872"},
]
atomicwrites = [
{file = "atomicwrites-1.4.1.tar.gz", hash = "sha256:81b2c9071a49367a7f770170e5eec8cb66567cfbbc8c73d20ce5ca4a8d71cf11"},
]
attrs = [
{file = "attrs-22.1.0-py2.py3-none-any.whl", hash = "sha256:86efa402f67bf2df34f51a335487cf46b1ec130d02b8d39fd248abfd30da551c"},
{file = "attrs-22.1.0.tar.gz", hash = "sha256:29adc2665447e5191d0e7c568fde78b21f9672d344281d0c6e1ab085429b22b6"},
]
babel = [
{file = "Babel-2.10.3-py3-none-any.whl", hash = "sha256:ff56f4892c1c4bf0d814575ea23471c230d544203c7748e8c68f0089478d48eb"},
{file = "Babel-2.10.3.tar.gz", hash = "sha256:7614553711ee97490f732126dc077f8d0ae084ebc6a96e23db1482afabdb2c51"},
]
backcall = [
{file = "backcall-0.2.0-py2.py3-none-any.whl", hash = "sha256:fbbce6a29f263178a1f7915c1940bde0ec2b2a967566fe1c65c1dfb7422bd255"},
{file = "backcall-0.2.0.tar.gz", hash = "sha256:5cbdbf27be5e7cfadb448baf0aa95508f91f2bbc6c6437cd9cd06e2a4c215e1e"},
]
"backports.zoneinfo" = [
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-macosx_10_14_x86_64.whl", hash = "sha256:da6013fd84a690242c310d77ddb8441a559e9cb3d3d59ebac9aca1a57b2e18bc"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:89a48c0d158a3cc3f654da4c2de1ceba85263fafb861b98b59040a5086259722"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:1c5742112073a563c81f786e77514969acb58649bcdf6cdf0b4ed31a348d4546"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-win32.whl", hash = "sha256:e8236383a20872c0cdf5a62b554b27538db7fa1bbec52429d8d106effbaeca08"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-win_amd64.whl", hash = "sha256:8439c030a11780786a2002261569bdf362264f605dfa4d65090b64b05c9f79a7"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:f04e857b59d9d1ccc39ce2da1021d196e47234873820cbeaad210724b1ee28ac"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:17746bd546106fa389c51dbea67c8b7c8f0d14b5526a579ca6ccf5ed72c526cf"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:5c144945a7752ca544b4b78c8c41544cdfaf9786f25fe5ffb10e838e19a27570"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-win32.whl", hash = "sha256:e55b384612d93be96506932a786bbcde5a2db7a9e6a4bb4bffe8b733f5b9036b"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-win_amd64.whl", hash = "sha256:a76b38c52400b762e48131494ba26be363491ac4f9a04c1b7e92483d169f6582"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:8961c0f32cd0336fb8e8ead11a1f8cd99ec07145ec2931122faaac1c8f7fd987"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-manylinux1_i686.whl", hash = "sha256:e81b76cace8eda1fca50e345242ba977f9be6ae3945af8d46326d776b4cf78d1"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:7b0a64cda4145548fed9efc10322770f929b944ce5cee6c0dfe0c87bf4c0c8c9"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-win32.whl", hash = "sha256:1b13e654a55cd45672cb54ed12148cd33628f672548f373963b0bff67b217328"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-win_amd64.whl", hash = "sha256:4a0f800587060bf8880f954dbef70de6c11bbe59c673c3d818921f042f9954a6"},
{file = "backports.zoneinfo-0.2.1.tar.gz", hash = "sha256:fadbfe37f74051d024037f223b8e001611eac868b5c5b06144ef4d8b799862f2"},
]
beautifulsoup4 = [
{file = "beautifulsoup4-4.11.1-py3-none-any.whl", hash = "sha256:58d5c3d29f5a36ffeb94f02f0d786cd53014cf9b3b3951d42e0080d8a9498d30"},
{file = "beautifulsoup4-4.11.1.tar.gz", hash = "sha256:ad9aa55b65ef2808eb405f46cf74df7fcb7044d5cbc26487f96eb2ef2e436693"},
]
black = [
{file = "black-22.6.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:f586c26118bc6e714ec58c09df0157fe2d9ee195c764f630eb0d8e7ccce72e69"},
{file = "black-22.6.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:b270a168d69edb8b7ed32c193ef10fd27844e5c60852039599f9184460ce0807"},
{file = "black-22.6.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:6797f58943fceb1c461fb572edbe828d811e719c24e03375fd25170ada53825e"},
{file = "black-22.6.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c85928b9d5f83b23cee7d0efcb310172412fbf7cb9d9ce963bd67fd141781def"},
{file = "black-22.6.0-cp310-cp310-win_amd64.whl", hash = "sha256:f6fe02afde060bbeef044af7996f335fbe90b039ccf3f5eb8f16df8b20f77666"},
{file = "black-22.6.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:cfaf3895a9634e882bf9d2363fed5af8888802d670f58b279b0bece00e9a872d"},
{file = "black-22.6.0-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:94783f636bca89f11eb5d50437e8e17fbc6a929a628d82304c80fa9cd945f256"},
{file = "black-22.6.0-cp36-cp36m-win_amd64.whl", hash = "sha256:2ea29072e954a4d55a2ff58971b83365eba5d3d357352a07a7a4df0d95f51c78"},
{file = "black-22.6.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:e439798f819d49ba1c0bd9664427a05aab79bfba777a6db94fd4e56fae0cb849"},
{file = "black-22.6.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:187d96c5e713f441a5829e77120c269b6514418f4513a390b0499b0987f2ff1c"},
{file = "black-22.6.0-cp37-cp37m-win_amd64.whl", hash = "sha256:074458dc2f6e0d3dab7928d4417bb6957bb834434516f21514138437accdbe90"},
{file = "black-22.6.0-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:a218d7e5856f91d20f04e931b6f16d15356db1c846ee55f01bac297a705ca24f"},
{file = "black-22.6.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:568ac3c465b1c8b34b61cd7a4e349e93f91abf0f9371eda1cf87194663ab684e"},
{file = "black-22.6.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:6c1734ab264b8f7929cef8ae5f900b85d579e6cbfde09d7387da8f04771b51c6"},
{file = "black-22.6.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c9a3ac16efe9ec7d7381ddebcc022119794872abce99475345c5a61aa18c45ad"},
{file = "black-22.6.0-cp38-cp38-win_amd64.whl", hash = "sha256:b9fd45787ba8aa3f5e0a0a98920c1012c884622c6c920dbe98dbd05bc7c70fbf"},
{file = "black-22.6.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:7ba9be198ecca5031cd78745780d65a3f75a34b2ff9be5837045dce55db83d1c"},
{file = "black-22.6.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:a3db5b6409b96d9bd543323b23ef32a1a2b06416d525d27e0f67e74f1446c8f2"},
{file = "black-22.6.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:560558527e52ce8afba936fcce93a7411ab40c7d5fe8c2463e279e843c0328ee"},
{file = "black-22.6.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b154e6bbde1e79ea3260c4b40c0b7b3109ffcdf7bc4ebf8859169a6af72cd70b"},
{file = "black-22.6.0-cp39-cp39-win_amd64.whl", hash = "sha256:4af5bc0e1f96be5ae9bd7aaec219c901a94d6caa2484c21983d043371c733fc4"},
{file = "black-22.6.0-py3-none-any.whl", hash = "sha256:ac609cf8ef5e7115ddd07d85d988d074ed00e10fbc3445aee393e70164a2219c"},
{file = "black-22.6.0.tar.gz", hash = "sha256:6c6d39e28aed379aec40da1c65434c77d75e65bb59a1e1c283de545fb4e7c6c9"},
]
bleach = [
{file = "bleach-5.0.1-py3-none-any.whl", hash = "sha256:085f7f33c15bd408dd9b17a4ad77c577db66d76203e5984b1bd59baeee948b2a"},
{file = "bleach-5.0.1.tar.gz", hash = "sha256:0d03255c47eb9bd2f26aa9bb7f2107732e7e8fe195ca2f64709fcf3b0a4a085c"},
]
cachetools = [
{file = "cachetools-5.2.0-py3-none-any.whl", hash = "sha256:f9f17d2aec496a9aa6b76f53e3b614c965223c061982d434d160f930c698a9db"},
{file = "cachetools-5.2.0.tar.gz", hash = "sha256:6a94c6402995a99c3970cc7e4884bb60b4a8639938157eeed436098bf9831757"},
]
causalml = [
{file = "causalml-0.12.3.tar.gz", hash = "sha256:4d85afa0590e4c60818311c2be9a5be64b8c6964b134c77654b738df551454fa"},
]
certifi = [
{file = "certifi-2022.6.15-py3-none-any.whl", hash = "sha256:fe86415d55e84719d75f8b69414f6438ac3547d2078ab91b67e779ef69378412"},
{file = "certifi-2022.6.15.tar.gz", hash = "sha256:84c85a9078b11105f04f3036a9482ae10e4621616db313fe045dd24743a0820d"},
]
cffi = [
{file = "cffi-1.15.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:a66d3508133af6e8548451b25058d5812812ec3798c886bf38ed24a98216fab2"},
{file = "cffi-1.15.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:470c103ae716238bbe698d67ad020e1db9d9dba34fa5a899b5e21577e6d52ed2"},
{file = "cffi-1.15.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:9ad5db27f9cabae298d151c85cf2bad1d359a1b9c686a275df03385758e2f914"},
{file = "cffi-1.15.1-cp27-cp27m-win32.whl", hash = "sha256:b3bbeb01c2b273cca1e1e0c5df57f12dce9a4dd331b4fa1635b8bec26350bde3"},
{file = "cffi-1.15.1-cp27-cp27m-win_amd64.whl", hash = "sha256:e00b098126fd45523dd056d2efba6c5a63b71ffe9f2bbe1a4fe1716e1d0c331e"},
{file = "cffi-1.15.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:d61f4695e6c866a23a21acab0509af1cdfd2c013cf256bbf5b6b5e2695827162"},
{file = "cffi-1.15.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:ed9cb427ba5504c1dc15ede7d516b84757c3e3d7868ccc85121d9310d27eed0b"},
{file = "cffi-1.15.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:39d39875251ca8f612b6f33e6b1195af86d1b3e60086068be9cc053aa4376e21"},
{file = "cffi-1.15.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:285d29981935eb726a4399badae8f0ffdff4f5050eaa6d0cfc3f64b857b77185"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3eb6971dcff08619f8d91607cfc726518b6fa2a9eba42856be181c6d0d9515fd"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:21157295583fe8943475029ed5abdcf71eb3911894724e360acff1d61c1d54bc"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5635bd9cb9731e6d4a1132a498dd34f764034a8ce60cef4f5319c0541159392f"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2012c72d854c2d03e45d06ae57f40d78e5770d252f195b93f581acf3ba44496e"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dd86c085fae2efd48ac91dd7ccffcfc0571387fe1193d33b6394db7ef31fe2a4"},
{file = "cffi-1.15.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:fa6693661a4c91757f4412306191b6dc88c1703f780c8234035eac011922bc01"},
{file = "cffi-1.15.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:59c0b02d0a6c384d453fece7566d1c7e6b7bae4fc5874ef2ef46d56776d61c9e"},
{file = "cffi-1.15.1-cp310-cp310-win32.whl", hash = "sha256:cba9d6b9a7d64d4bd46167096fc9d2f835e25d7e4c121fb2ddfc6528fb0413b2"},
{file = "cffi-1.15.1-cp310-cp310-win_amd64.whl", hash = "sha256:ce4bcc037df4fc5e3d184794f27bdaab018943698f4ca31630bc7f84a7b69c6d"},
{file = "cffi-1.15.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3d08afd128ddaa624a48cf2b859afef385b720bb4b43df214f85616922e6a5ac"},
{file = "cffi-1.15.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:3799aecf2e17cf585d977b780ce79ff0dc9b78d799fc694221ce814c2c19db83"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a591fe9e525846e4d154205572a029f653ada1a78b93697f3b5a8f1f2bc055b9"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3548db281cd7d2561c9ad9984681c95f7b0e38881201e157833a2342c30d5e8c"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:91fc98adde3d7881af9b59ed0294046f3806221863722ba7d8d120c575314325"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:94411f22c3985acaec6f83c6df553f2dbe17b698cc7f8ae751ff2237d96b9e3c"},
{file = "cffi-1.15.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:03425bdae262c76aad70202debd780501fabeaca237cdfddc008987c0e0f59ef"},
{file = "cffi-1.15.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:cc4d65aeeaa04136a12677d3dd0b1c0c94dc43abac5860ab33cceb42b801c1e8"},
{file = "cffi-1.15.1-cp311-cp311-win32.whl", hash = "sha256:a0f100c8912c114ff53e1202d0078b425bee3649ae34d7b070e9697f93c5d52d"},
{file = "cffi-1.15.1-cp311-cp311-win_amd64.whl", hash = "sha256:04ed324bda3cda42b9b695d51bb7d54b680b9719cfab04227cdd1e04e5de3104"},
{file = "cffi-1.15.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:50a74364d85fd319352182ef59c5c790484a336f6db772c1a9231f1c3ed0cbd7"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e263d77ee3dd201c3a142934a086a4450861778baaeeb45db4591ef65550b0a6"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:cec7d9412a9102bdc577382c3929b337320c4c4c4849f2c5cdd14d7368c5562d"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4289fc34b2f5316fbb762d75362931e351941fa95fa18789191b33fc4cf9504a"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:173379135477dc8cac4bc58f45db08ab45d228b3363adb7af79436135d028405"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:6975a3fac6bc83c4a65c9f9fcab9e47019a11d3d2cf7f3c0d03431bf145a941e"},
{file = "cffi-1.15.1-cp36-cp36m-win32.whl", hash = "sha256:2470043b93ff09bf8fb1d46d1cb756ce6132c54826661a32d4e4d132e1977adf"},
{file = "cffi-1.15.1-cp36-cp36m-win_amd64.whl", hash = "sha256:30d78fbc8ebf9c92c9b7823ee18eb92f2e6ef79b45ac84db507f52fbe3ec4497"},
{file = "cffi-1.15.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:198caafb44239b60e252492445da556afafc7d1e3ab7a1fb3f0584ef6d742375"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5ef34d190326c3b1f822a5b7a45f6c4535e2f47ed06fec77d3d799c450b2651e"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8102eaf27e1e448db915d08afa8b41d6c7ca7a04b7d73af6514df10a3e74bd82"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5df2768244d19ab7f60546d0c7c63ce1581f7af8b5de3eb3004b9b6fc8a9f84b"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a8c4917bd7ad33e8eb21e9a5bbba979b49d9a97acb3a803092cbc1133e20343c"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0e2642fe3142e4cc4af0799748233ad6da94c62a8bec3a6648bf8ee68b1c7426"},
{file = "cffi-1.15.1-cp37-cp37m-win32.whl", hash = "sha256:e229a521186c75c8ad9490854fd8bbdd9a0c9aa3a524326b55be83b54d4e0ad9"},
{file = "cffi-1.15.1-cp37-cp37m-win_amd64.whl", hash = "sha256:a0b71b1b8fbf2b96e41c4d990244165e2c9be83d54962a9a1d118fd8657d2045"},
{file = "cffi-1.15.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:320dab6e7cb2eacdf0e658569d2575c4dad258c0fcc794f46215e1e39f90f2c3"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1e74c6b51a9ed6589199c787bf5f9875612ca4a8a0785fb2d4a84429badaf22a"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a5c84c68147988265e60416b57fc83425a78058853509c1b0629c180094904a5"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3b926aa83d1edb5aa5b427b4053dc420ec295a08e40911296b9eb1b6170f6cca"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:87c450779d0914f2861b8526e035c5e6da0a3199d8f1add1a665e1cbc6fc6d02"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4f2c9f67e9821cad2e5f480bc8d83b8742896f1242dba247911072d4fa94c192"},
{file = "cffi-1.15.1-cp38-cp38-win32.whl", hash = "sha256:8b7ee99e510d7b66cdb6c593f21c043c248537a32e0bedf02e01e9553a172314"},
{file = "cffi-1.15.1-cp38-cp38-win_amd64.whl", hash = "sha256:00a9ed42e88df81ffae7a8ab6d9356b371399b91dbdf0c3cb1e84c03a13aceb5"},
{file = "cffi-1.15.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:54a2db7b78338edd780e7ef7f9f6c442500fb0d41a5a4ea24fff1c929d5af585"},
{file = "cffi-1.15.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:fcd131dd944808b5bdb38e6f5b53013c5aa4f334c5cad0c72742f6eba4b73db0"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7473e861101c9e72452f9bf8acb984947aa1661a7704553a9f6e4baa5ba64415"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6c9a799e985904922a4d207a94eae35c78ebae90e128f0c4e521ce339396be9d"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3bcde07039e586f91b45c88f8583ea7cf7a0770df3a1649627bf598332cb6984"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:33ab79603146aace82c2427da5ca6e58f2b3f2fb5da893ceac0c42218a40be35"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d598b938678ebf3c67377cdd45e09d431369c3b1a5b331058c338e201f12b27"},
{file = "cffi-1.15.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:db0fbb9c62743ce59a9ff687eb5f4afbe77e5e8403d6697f7446e5f609976f76"},
{file = "cffi-1.15.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:98d85c6a2bef81588d9227dde12db8a7f47f639f4a17c9ae08e773aa9c697bf3"},
{file = "cffi-1.15.1-cp39-cp39-win32.whl", hash = "sha256:40f4774f5a9d4f5e344f31a32b5096977b5d48560c5592e2f3d2c4374bd543ee"},
{file = "cffi-1.15.1-cp39-cp39-win_amd64.whl", hash = "sha256:70df4e3b545a17496c9b3f41f5115e69a4f2e77e94e1d2a8e1070bc0c38c8a3c"},
{file = "cffi-1.15.1.tar.gz", hash = "sha256:d400bfb9a37b1351253cb402671cea7e89bdecc294e8016a707f6d1d8ac934f9"},
]
charset-normalizer = [
{file = "charset-normalizer-2.1.0.tar.gz", hash = "sha256:575e708016ff3a5e3681541cb9d79312c416835686d054a23accb873b254f413"},
{file = "charset_normalizer-2.1.0-py3-none-any.whl", hash = "sha256:5189b6f22b01957427f35b6a08d9a0bc45b46d3788ef5a92e978433c7a35f8a5"},
]
click = [
{file = "click-8.1.3-py3-none-any.whl", hash = "sha256:bb4d8133cb15a609f44e8213d9b391b0809795062913b383c62be0ee95b1db48"},
{file = "click-8.1.3.tar.gz", hash = "sha256:7682dc8afb30297001674575ea00d1814d808d6a36af415a82bd481d37ba7b8e"},
]
cloudpickle = [
{file = "cloudpickle-2.1.0-py3-none-any.whl", hash = "sha256:b5c434f75c34624eedad3a14f2be5ac3b5384774d5b0e3caf905c21479e6c4b1"},
{file = "cloudpickle-2.1.0.tar.gz", hash = "sha256:bb233e876a58491d9590a676f93c7a5473a08f747d5ab9df7f9ce564b3e7938e"},
]
colorama = [
{file = "colorama-0.4.5-py2.py3-none-any.whl", hash = "sha256:854bf444933e37f5824ae7bfc1e98d5bce2ebe4160d46b5edf346a89358e99da"},
{file = "colorama-0.4.5.tar.gz", hash = "sha256:e6c6b4334fc50988a639d9b98aa429a0b57da6e17b9a44f0451f930b6967b7a4"},
]
commonmark = [
{file = "commonmark-0.9.1-py2.py3-none-any.whl", hash = "sha256:da2f38c92590f83de410ba1a3cbceafbc74fee9def35f9251ba9a971d6d66fd9"},
{file = "commonmark-0.9.1.tar.gz", hash = "sha256:452f9dc859be7f06631ddcb328b6919c67984aca654e5fefb3914d54691aed60"},
]
cryptography = [
{file = "cryptography-37.0.4-cp36-abi3-macosx_10_10_universal2.whl", hash = "sha256:549153378611c0cca1042f20fd9c5030d37a72f634c9326e225c9f666d472884"},
{file = "cryptography-37.0.4-cp36-abi3-macosx_10_10_x86_64.whl", hash = "sha256:a958c52505c8adf0d3822703078580d2c0456dd1d27fabfb6f76fe63d2971cd6"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:f721d1885ecae9078c3f6bbe8a88bc0786b6e749bf32ccec1ef2b18929a05046"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:3d41b965b3380f10e4611dbae366f6dc3cefc7c9ac4e8842a806b9672ae9add5"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:80f49023dd13ba35f7c34072fa17f604d2f19bf0989f292cedf7ab5770b87a0b"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f2dcb0b3b63afb6df7fd94ec6fbddac81b5492513f7b0436210d390c14d46ee8"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_24_x86_64.whl", hash = "sha256:b7f8dd0d4c1f21759695c05a5ec8536c12f31611541f8904083f3dc582604280"},
{file = "cryptography-37.0.4-cp36-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:30788e070800fec9bbcf9faa71ea6d8068f5136f60029759fd8c3efec3c9dcb3"},
{file = "cryptography-37.0.4-cp36-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:190f82f3e87033821828f60787cfa42bff98404483577b591429ed99bed39d59"},
{file = "cryptography-37.0.4-cp36-abi3-win32.whl", hash = "sha256:b62439d7cd1222f3da897e9a9fe53bbf5c104fff4d60893ad1355d4c14a24157"},
{file = "cryptography-37.0.4-cp36-abi3-win_amd64.whl", hash = "sha256:f7a6de3e98771e183645181b3627e2563dcde3ce94a9e42a3f427d2255190327"},
{file = "cryptography-37.0.4-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6bc95ed67b6741b2607298f9ea4932ff157e570ef456ef7ff0ef4884a134cc4b"},
{file = "cryptography-37.0.4-pp37-pypy37_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:f8c0a6e9e1dd3eb0414ba320f85da6b0dcbd543126e30fcc546e7372a7fbf3b9"},
{file = "cryptography-37.0.4-pp38-pypy38_pp73-macosx_10_10_x86_64.whl", hash = "sha256:e007f052ed10cc316df59bc90fbb7ff7950d7e2919c9757fd42a2b8ecf8a5f67"},
{file = "cryptography-37.0.4-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7bc997818309f56c0038a33b8da5c0bfbb3f1f067f315f9abd6fc07ad359398d"},
{file = "cryptography-37.0.4-pp38-pypy38_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:d204833f3c8a33bbe11eda63a54b1aad7aa7456ed769a982f21ec599ba5fa282"},
{file = "cryptography-37.0.4-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:75976c217f10d48a8b5a8de3d70c454c249e4b91851f6838a4e48b8f41eb71aa"},
{file = "cryptography-37.0.4-pp39-pypy39_pp73-macosx_10_10_x86_64.whl", hash = "sha256:7099a8d55cd49b737ffc99c17de504f2257e3787e02abe6d1a6d136574873441"},
{file = "cryptography-37.0.4-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2be53f9f5505673eeda5f2736bea736c40f051a739bfae2f92d18aed1eb54596"},
{file = "cryptography-37.0.4-pp39-pypy39_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:91ce48d35f4e3d3f1d83e29ef4a9267246e6a3be51864a5b7d2247d5086fa99a"},
{file = "cryptography-37.0.4-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:4c590ec31550a724ef893c50f9a97a0c14e9c851c85621c5650d699a7b88f7ab"},
{file = "cryptography-37.0.4.tar.gz", hash = "sha256:63f9c17c0e2474ccbebc9302ce2f07b55b3b3fcb211ded18a42d5764f5c10a82"},
]
cycler = [
{file = "cycler-0.11.0-py3-none-any.whl", hash = "sha256:3a27e95f763a428a739d2add979fa7494c912a32c17c4c38c4d5f082cad165a3"},
{file = "cycler-0.11.0.tar.gz", hash = "sha256:9c87405839a19696e837b3b818fed3f5f69f16f1eec1a1ad77e043dcea9c772f"},
]
cython = [
{file = "Cython-0.29.32-cp27-cp27m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:39afb4679b8c6bf7ccb15b24025568f4f9b4d7f9bf3cbd981021f542acecd75b"},
{file = "Cython-0.29.32-cp27-cp27m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:dbee03b8d42dca924e6aa057b836a064c769ddfd2a4c2919e65da2c8a362d528"},
{file = "Cython-0.29.32-cp27-cp27mu-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:5ba622326f2862f9c1f99ca8d47ade49871241920a352c917e16861e25b0e5c3"},
{file = "Cython-0.29.32-cp27-cp27mu-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:e6ffa08aa1c111a1ebcbd1cf4afaaec120bc0bbdec3f2545f8bb7d3e8e77a1cd"},
{file = "Cython-0.29.32-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:97335b2cd4acebf30d14e2855d882de83ad838491a09be2011745579ac975833"},
{file = "Cython-0.29.32-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:06be83490c906b6429b4389e13487a26254ccaad2eef6f3d4ee21d8d3a4aaa2b"},
{file = "Cython-0.29.32-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:eefd2b9a5f38ded8d859fe96cc28d7d06e098dc3f677e7adbafda4dcdd4a461c"},
{file = "Cython-0.29.32-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:5514f3b4122cb22317122a48e175a7194e18e1803ca555c4c959d7dfe68eaf98"},
{file = "Cython-0.29.32-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:656dc5ff1d269de4d11ee8542f2ffd15ab466c447c1f10e5b8aba6f561967276"},
{file = "Cython-0.29.32-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:cdf10af3e2e3279dc09fdc5f95deaa624850a53913f30350ceee824dc14fc1a6"},
{file = "Cython-0.29.32-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:3875c2b2ea752816a4d7ae59d45bb546e7c4c79093c83e3ba7f4d9051dd02928"},
{file = "Cython-0.29.32-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:79e3bab19cf1b021b613567c22eb18b76c0c547b9bc3903881a07bfd9e7e64cf"},
{file = "Cython-0.29.32-cp35-cp35m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b0595aee62809ba353cebc5c7978e0e443760c3e882e2c7672c73ffe46383673"},
{file = "Cython-0.29.32-cp35-cp35m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:0ea8267fc373a2c5064ad77d8ff7bf0ea8b88f7407098ff51829381f8ec1d5d9"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:c8e8025f496b5acb6ba95da2fb3e9dacffc97d9a92711aacfdd42f9c5927e094"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:afbce249133a830f121b917f8c9404a44f2950e0e4f5d1e68f043da4c2e9f457"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:513e9707407608ac0d306c8b09d55a28be23ea4152cbd356ceaec0f32ef08d65"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e83228e0994497900af954adcac27f64c9a57cd70a9ec768ab0cb2c01fd15cf1"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:ea1dcc07bfb37367b639415333cfbfe4a93c3be340edf1db10964bc27d42ed64"},
{file = "Cython-0.29.32-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:8669cadeb26d9a58a5e6b8ce34d2c8986cc3b5c0bfa77eda6ceb471596cb2ec3"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:ed087eeb88a8cf96c60fb76c5c3b5fb87188adee5e179f89ec9ad9a43c0c54b3"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:3f85eb2343d20d91a4ea9cf14e5748092b376a64b7e07fc224e85b2753e9070b"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:63b79d9e1f7c4d1f498ab1322156a0d7dc1b6004bf981a8abda3f66800e140cd"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e1958e0227a4a6a2c06fd6e35b7469de50adf174102454db397cec6e1403cce3"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:856d2fec682b3f31583719cb6925c6cdbb9aa30f03122bcc45c65c8b6f515754"},
{file = "Cython-0.29.32-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:479690d2892ca56d34812fe6ab8f58e4b2e0129140f3d94518f15993c40553da"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:67fdd2f652f8d4840042e2d2d91e15636ba2bcdcd92e7e5ffbc68e6ef633a754"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:4a4b03ab483271f69221c3210f7cde0dcc456749ecf8243b95bc7a701e5677e0"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:40eff7aa26e91cf108fd740ffd4daf49f39b2fdffadabc7292b4b7dc5df879f0"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:0bbc27abdf6aebfa1bce34cd92bd403070356f28b0ecb3198ff8a182791d58b9"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:cddc47ec746a08603037731f5d10aebf770ced08666100bd2cdcaf06a85d4d1b"},
{file = "Cython-0.29.32-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:eca3065a1279456e81c615211d025ea11bfe4e19f0c5650b859868ca04b3fcbd"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:d968ffc403d92addf20b68924d95428d523436adfd25cf505d427ed7ba3bee8b"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:f3fd44cc362eee8ae569025f070d56208908916794b6ab21e139cea56470a2b3"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:b6da3063c5c476f5311fd76854abae6c315f1513ef7d7904deed2e774623bbb9"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:061e25151c38f2361bc790d3bcf7f9d9828a0b6a4d5afa56fbed3bd33fb2373a"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:f9944013588a3543fca795fffb0a070a31a243aa4f2d212f118aa95e69485831"},
{file = "Cython-0.29.32-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:07d173d3289415bb496e72cb0ddd609961be08fe2968c39094d5712ffb78672b"},
{file = "Cython-0.29.32-py2.py3-none-any.whl", hash = "sha256:eeb475eb6f0ccf6c039035eb4f0f928eb53ead88777e0a760eccb140ad90930b"},
{file = "Cython-0.29.32.tar.gz", hash = "sha256:8733cf4758b79304f2a4e39ebfac5e92341bce47bcceb26c1254398b2f8c1af7"},
]
debugpy = [
{file = "debugpy-1.6.3-cp310-cp310-macosx_10_15_x86_64.whl", hash = "sha256:c4b2bd5c245eeb49824bf7e539f95fb17f9a756186e51c3e513e32999d8846f3"},
{file = "debugpy-1.6.3-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:b8deaeb779699350deeed835322730a3efec170b88927debc9ba07a1a38e2585"},
{file = "debugpy-1.6.3-cp310-cp310-win32.whl", hash = "sha256:fc233a0160f3b117b20216f1169e7211b83235e3cd6749bcdd8dbb72177030c7"},
{file = "debugpy-1.6.3-cp310-cp310-win_amd64.whl", hash = "sha256:dda8652520eae3945833e061cbe2993ad94a0b545aebd62e4e6b80ee616c76b2"},
{file = "debugpy-1.6.3-cp37-cp37m-macosx_10_15_x86_64.whl", hash = "sha256:d5c814596a170a0a58fa6fad74947e30bfd7e192a5d2d7bd6a12156c2899e13a"},
{file = "debugpy-1.6.3-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:c4cd6f37e3c168080d61d698390dfe2cd9e74ebf80b448069822a15dadcda57d"},
{file = "debugpy-1.6.3-cp37-cp37m-win32.whl", hash = "sha256:3c9f985944a30cfc9ae4306ac6a27b9c31dba72ca943214dad4a0ab3840f6161"},
{file = "debugpy-1.6.3-cp37-cp37m-win_amd64.whl", hash = "sha256:5ad571a36cec137ae6ed951d0ff75b5e092e9af6683da084753231150cbc5b25"},
{file = "debugpy-1.6.3-cp38-cp38-macosx_10_15_x86_64.whl", hash = "sha256:adcfea5ea06d55d505375995e150c06445e2b20cd12885bcae566148c076636b"},
{file = "debugpy-1.6.3-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:daadab4403427abd090eccb38d8901afd8b393e01fd243048fab3f1d7132abb4"},
{file = "debugpy-1.6.3-cp38-cp38-win32.whl", hash = "sha256:6efc30325b68e451118b795eff6fe8488253ca3958251d5158106d9c87581bc6"},
{file = "debugpy-1.6.3-cp38-cp38-win_amd64.whl", hash = "sha256:86d784b72c5411c833af1cd45b83d80c252b77c3bfdb43db17c441d772f4c734"},
{file = "debugpy-1.6.3-cp39-cp39-macosx_10_15_x86_64.whl", hash = "sha256:4e255982552b0edfe3a6264438dbd62d404baa6556a81a88f9420d3ed79b06ae"},
{file = "debugpy-1.6.3-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:cca23cb6161ac89698d629d892520327dd1be9321c0960e610bbcb807232b45d"},
{file = "debugpy-1.6.3-cp39-cp39-win32.whl", hash = "sha256:7c302095a81be0d5c19f6529b600bac971440db3e226dce85347cc27e6a61908"},
{file = "debugpy-1.6.3-cp39-cp39-win_amd64.whl", hash = "sha256:34d2cdd3a7c87302ba5322b86e79c32c2115be396f3f09ca13306d8a04fe0f16"},
{file = "debugpy-1.6.3-py2.py3-none-any.whl", hash = "sha256:84c39940a0cac410bf6aa4db00ba174f973eef521fbe9dd058e26bcabad89c4f"},
{file = "debugpy-1.6.3.zip", hash = "sha256:e8922090514a890eec99cfb991bab872dd2e353ebb793164d5f01c362b9a40bf"},
]
decorator = [
{file = "decorator-5.1.1-py3-none-any.whl", hash = "sha256:b8c3f85900b9dc423225913c5aace94729fe1fa9763b38939a95226f02d37186"},
{file = "decorator-5.1.1.tar.gz", hash = "sha256:637996211036b6385ef91435e4fae22989472f9d571faba8927ba8253acbc330"},
]
defusedxml = [
{file = "defusedxml-0.7.1-py2.py3-none-any.whl", hash = "sha256:a352e7e428770286cc899e2542b6cdaedb2b4953ff269a210103ec58f6198a61"},
{file = "defusedxml-0.7.1.tar.gz", hash = "sha256:1bb3032db185915b62d7c6209c5a8792be6a32ab2fedacc84e01b52c51aa3e69"},
]
dill = [
{file = "dill-0.3.5.1-py2.py3-none-any.whl", hash = "sha256:33501d03270bbe410c72639b350e941882a8b0fd55357580fbc873fba0c59302"},
{file = "dill-0.3.5.1.tar.gz", hash = "sha256:d75e41f3eff1eee599d738e76ba8f4ad98ea229db8b085318aa2b3333a208c86"},
]
docutils = [
{file = "docutils-0.17.1-py2.py3-none-any.whl", hash = "sha256:cf316c8370a737a022b72b56874f6602acf974a37a9fba42ec2876387549fc61"},
{file = "docutils-0.17.1.tar.gz", hash = "sha256:686577d2e4c32380bb50cbb22f575ed742d58168cee37e99117a854bcd88f125"},
]
econml = [
{file = "econml-0.12.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:c8c6d2c34f98eb506d4ac48f705f6d959ccdfdeada9a0cc555afcba3233a4d0a"},
{file = "econml-0.12.0-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:c9493076c801110ebd9e15a7d162e768d33445367272a8b4f88c70898efa7a8d"},
{file = "econml-0.12.0-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:5c5c278771bf405ddf0790fc72149af9d05aec095e00c5ad47e08ab9e8da47d9"},
{file = "econml-0.12.0-cp36-cp36m-win32.whl", hash = "sha256:0477610ae12923a98bc23bdff4f9f5be12c692704d5ec9ebea8f5f4c3010b9ff"},
{file = "econml-0.12.0-cp36-cp36m-win_amd64.whl", hash = "sha256:2cea261af4315d3ec4cb6a91ab6b4aa664b3d3c6f8d235c8778351fb259d69c2"},
{file = "econml-0.12.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:3962e0f497c58724269ebf74a295dd8e2f2494e6cff00f3fc582b13b73d9dda8"},
{file = "econml-0.12.0-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:70ce227be0b91fd463d7a48ab4b58aa562793efa779f74254a38a03c42bf6d9e"},
{file = "econml-0.12.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:80f78f25b259991872e66665ca64cf108fc405b6d6cf20ebbff30fae392b3258"},
{file = "econml-0.12.0-cp37-cp37m-win32.whl", hash = "sha256:1e780e2e4dbabcff65c2ed488b8751203a6f7bfe0c9c8fd12a0a930b273dbafe"},
{file = "econml-0.12.0-cp37-cp37m-win_amd64.whl", hash = "sha256:7248c62daa5b8662e1368fe3926a9a7e5ab88c1ee29ae492b7362a9124151071"},
{file = "econml-0.12.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:aacdb2c96ea2377f7c4f14fbf3fb6ded5edfccf17decc706b5147564129938b4"},
{file = "econml-0.12.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:f87a21636872e32e627093711c65d1333be590f0aef0fa2477717442ada57dfb"},
{file = "econml-0.12.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:1763951ca85b3f34072d0f44e31cebb18e91a281fc94212db9b08124f0efe206"},
{file = "econml-0.12.0-cp38-cp38-win32.whl", hash = "sha256:2e4339e0dcbbc280ac95c4ac446026853f77cd22de9a4c7421b933e717ce8626"},
{file = "econml-0.12.0-cp38-cp38-win_amd64.whl", hash = "sha256:473377685874bf9725a4c6fdd571a858d942abd8a2f4664050da426322d322c7"},
{file = "econml-0.12.0.tar.gz", hash = "sha256:4a8196d6c785e9e0e42b2fa35dd058eb37166c75dcdb90ada953feea6fbb854b"},
]
entrypoints = [
{file = "entrypoints-0.4-py3-none-any.whl", hash = "sha256:f174b5ff827504fd3cd97cc3f8649f3693f51538c7e4bdf3ef002c8429d42f9f"},
{file = "entrypoints-0.4.tar.gz", hash = "sha256:b706eddaa9218a19ebcd67b56818f05bb27589b1ca9e8d797b74affad4ccacd4"},
]
executing = [
{file = "executing-0.10.0-py2.py3-none-any.whl", hash = "sha256:9c745f80cda11eb22b62cbecf21156491a794eb56ab06f9d286a44e62822b24e"},
{file = "executing-0.10.0.tar.gz", hash = "sha256:d1cd87c2e371e9966261410c5b3769d6df2f9e4a79a83eebd2662dd3388f9833"},
]
fastjsonschema = [
{file = "fastjsonschema-2.16.1-py3-none-any.whl", hash = "sha256:2f7158c4de792555753d6c2277d6a2af2d406dfd97aeca21d17173561ede4fe6"},
{file = "fastjsonschema-2.16.1.tar.gz", hash = "sha256:d6fa3ffbe719768d70e298b9fb847484e2bdfdb7241ed052b8d57a9294a8c334"},
]
flake8 = [
{file = "flake8-4.0.1-py2.py3-none-any.whl", hash = "sha256:479b1304f72536a55948cb40a32dce8bb0ffe3501e26eaf292c7e60eb5e0428d"},
{file = "flake8-4.0.1.tar.gz", hash = "sha256:806e034dda44114815e23c16ef92f95c91e4c71100ff52813adf7132a6ad870d"},
]
flaky = [
{file = "flaky-3.7.0-py2.py3-none-any.whl", hash = "sha256:d6eda73cab5ae7364504b7c44670f70abed9e75f77dd116352f662817592ec9c"},
{file = "flaky-3.7.0.tar.gz", hash = "sha256:3ad100780721a1911f57a165809b7ea265a7863305acb66708220820caf8aa0d"},
]
flatbuffers = [
{file = "flatbuffers-1.12-py2.py3-none-any.whl", hash = "sha256:9e9ef47fa92625c4721036e7c4124182668dc6021d9e7c73704edd395648deb9"},
{file = "flatbuffers-1.12.tar.gz", hash = "sha256:63bb9a722d5e373701913e226135b28a6f6ac200d5cc7b4d919fa38d73b44610"},
]
fonttools = [
{file = "fonttools-4.36.0-py3-none-any.whl", hash = "sha256:cb91ef8d5a435d90aeb3ab814b2548c6b515df5bc13b4c5adaa23778f2f79823"},
{file = "fonttools-4.36.0.zip", hash = "sha256:e637d2fe06bddabbfc488e02ef32d04d561e3c71e9ba11abc7782ea753ceb218"},
]
future = [
{file = "future-0.18.2.tar.gz", hash = "sha256:b1bead90b70cf6ec3f0710ae53a525360fa360d306a86583adc6bf83a4db537d"},
]
gast = [
{file = "gast-0.4.0-py3-none-any.whl", hash = "sha256:b7adcdd5adbebf1adf17378da5ba3f543684dbec47b1cda1f3997e573cd542c4"},
{file = "gast-0.4.0.tar.gz", hash = "sha256:40feb7b8b8434785585ab224d1568b857edb18297e5a3047f1ba012bc83b42c1"},
]
google-auth = [
{file = "google-auth-2.10.0.tar.gz", hash = "sha256:7904dbd44b745c7323fef29565adee2fe7ff48473e2d94443aced40b0404a395"},
{file = "google_auth-2.10.0-py2.py3-none-any.whl", hash = "sha256:1deba4a54f95ef67b4139eaf5c20eaa7047215eec9f6a2344599b8596db8863b"},
]
google-auth-oauthlib = [
{file = "google-auth-oauthlib-0.4.6.tar.gz", hash = "sha256:a90a072f6993f2c327067bf65270046384cda5a8ecb20b94ea9a687f1f233a7a"},
{file = "google_auth_oauthlib-0.4.6-py2.py3-none-any.whl", hash = "sha256:3f2a6e802eebbb6fb736a370fbf3b055edcb6b52878bf2f26330b5e041316c73"},
]
google-pasta = [
{file = "google-pasta-0.2.0.tar.gz", hash = "sha256:c9f2c8dfc8f96d0d5808299920721be30c9eec37f2389f28904f454565c8a16e"},
{file = "google_pasta-0.2.0-py2-none-any.whl", hash = "sha256:4612951da876b1a10fe3960d7226f0c7682cf901e16ac06e473b267a5afa8954"},
{file = "google_pasta-0.2.0-py3-none-any.whl", hash = "sha256:b32482794a366b5366a32c92a9a9201b107821889935a02b3e51f6b432ea84ed"},
]
graphviz = [
{file = "graphviz-0.20.1-py3-none-any.whl", hash = "sha256:587c58a223b51611c0cf461132da386edd896a029524ca61a1462b880bf97977"},
{file = "graphviz-0.20.1.zip", hash = "sha256:8c58f14adaa3b947daf26c19bc1e98c4e0702cdc31cf99153e6f06904d492bf8"},
]
grpcio = [
{file = "grpcio-1.48.0-cp310-cp310-linux_armv7l.whl", hash = "sha256:4a049a032144641ed5d073535c0dc69eb6029187cc729a66946c86dcc8eec3a1"},
{file = "grpcio-1.48.0-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:f8bc76f5cd95f5476e5285fe5d3704a9332586a569fbbccef551b0b6f7a270f9"},
{file = "grpcio-1.48.0-cp310-cp310-manylinux_2_17_aarch64.whl", hash = "sha256:448d397fe88e9fef8170f019b86abdc4d554ae311aaf4dbff1532fde227d3308"},
{file = "grpcio-1.48.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8f9b6b6f7c83869d2316c5d13f953381881a16741275a34ec5ed5762f11b206e"},
{file = "grpcio-1.48.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5bd8541c4b6b43c9024496d30b4a12346325d3a17a1f3c80ad8924caed1e35c3"},
{file = "grpcio-1.48.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:877d33aeba05ae0b9e81761a694914ed33613f655c35f6bbcf4ebbcb984e0167"},
{file = "grpcio-1.48.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:cd01a8201fd8ab2ce496f7e65975da1f1e629eac8eea84ead0fd77e32e4350cd"},
{file = "grpcio-1.48.0-cp310-cp310-win32.whl", hash = "sha256:0388da923dff58ba7f711233e41c2b749b5817b8e0f137a107672d9c15a1009c"},
{file = "grpcio-1.48.0-cp310-cp310-win_amd64.whl", hash = "sha256:8dcffdb8921fd88857ae350fd579277a5f9315351e89ed9094ef28927a46d40d"},
{file = "grpcio-1.48.0-cp36-cp36m-linux_armv7l.whl", hash = "sha256:2138c50331232f56178c2b36dcfa6ad67aad705fe410955f3b2a53d722191b89"},
{file = "grpcio-1.48.0-cp36-cp36m-macosx_10_10_x86_64.whl", hash = "sha256:af2d80f142da2a6af45204a5ca2374e2747af07a99de54a1164111e169a761ff"},
{file = "grpcio-1.48.0-cp36-cp36m-manylinux_2_17_aarch64.whl", hash = "sha256:59284bd4cdf47c147c26d91aca693765318d524328f6ece2a1a0b85a12a362af"},
{file = "grpcio-1.48.0-cp36-cp36m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cc3ebfe356c0c6750379cd194bf2b7e5d1d2f29db1832358f05a73e9290db98c"},
{file = "grpcio-1.48.0-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dc2619a31339e1c53731f54761f1a2cb865d3421f690e00ef3e92f90d2a0c5ae"},
{file = "grpcio-1.48.0-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:7df637405de328a54c1c8c08a3206f974c7a577730f90644af4c3400b7bfde2d"},
{file = "grpcio-1.48.0-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:9e73b95969a579798bfbeb85d376695cce5172357fb52e450467ceb8e7365152"},
{file = "grpcio-1.48.0-cp36-cp36m-win32.whl", hash = "sha256:059e9d58b5aba7fb9eabe3a4d2ac49e1dcbc2b54b0f166f6475e40b7f4435343"},
{file = "grpcio-1.48.0-cp36-cp36m-win_amd64.whl", hash = "sha256:7cebcf645170f0c82ef71769544f9ac4515993a4d367f5900aba2eb4ecd2a32f"},
{file = "grpcio-1.48.0-cp37-cp37m-linux_armv7l.whl", hash = "sha256:8af3a8845df35b838104d6fb1ae7f4969d248cf037fa2794916d31e917346f72"},
{file = "grpcio-1.48.0-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:a1ef40975ec9ced6c17ce7fbec9825823da782fa606f0b92392646ff3886f198"},
{file = "grpcio-1.48.0-cp37-cp37m-manylinux_2_17_aarch64.whl", hash = "sha256:7cccbf6db31f2a78e1909047ff69620f94a4e6e53251858e9502fbbff5714b48"},
{file = "grpcio-1.48.0-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1f3f142579f58def64c0850f0bb0eb1b425ae885f5669dda5b73ade64ad2b753"},
{file = "grpcio-1.48.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:656c6f6f7b815bca3054780b8cdfa1e4e37cd36c887a48558d00c2cf85f31697"},
{file = "grpcio-1.48.0-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:cba4538e8a2ef123ea570e7b1d62162e158963c2471e35d79eb9690c971a10c0"},
{file = "grpcio-1.48.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:9daa67820fafceec6194ed1686c1783816e62d6756ff301ba93e682948836846"},
{file = "grpcio-1.48.0-cp37-cp37m-win32.whl", hash = "sha256:7ec264a7fb413e0c804a7a48a6f7d7212742955a60724c44d793da35a8f30873"},
{file = "grpcio-1.48.0-cp37-cp37m-win_amd64.whl", hash = "sha256:a2b1b33b92359388b8164807313dcbb3317101b038a5d54342982560329d958f"},
{file = "grpcio-1.48.0-cp38-cp38-linux_armv7l.whl", hash = "sha256:7b820696a5ce7b98f459f234698cb323f89b355373789188efa126d7f47a2a92"},
{file = "grpcio-1.48.0-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:e4dfae66ebc165c46c5b7048eb554472ee72fbaab2c2c2da7f9b1621c81e077c"},
{file = "grpcio-1.48.0-cp38-cp38-manylinux_2_17_aarch64.whl", hash = "sha256:f7115038edce33b494e0138b0bd31a2eb6595d45e2eed23be46bc32886feb741"},
{file = "grpcio-1.48.0-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b4e996282238943ca114628255be61980e38b25f73a08ae2ffd02b63eaf70d3a"},
{file = "grpcio-1.48.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:13dad31f5155fa555d393511cc8108c41b1b5b54dc4c24c27d4694ddd7a78fad"},
{file = "grpcio-1.48.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:c84b9d90b2641963de98b35bb7a2a51f78119fe5bd00ef27246ba9f4f0835e36"},
{file = "grpcio-1.48.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:41b65166779d7dafac4c98380ac19f690f1c5fe18083a71d370df87b24dd30ff"},
{file = "grpcio-1.48.0-cp38-cp38-win32.whl", hash = "sha256:b890e5f5fbc21cb994894f73ecb2faaa66697d8debcb228a5adb0622b9bec3b2"},
{file = "grpcio-1.48.0-cp38-cp38-win_amd64.whl", hash = "sha256:5fe3af539d2f50891ed93aed3064ffbcc38bf848aa3f7ed1fbedcce139c57302"},
{file = "grpcio-1.48.0-cp39-cp39-linux_armv7l.whl", hash = "sha256:a4ed57f4e3d91259551e6765782b22d9e8b8178fec43ebf8e1b2c392c4ced37b"},
{file = "grpcio-1.48.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:60843d8184e171886dd7a93d6672e2ef0b08dfd4f88da7421c10b46b6e031ac4"},
{file = "grpcio-1.48.0-cp39-cp39-manylinux_2_17_aarch64.whl", hash = "sha256:0ecba22f25ccde2442be7e7dd7fa746905d628f03312b4a0c9961f0d99771f53"},
{file = "grpcio-1.48.0-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:34f5917f0c49a04633dc12d483c8aee6f6d9f69133b700214d3703f72a72f501"},
{file = "grpcio-1.48.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f4c4ad8ad7e2cf3a272cbc96734d56635e6543939022f17e0c4487f7d2a45bf9"},
{file = "grpcio-1.48.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:111fb2f5f4a069f331ae23106145fd16dd4e1112ca223858a922068614dac6d2"},
{file = "grpcio-1.48.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:beb0573daa49889efcfea0a6e995b4f39d481aa1b94e1257617406ef417b56a6"},
{file = "grpcio-1.48.0-cp39-cp39-win32.whl", hash = "sha256:ce70254a082cb767217b2fdee374cc79199d338d46140753438cd6d67c609b2f"},
{file = "grpcio-1.48.0-cp39-cp39-win_amd64.whl", hash = "sha256:ae3fd135666448058fe277d93c10e0f18345fbcbb015c4642de2fa3db6f0c205"},
{file = "grpcio-1.48.0.tar.gz", hash = "sha256:eaf4bb73819863440727195411ab3b5c304f6663625e66f348e91ebe0a039306"},
]
h5py = [
{file = "h5py-3.7.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d77af42cb751ad6cc44f11bae73075a07429a5cf2094dfde2b1e716e059b3911"},
{file = "h5py-3.7.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:63beb8b7b47d0896c50de6efb9a1eaa81dbe211f3767e7dd7db159cea51ba37a"},
{file = "h5py-3.7.0-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:04e2e1e2fc51b8873e972a08d2f89625ef999b1f2d276199011af57bb9fc7851"},
{file = "h5py-3.7.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f73307c876af49aa869ec5df1818e9bb0bdcfcf8a5ba773cc45a4fba5a286a5c"},
{file = "h5py-3.7.0-cp310-cp310-win_amd64.whl", hash = "sha256:f514b24cacdd983e61f8d371edac8c1b780c279d0acb8485639e97339c866073"},
{file = "h5py-3.7.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:43fed4d13743cf02798a9a03a360a88e589d81285e72b83f47d37bb64ed44881"},
{file = "h5py-3.7.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:c038399ce09a58ff8d89ec3e62f00aa7cb82d14f34e24735b920e2a811a3a426"},
{file = "h5py-3.7.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:03d64fb86bb86b978928bad923b64419a23e836499ec6363e305ad28afd9d287"},
{file = "h5py-3.7.0-cp37-cp37m-win_amd64.whl", hash = "sha256:e5b7820b75f9519499d76cc708e27242ccfdd9dfb511d6deb98701961d0445aa"},
{file = "h5py-3.7.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:a9351d729ea754db36d175098361b920573fdad334125f86ac1dd3a083355e20"},
{file = "h5py-3.7.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:6776d896fb90c5938de8acb925e057e2f9f28755f67ec3edcbc8344832616c38"},
{file = "h5py-3.7.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:0a047fddbe6951bce40e9cde63373c838a978c5e05a011a682db9ba6334b8e85"},
{file = "h5py-3.7.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0798a9c0ff45f17d0192e4d7114d734cac9f8b2b2c76dd1d923c4d0923f27bb6"},
{file = "h5py-3.7.0-cp38-cp38-win_amd64.whl", hash = "sha256:0d8de8cb619fc597da7cf8cdcbf3b7ff8c5f6db836568afc7dc16d21f59b2b49"},
{file = "h5py-3.7.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f084bbe816907dfe59006756f8f2d16d352faff2d107f4ffeb1d8de126fc5dc7"},
{file = "h5py-3.7.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:1fcb11a2dc8eb7ddcae08afd8fae02ba10467753a857fa07a404d700a93f3d53"},
{file = "h5py-3.7.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:ed43e2cc4f511756fd664fb45d6b66c3cbed4e3bd0f70e29c37809b2ae013c44"},
{file = "h5py-3.7.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9e7535df5ee3dc3e5d1f408fdfc0b33b46bc9b34db82743c82cd674d8239b9ad"},
{file = "h5py-3.7.0-cp39-cp39-win_amd64.whl", hash = "sha256:9e2ad2aa000f5b1e73b5dfe22f358ca46bf1a2b6ca394d9659874d7fc251731a"},
{file = "h5py-3.7.0.tar.gz", hash = "sha256:3fcf37884383c5da64846ab510190720027dca0768def34dd8dcb659dbe5cbf3"},
]
idna = [
{file = "idna-3.3-py3-none-any.whl", hash = "sha256:84d9dd047ffa80596e0f246e2eab0b391788b0503584e8945f2368256d2735ff"},
{file = "idna-3.3.tar.gz", hash = "sha256:9d643ff0a55b762d5cdb124b8eaa99c66322e2157b69160bc32796e824360e6d"},
]
imagesize = [
{file = "imagesize-1.4.1-py2.py3-none-any.whl", hash = "sha256:0d8d18d08f840c19d0ee7ca1fd82490fdc3729b7ac93f49870406ddde8ef8d8b"},
{file = "imagesize-1.4.1.tar.gz", hash = "sha256:69150444affb9cb0d5cc5a92b3676f0b2fb7cd9ae39e947a5e11a36b4497cd4a"},
]
importlib-metadata = [
{file = "importlib_metadata-4.12.0-py3-none-any.whl", hash = "sha256:7401a975809ea1fdc658c3aa4f78cc2195a0e019c5cbc4c06122884e9ae80c23"},
{file = "importlib_metadata-4.12.0.tar.gz", hash = "sha256:637245b8bab2b6502fcbc752cc4b7a6f6243bb02b31c5c26156ad103d3d45670"},
]
importlib-resources = [
{file = "importlib_resources-5.9.0-py3-none-any.whl", hash = "sha256:f78a8df21a79bcc30cfd400bdc38f314333de7c0fb619763f6b9dabab8268bb7"},
{file = "importlib_resources-5.9.0.tar.gz", hash = "sha256:5481e97fb45af8dcf2f798952625591c58fe599d0735d86b10f54de086a61681"},
]
iniconfig = [
{file = "iniconfig-1.1.1-py2.py3-none-any.whl", hash = "sha256:011e24c64b7f47f6ebd835bb12a743f2fbe9a26d4cecaa7f53bc4f35ee9da8b3"},
{file = "iniconfig-1.1.1.tar.gz", hash = "sha256:bc3af051d7d14b2ee5ef9969666def0cd1a000e121eaea580d4a313df4b37f32"},
]
ipykernel = [
{file = "ipykernel-6.15.1-py3-none-any.whl", hash = "sha256:d8969c5b23b0e453a23166da5a669c954db399789293fcb03fec5cb25367e43c"},
{file = "ipykernel-6.15.1.tar.gz", hash = "sha256:37acc3254caa8a0dafcddddc8dc863a60ad1b46487b68aee361d9a15bda98112"},
]
ipython = [
{file = "ipython-8.4.0-py3-none-any.whl", hash = "sha256:7ca74052a38fa25fe9bedf52da0be7d3fdd2fb027c3b778ea78dfe8c212937d1"},
{file = "ipython-8.4.0.tar.gz", hash = "sha256:f2db3a10254241d9b447232cec8b424847f338d9d36f9a577a6192c332a46abd"},
]
ipython-genutils = [
{file = "ipython_genutils-0.2.0-py2.py3-none-any.whl", hash = "sha256:72dd37233799e619666c9f639a9da83c34013a73e8bbc79a7a6348d93c61fab8"},
{file = "ipython_genutils-0.2.0.tar.gz", hash = "sha256:eb2e116e75ecef9d4d228fdc66af54269afa26ab4463042e33785b887c628ba8"},
]
ipywidgets = [
{file = "ipywidgets-8.0.1-py3-none-any.whl", hash = "sha256:fc0744df3a964ecfd68a6d2debe547fe89db252b8d7bb3db5740aba72edb0e6c"},
{file = "ipywidgets-8.0.1.tar.gz", hash = "sha256:1a296094203309e834f2781a275214d255ac5d266bbfa602f9f6915e1806614c"},
]
isort = [
{file = "isort-5.10.1-py3-none-any.whl", hash = "sha256:6f62d78e2f89b4500b080fe3a81690850cd254227f27f75c3a0c491a1f351ba7"},
{file = "isort-5.10.1.tar.gz", hash = "sha256:e8443a5e7a020e9d7f97f1d7d9cd17c88bcb3bc7e218bf9cf5095fe550be2951"},
]
jedi = [
{file = "jedi-0.18.1-py2.py3-none-any.whl", hash = "sha256:637c9635fcf47945ceb91cd7f320234a7be540ded6f3e99a50cb6febdfd1ba8d"},
{file = "jedi-0.18.1.tar.gz", hash = "sha256:74137626a64a99c8eb6ae5832d99b3bdd7d29a3850fe2aa80a4126b2a7d949ab"},
]
jeepney = [
{file = "jeepney-0.8.0-py3-none-any.whl", hash = "sha256:c0a454ad016ca575060802ee4d590dd912e35c122fa04e70306de3d076cce755"},
{file = "jeepney-0.8.0.tar.gz", hash = "sha256:5efe48d255973902f6badc3ce55e2aa6c5c3b3bc642059ef3a91247bcfcc5806"},
]
jinja2 = [
{file = "Jinja2-3.1.2-py3-none-any.whl", hash = "sha256:6088930bfe239f0e6710546ab9c19c9ef35e29792895fed6e6e31a023a182a61"},
{file = "Jinja2-3.1.2.tar.gz", hash = "sha256:31351a702a408a9e7595a8fc6150fc3f43bb6bf7e319770cbc0db9df9437e852"},
]
joblib = [
{file = "joblib-1.1.0-py2.py3-none-any.whl", hash = "sha256:f21f109b3c7ff9d95f8387f752d0d9c34a02aa2f7060c2135f465da0e5160ff6"},
{file = "joblib-1.1.0.tar.gz", hash = "sha256:4158fcecd13733f8be669be0683b96ebdbbd38d23559f54dca7205aea1bf1e35"},
]
jsonschema = [
{file = "jsonschema-4.13.0-py3-none-any.whl", hash = "sha256:870a61bb45050b81103faf6a4be00a0a906e06636ffcf0b84f5a2e51faf901ff"},
{file = "jsonschema-4.13.0.tar.gz", hash = "sha256:3776512df4f53f74e6e28fe35717b5b223c1756875486984a31bc9165e7fc920"},
]
jupyter = [
{file = "jupyter-1.0.0-py2.py3-none-any.whl", hash = "sha256:5b290f93b98ffbc21c0c7e749f054b3267782166d72fa5e3ed1ed4eaf34a2b78"},
{file = "jupyter-1.0.0.tar.gz", hash = "sha256:d9dc4b3318f310e34c82951ea5d6683f67bed7def4b259fafbfe4f1beb1d8e5f"},
{file = "jupyter-1.0.0.zip", hash = "sha256:3e1f86076bbb7c8c207829390305a2b1fe836d471ed54be66a3b8c41e7f46cc7"},
]
jupyter-client = [
{file = "jupyter_client-7.3.4-py3-none-any.whl", hash = "sha256:17d74b0d0a7b24f1c8c527b24fcf4607c56bee542ffe8e3418e50b21e514b621"},
{file = "jupyter_client-7.3.4.tar.gz", hash = "sha256:aa9a6c32054b290374f95f73bb0cae91455c58dfb84f65c8591912b8f65e6d56"},
]
jupyter-console = [
{file = "jupyter_console-6.4.4-py3-none-any.whl", hash = "sha256:756df7f4f60c986e7bc0172e4493d3830a7e6e75c08750bbe59c0a5403ad6dee"},
{file = "jupyter_console-6.4.4.tar.gz", hash = "sha256:172f5335e31d600df61613a97b7f0352f2c8250bbd1092ef2d658f77249f89fb"},
]
jupyter-core = [
{file = "jupyter_core-4.11.1-py3-none-any.whl", hash = "sha256:715e22bb6cc7db3718fddfac1f69f1c7e899ca00e42bdfd4bf3705452b9fd84a"},
{file = "jupyter_core-4.11.1.tar.gz", hash = "sha256:2e5f244d44894c4154d06aeae3419dd7f1b0ef4494dc5584929b398c61cfd314"},
]
jupyterlab-pygments = [
{file = "jupyterlab_pygments-0.2.2-py2.py3-none-any.whl", hash = "sha256:2405800db07c9f770863bcf8049a529c3dd4d3e28536638bd7c1c01d2748309f"},
{file = "jupyterlab_pygments-0.2.2.tar.gz", hash = "sha256:7405d7fde60819d905a9fa8ce89e4cd830e318cdad22a0030f7a901da705585d"},
]
jupyterlab-widgets = [
{file = "jupyterlab_widgets-3.0.2-py3-none-any.whl", hash = "sha256:98303a281f4004670cdcea2ef4aecba19c580adc297664c593f967025625c8c5"},
{file = "jupyterlab_widgets-3.0.2.tar.gz", hash = "sha256:47ab54cd165aa0cb3bcef1232d77471580cd2c36bbe2153fc5ba31e26ad87320"},
]
keras = [
{file = "keras-2.9.0-py2.py3-none-any.whl", hash = "sha256:55911256f89cfc9343c9fbe4b61ec45a2d33d89729cbe1ab9dcacf8b07b8b6ab"},
]
keras-preprocessing = [
{file = "Keras_Preprocessing-1.1.2-py2.py3-none-any.whl", hash = "sha256:7b82029b130ff61cc99b55f3bd27427df4838576838c5b2f65940e4fcec99a7b"},
{file = "Keras_Preprocessing-1.1.2.tar.gz", hash = "sha256:add82567c50c8bc648c14195bf544a5ce7c1f76761536956c3d2978970179ef3"},
]
keyring = [
{file = "keyring-23.8.2-py3-none-any.whl", hash = "sha256:10d2a8639663fe2090705a00b8c47c687cacdf97598ea9c11456679fa974473a"},
{file = "keyring-23.8.2.tar.gz", hash = "sha256:0d9973f8891850f1ade5f26aafd06bb16865fbbae3fc56b0defb6a14a2624003"},
]
kiwisolver = [
{file = "kiwisolver-1.4.4-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:2f5e60fabb7343a836360c4f0919b8cd0d6dbf08ad2ca6b9cf90bf0c76a3c4f6"},
{file = "kiwisolver-1.4.4-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:10ee06759482c78bdb864f4109886dff7b8a56529bc1609d4f1112b93fe6423c"},
{file = "kiwisolver-1.4.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c79ebe8f3676a4c6630fd3f777f3cfecf9289666c84e775a67d1d358578dc2e3"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:abbe9fa13da955feb8202e215c4018f4bb57469b1b78c7a4c5c7b93001699938"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:7577c1987baa3adc4b3c62c33bd1118c3ef5c8ddef36f0f2c950ae0b199e100d"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f8ad8285b01b0d4695102546b342b493b3ccc6781fc28c8c6a1bb63e95d22f09"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:8ed58b8acf29798b036d347791141767ccf65eee7f26bde03a71c944449e53de"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a68b62a02953b9841730db7797422f983935aeefceb1679f0fc85cbfbd311c32"},
{file = "kiwisolver-1.4.4-cp310-cp310-win32.whl", hash = "sha256:e92a513161077b53447160b9bd8f522edfbed4bd9759e4c18ab05d7ef7e49408"},
{file = "kiwisolver-1.4.4-cp310-cp310-win_amd64.whl", hash = "sha256:3fe20f63c9ecee44560d0e7f116b3a747a5d7203376abeea292ab3152334d004"},
{file = "kiwisolver-1.4.4-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:62ac9cc684da4cf1778d07a89bf5f81b35834cb96ca523d3a7fb32509380cbf6"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:41dae968a94b1ef1897cb322b39360a0812661dba7c682aa45098eb8e193dbdf"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:02f79693ec433cb4b5f51694e8477ae83b3205768a6fb48ffba60549080e295b"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d0611a0a2a518464c05ddd5a3a1a0e856ccc10e67079bb17f265ad19ab3c7597"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:db5283d90da4174865d520e7366801a93777201e91e79bacbac6e6927cbceede"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:1041feb4cda8708ce73bb4dcb9ce1ccf49d553bf87c3954bdfa46f0c3f77252c"},
{file = "kiwisolver-1.4.4-cp37-cp37m-win32.whl", hash = "sha256:a553dadda40fef6bfa1456dc4be49b113aa92c2a9a9e8711e955618cd69622e3"},
{file = "kiwisolver-1.4.4-cp37-cp37m-win_amd64.whl", hash = "sha256:03baab2d6b4a54ddbb43bba1a3a2d1627e82d205c5cf8f4c924dc49284b87166"},
{file = "kiwisolver-1.4.4-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:841293b17ad704d70c578f1f0013c890e219952169ce8a24ebc063eecf775454"},
{file = "kiwisolver-1.4.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:f4f270de01dd3e129a72efad823da90cc4d6aafb64c410c9033aba70db9f1ff0"},
{file = "kiwisolver-1.4.4-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:f9f39e2f049db33a908319cf46624a569b36983c7c78318e9726a4cb8923b26c"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c97528e64cb9ebeff9701e7938653a9951922f2a38bd847787d4a8e498cc83ae"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1d1573129aa0fd901076e2bfb4275a35f5b7aa60fbfb984499d661ec950320b0"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ad881edc7ccb9d65b0224f4e4d05a1e85cf62d73aab798943df6d48ab0cd79a1"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b428ef021242344340460fa4c9185d0b1f66fbdbfecc6c63eff4b7c29fad429d"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:2e407cb4bd5a13984a6c2c0fe1845e4e41e96f183e5e5cd4d77a857d9693494c"},
{file = "kiwisolver-1.4.4-cp38-cp38-win32.whl", hash = "sha256:75facbe9606748f43428fc91a43edb46c7ff68889b91fa31f53b58894503a191"},
{file = "kiwisolver-1.4.4-cp38-cp38-win_amd64.whl", hash = "sha256:5bce61af018b0cb2055e0e72e7d65290d822d3feee430b7b8203d8a855e78766"},
{file = "kiwisolver-1.4.4-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:8c808594c88a025d4e322d5bb549282c93c8e1ba71b790f539567932722d7bd8"},
{file = "kiwisolver-1.4.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f0a71d85ecdd570ded8ac3d1c0f480842f49a40beb423bb8014539a9f32a5897"},
{file = "kiwisolver-1.4.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:b533558eae785e33e8c148a8d9921692a9fe5aa516efbdff8606e7d87b9d5824"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:efda5fc8cc1c61e4f639b8067d118e742b812c930f708e6667a5ce0d13499e29"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:7c43e1e1206cd421cd92e6b3280d4385d41d7166b3ed577ac20444b6995a445f"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bc8d3bd6c72b2dd9decf16ce70e20abcb3274ba01b4e1c96031e0c4067d1e7cd"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4ea39b0ccc4f5d803e3337dd46bcce60b702be4d86fd0b3d7531ef10fd99a1ac"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:968f44fdbf6dd757d12920d63b566eeb4d5b395fd2d00d29d7ef00a00582aac9"},
{file = "kiwisolver-1.4.4-cp39-cp39-win32.whl", hash = "sha256:da7e547706e69e45d95e116e6939488d62174e033b763ab1496b4c29b76fabea"},
{file = "kiwisolver-1.4.4-cp39-cp39-win_amd64.whl", hash = "sha256:ba59c92039ec0a66103b1d5fe588fa546373587a7d68f5c96f743c3396afc04b"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:91672bacaa030f92fc2f43b620d7b337fd9a5af28b0d6ed3f77afc43c4a64b5a"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:787518a6789009c159453da4d6b683f468ef7a65bbde796bcea803ccf191058d"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:da152d8cdcab0e56e4f45eb08b9aea6455845ec83172092f09b0e077ece2cf7a"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:ecb1fa0db7bf4cff9dac752abb19505a233c7f16684c5826d1f11ebd9472b871"},
{file = "kiwisolver-1.4.4.tar.gz", hash = "sha256:d41997519fcba4a1e46eb4a2fe31bc12f0ff957b2b81bac28db24744f333e955"},
]
libclang = [
{file = "libclang-14.0.6-py2.py3-none-macosx_10_9_x86_64.whl", hash = "sha256:8791cf3c3b087c373a6d61e9199da7a541da922c9ddcfed1122090586b996d6e"},
{file = "libclang-14.0.6-py2.py3-none-macosx_11_0_arm64.whl", hash = "sha256:7b06fc76bd1e67c8b04b5719bf2ac5d6a323b289b245dfa9e468561d99538188"},
{file = "libclang-14.0.6-py2.py3-none-manylinux1_x86_64.whl", hash = "sha256:e429853939423f276a25140b0b702442d7da9a09e001c05e48df888336947614"},
{file = "libclang-14.0.6-py2.py3-none-manylinux2010_x86_64.whl", hash = "sha256:206d2789e4450a37d054e63b70451a6fc1873466397443fa13de2b3d4adb2796"},
{file = "libclang-14.0.6-py2.py3-none-manylinux2014_aarch64.whl", hash = "sha256:e2add1703129b2abe066fb1890afa880870a89fd6ab4ec5d2a7a8dc8d271677e"},
{file = "libclang-14.0.6-py2.py3-none-manylinux2014_armv7l.whl", hash = "sha256:5dd3c6fca1b007d308a4114afa8e4e9d32f32b2572520701d45fcc626ac5cd6c"},
{file = "libclang-14.0.6-py2.py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:cfb0e892ebb5dff6bd498ab5778adb8581f26a00fd8347b3c76c989fe2fd04f7"},
{file = "libclang-14.0.6-py2.py3-none-win_amd64.whl", hash = "sha256:ea03c12675151837660cdd5dce65bd89320896ac3421efef43a36678f113ce95"},
{file = "libclang-14.0.6-py2.py3-none-win_arm64.whl", hash = "sha256:2e4303e04517fcd11173cb2e51a7070eed71e16ef45d4e26a82c5e881cac3d27"},
{file = "libclang-14.0.6.tar.gz", hash = "sha256:9052a8284d8846984f6fa826b1d7460a66d3b23a486d782633b42b6e3b418789"},
]
lightgbm = [
{file = "lightgbm-3.3.2-py3-none-macosx_10_14_x86_64.macosx_10_15_x86_64.macosx_11_0_x86_64.whl", hash = "sha256:2e94bd1b3ab29d173102c9c1d80db2e27ad7e43b8ff5a74c5cb7984b37d19f45"},
{file = "lightgbm-3.3.2-py3-none-manylinux1_x86_64.whl", hash = "sha256:f4cba3b4f29336ad7e801cb32d9b948ea4cc5300dda650b78bcdfe36b3e2c4b2"},
{file = "lightgbm-3.3.2-py3-none-manylinux2014_aarch64.whl", hash = "sha256:8e788c56853316fc5d35db726d81bd002c721038c856853952287f68082e0158"},
{file = "lightgbm-3.3.2-py3-none-win_amd64.whl", hash = "sha256:e4f1529cad416066964f9af0efad208787861e9f2181b7f9ee7fc9bacc082d4f"},
{file = "lightgbm-3.3.2.tar.gz", hash = "sha256:5d25d16e77c844c297ece2044df57651139bc3c8ad8c4108916374267ac68b64"},
]
llvmlite = [
{file = "llvmlite-0.36.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:cc0f9b9644b4ab0e4a5edb17f1531d791630c88858220d3cc688d6edf10da100"},
{file = "llvmlite-0.36.0-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:f7918dbac02b1ebbfd7302ad8e8307d7877ab57d782d5f04b70ff9696b53c21b"},
{file = "llvmlite-0.36.0-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:7768658646c418b9b3beccb7044277a608bc8c62b82a85e73c7e5c065e4157c2"},
{file = "llvmlite-0.36.0-cp36-cp36m-win32.whl", hash = "sha256:05f807209a360d39526d98141b6f281b9c7c771c77a4d1fc22002440642c8de2"},
{file = "llvmlite-0.36.0-cp36-cp36m-win_amd64.whl", hash = "sha256:d1fdd63c371626c25ad834e1c6297eb76cf2f093a40dbb401a87b6476ab4e34e"},
{file = "llvmlite-0.36.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:7c4e7066447305d5095d0b0a9cae7b835d2f0fde143456b3124110eab0856426"},
{file = "llvmlite-0.36.0-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:9dad7e4bb042492914292aea3f4172eca84db731f9478250240955aedba95e08"},
{file = "llvmlite-0.36.0-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:1ce5bc0a638d874a08d4222be0a7e48e5df305d094c2ff8dec525ef32b581551"},
{file = "llvmlite-0.36.0-cp37-cp37m-win32.whl", hash = "sha256:dbedff0f6d417b374253a6bab39aa4b5364f1caab30c06ba8726904776fcf1cb"},
{file = "llvmlite-0.36.0-cp37-cp37m-win_amd64.whl", hash = "sha256:3b17fc4b0dd17bd29d7297d054e2915fad535889907c3f65232ee21f483447c5"},
{file = "llvmlite-0.36.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:b3a77e46e6053e2a86e607e87b97651dda81e619febb914824a927bff4e88737"},
{file = "llvmlite-0.36.0-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:048a7c117641c9be87b90005684e64a6f33ea0897ebab1df8a01214a10d6e79a"},
{file = "llvmlite-0.36.0-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:7db4b0eef93125af1c4092c64a3c73c7dc904101117ef53f8d78a1a499b8d5f4"},
{file = "llvmlite-0.36.0-cp38-cp38-win32.whl", hash = "sha256:50b1828bde514b31431b2bba1aa20b387f5625b81ad6e12fede430a04645e47a"},
{file = "llvmlite-0.36.0-cp38-cp38-win_amd64.whl", hash = "sha256:f608bae781b2d343e15e080c546468c5a6f35f57f0446923ea198dd21f23757e"},
{file = "llvmlite-0.36.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6a3abc8a8889aeb06bf9c4a7e5df5bc7bb1aa0aedd91a599813809abeec80b5a"},
{file = "llvmlite-0.36.0-cp39-cp39-manylinux2010_i686.whl", hash = "sha256:705f0323d931684428bb3451549603299bb5e17dd60fb979d67c3807de0debc1"},
{file = "llvmlite-0.36.0-cp39-cp39-manylinux2010_x86_64.whl", hash = "sha256:5a6548b4899facb182145147185e9166c69826fb424895f227e6b7cf924a8da1"},
{file = "llvmlite-0.36.0-cp39-cp39-win32.whl", hash = "sha256:ff52fb9c2be66b95b0e67d56fce11038397e5be1ea410ee53f5f1175fdbb107a"},
{file = "llvmlite-0.36.0-cp39-cp39-win_amd64.whl", hash = "sha256:1dee416ea49fd338c74ec15c0c013e5273b0961528169af06ff90772614f7f6c"},
{file = "llvmlite-0.36.0.tar.gz", hash = "sha256:765128fdf5f149ed0b889ffbe2b05eb1717f8e20a5c87fa2b4018fbcce0fcfc9"},
]
markdown = [
{file = "Markdown-3.4.1-py3-none-any.whl", hash = "sha256:08fb8465cffd03d10b9dd34a5c3fea908e20391a2a90b88d66362cb05beed186"},
{file = "Markdown-3.4.1.tar.gz", hash = "sha256:3b809086bb6efad416156e00a0da66fe47618a5d6918dd688f53f40c8e4cfeff"},
]
markupsafe = [
{file = "MarkupSafe-2.1.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:86b1f75c4e7c2ac2ccdaec2b9022845dbb81880ca318bb7a0a01fbf7813e3812"},
{file = "MarkupSafe-2.1.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:f121a1420d4e173a5d96e47e9a0c0dcff965afdf1626d28de1460815f7c4ee7a"},
{file = "MarkupSafe-2.1.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a49907dd8420c5685cfa064a1335b6754b74541bbb3706c259c02ed65b644b3e"},
{file = "MarkupSafe-2.1.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:10c1bfff05d95783da83491be968e8fe789263689c02724e0c691933c52994f5"},
{file = "MarkupSafe-2.1.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b7bd98b796e2b6553da7225aeb61f447f80a1ca64f41d83612e6139ca5213aa4"},
{file = "MarkupSafe-2.1.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:b09bf97215625a311f669476f44b8b318b075847b49316d3e28c08e41a7a573f"},
{file = "MarkupSafe-2.1.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:694deca8d702d5db21ec83983ce0bb4b26a578e71fbdbd4fdcd387daa90e4d5e"},
{file = "MarkupSafe-2.1.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:efc1913fd2ca4f334418481c7e595c00aad186563bbc1ec76067848c7ca0a933"},
{file = "MarkupSafe-2.1.1-cp310-cp310-win32.whl", hash = "sha256:4a33dea2b688b3190ee12bd7cfa29d39c9ed176bda40bfa11099a3ce5d3a7ac6"},
{file = "MarkupSafe-2.1.1-cp310-cp310-win_amd64.whl", hash = "sha256:dda30ba7e87fbbb7eab1ec9f58678558fd9a6b8b853530e176eabd064da81417"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:671cd1187ed5e62818414afe79ed29da836dde67166a9fac6d435873c44fdd02"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3799351e2336dc91ea70b034983ee71cf2f9533cdff7c14c90ea126bfd95d65a"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e72591e9ecd94d7feb70c1cbd7be7b3ebea3f548870aa91e2732960fa4d57a37"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6fbf47b5d3728c6aea2abb0589b5d30459e369baa772e0f37a0320185e87c980"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:d5ee4f386140395a2c818d149221149c54849dfcfcb9f1debfe07a8b8bd63f9a"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:bcb3ed405ed3222f9904899563d6fc492ff75cce56cba05e32eff40e6acbeaa3"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:e1c0b87e09fa55a220f058d1d49d3fb8df88fbfab58558f1198e08c1e1de842a"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-win32.whl", hash = "sha256:8dc1c72a69aa7e082593c4a203dcf94ddb74bb5c8a731e4e1eb68d031e8498ff"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-win_amd64.whl", hash = "sha256:97a68e6ada378df82bc9f16b800ab77cbf4b2fada0081794318520138c088e4a"},
{file = "MarkupSafe-2.1.1-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:e8c843bbcda3a2f1e3c2ab25913c80a3c5376cd00c6e8c4a86a89a28c8dc5452"},
{file = "MarkupSafe-2.1.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:0212a68688482dc52b2d45013df70d169f542b7394fc744c02a57374a4207003"},
{file = "MarkupSafe-2.1.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8e576a51ad59e4bfaac456023a78f6b5e6e7651dcd383bcc3e18d06f9b55d6d1"},
{file = "MarkupSafe-2.1.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4b9fe39a2ccc108a4accc2676e77da025ce383c108593d65cc909add5c3bd601"},
{file = "MarkupSafe-2.1.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:96e37a3dc86e80bf81758c152fe66dbf60ed5eca3d26305edf01892257049925"},
{file = "MarkupSafe-2.1.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:6d0072fea50feec76a4c418096652f2c3238eaa014b2f94aeb1d56a66b41403f"},
{file = "MarkupSafe-2.1.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:089cf3dbf0cd6c100f02945abeb18484bd1ee57a079aefd52cffd17fba910b88"},
{file = "MarkupSafe-2.1.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:6a074d34ee7a5ce3effbc526b7083ec9731bb3cbf921bbe1d3005d4d2bdb3a63"},
{file = "MarkupSafe-2.1.1-cp38-cp38-win32.whl", hash = "sha256:421be9fbf0ffe9ffd7a378aafebbf6f4602d564d34be190fc19a193232fd12b1"},
{file = "MarkupSafe-2.1.1-cp38-cp38-win_amd64.whl", hash = "sha256:fc7b548b17d238737688817ab67deebb30e8073c95749d55538ed473130ec0c7"},
{file = "MarkupSafe-2.1.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:e04e26803c9c3851c931eac40c695602c6295b8d432cbe78609649ad9bd2da8a"},
{file = "MarkupSafe-2.1.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b87db4360013327109564f0e591bd2a3b318547bcef31b468a92ee504d07ae4f"},
{file = "MarkupSafe-2.1.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:99a2a507ed3ac881b975a2976d59f38c19386d128e7a9a18b7df6fff1fd4c1d6"},
{file = "MarkupSafe-2.1.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:56442863ed2b06d19c37f94d999035e15ee982988920e12a5b4ba29b62ad1f77"},
{file = "MarkupSafe-2.1.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3ce11ee3f23f79dbd06fb3d63e2f6af7b12db1d46932fe7bd8afa259a5996603"},
{file = "MarkupSafe-2.1.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:33b74d289bd2f5e527beadcaa3f401e0df0a89927c1559c8566c066fa4248ab7"},
{file = "MarkupSafe-2.1.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:43093fb83d8343aac0b1baa75516da6092f58f41200907ef92448ecab8825135"},
{file = "MarkupSafe-2.1.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:8e3dcf21f367459434c18e71b2a9532d96547aef8a871872a5bd69a715c15f96"},
{file = "MarkupSafe-2.1.1-cp39-cp39-win32.whl", hash = "sha256:d4306c36ca495956b6d568d276ac11fdd9c30a36f1b6eb928070dc5360b22e1c"},
{file = "MarkupSafe-2.1.1-cp39-cp39-win_amd64.whl", hash = "sha256:46d00d6cfecdde84d40e572d63735ef81423ad31184100411e6e3388d405e247"},
{file = "MarkupSafe-2.1.1.tar.gz", hash = "sha256:7f91197cc9e48f989d12e4e6fbc46495c446636dfc81b9ccf50bb0ec74b91d4b"},
]
matplotlib = [
{file = "matplotlib-3.5.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:a206a1b762b39398efea838f528b3a6d60cdb26fe9d58b48265787e29cd1d693"},
{file = "matplotlib-3.5.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:cd45a6f3e93a780185f70f05cf2a383daed13c3489233faad83e81720f7ede24"},
{file = "matplotlib-3.5.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:d62880e1f60e5a30a2a8484432bcb3a5056969dc97258d7326ad465feb7ae069"},
{file = "matplotlib-3.5.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9ab29589cef03bc88acfa3a1490359000c18186fc30374d8aa77d33cc4a51a4a"},
{file = "matplotlib-3.5.3-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2886cc009f40e2984c083687251821f305d811d38e3df8ded414265e4583f0c5"},
{file = "matplotlib-3.5.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c995f7d9568f18b5db131ab124c64e51b6820a92d10246d4f2b3f3a66698a15b"},
{file = "matplotlib-3.5.3-cp310-cp310-win32.whl", hash = "sha256:6bb93a0492d68461bd458eba878f52fdc8ac7bdb6c4acdfe43dba684787838c2"},
{file = "matplotlib-3.5.3-cp310-cp310-win_amd64.whl", hash = "sha256:2e6d184ebe291b9e8f7e78bbab7987d269c38ea3e062eace1fe7d898042ef804"},
{file = "matplotlib-3.5.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:6ea6aef5c4338e58d8d376068e28f80a24f54e69f09479d1c90b7172bad9f25b"},
{file = "matplotlib-3.5.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:839d47b8ead7ad9669aaacdbc03f29656dc21f0d41a6fea2d473d856c39c8b1c"},
{file = "matplotlib-3.5.3-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:3b4fa56159dc3c7f9250df88f653f085068bcd32dcd38e479bba58909254af7f"},
{file = "matplotlib-3.5.3-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:94ff86af56a3869a4ae26a9637a849effd7643858a1a04dd5ee50e9ab75069a7"},
{file = "matplotlib-3.5.3-cp37-cp37m-win32.whl", hash = "sha256:35a8ad4dddebd51f94c5d24bec689ec0ec66173bf614374a1244c6241c1595e0"},
{file = "matplotlib-3.5.3-cp37-cp37m-win_amd64.whl", hash = "sha256:43e9d3fa077bf0cc95ded13d331d2156f9973dce17c6f0c8b49ccd57af94dbd9"},
{file = "matplotlib-3.5.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:22227c976ad4dc8c5a5057540421f0d8708c6560744ad2ad638d48e2984e1dbc"},
{file = "matplotlib-3.5.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:bf618a825deb6205f015df6dfe6167a5d9b351203b03fab82043ae1d30f16511"},
{file = "matplotlib-3.5.3-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:9befa5954cdbc085e37d974ff6053da269474177921dd61facdad8023c4aeb51"},
{file = "matplotlib-3.5.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f3840c280ebc87a48488a46f760ea1c0c0c83fcf7abbe2e6baf99d033fd35fd8"},
{file = "matplotlib-3.5.3-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:dacddf5bfcec60e3f26ec5c0ae3d0274853a258b6c3fc5ef2f06a8eb23e042be"},
{file = "matplotlib-3.5.3-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:b428076a55fb1c084c76cb93e68006f27d247169f056412607c5c88828d08f88"},
{file = "matplotlib-3.5.3-cp38-cp38-win32.whl", hash = "sha256:874df7505ba820e0400e7091199decf3ff1fde0583652120c50cd60d5820ca9a"},
{file = "matplotlib-3.5.3-cp38-cp38-win_amd64.whl", hash = "sha256:b28de401d928890187c589036857a270a032961411934bdac4cf12dde3d43094"},
{file = "matplotlib-3.5.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:3211ba82b9f1518d346f6309df137b50c3dc4421b4ed4815d1d7eadc617f45a1"},
{file = "matplotlib-3.5.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6fe807e8a22620b4cd95cfbc795ba310dc80151d43b037257250faf0bfcd82bc"},
{file = "matplotlib-3.5.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:5c096363b206a3caf43773abebdbb5a23ea13faef71d701b21a9c27fdcef72f4"},
{file = "matplotlib-3.5.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0bcdfcb0f976e1bac6721d7d457c17be23cf7501f977b6a38f9d38a3762841f7"},
{file = "matplotlib-3.5.3-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:1e64ac9be9da6bfff0a732e62116484b93b02a0b4d4b19934fb4f8e7ad26ad6a"},
{file = "matplotlib-3.5.3-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:73dd93dc35c85dece610cca8358003bf0760d7986f70b223e2306b4ea6d1406b"},
{file = "matplotlib-3.5.3-cp39-cp39-win32.whl", hash = "sha256:879c7e5fce4939c6aa04581dfe08d57eb6102a71f2e202e3314d5fbc072fd5a0"},
{file = "matplotlib-3.5.3-cp39-cp39-win_amd64.whl", hash = "sha256:ab8d26f07fe64f6f6736d635cce7bfd7f625320490ed5bfc347f2cdb4fae0e56"},
{file = "matplotlib-3.5.3-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:99482b83ebf4eb6d5fc6813d7aacdefdd480f0d9c0b52dcf9f1cc3b2c4b3361a"},
{file = "matplotlib-3.5.3-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:f814504e459c68118bf2246a530ed953ebd18213dc20e3da524174d84ed010b2"},
{file = "matplotlib-3.5.3-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:57f1b4e69f438a99bb64d7f2c340db1b096b41ebaa515cf61ea72624279220ce"},
{file = "matplotlib-3.5.3-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:d2484b350bf3d32cae43f85dcfc89b3ed7bd2bcd781ef351f93eb6fb2cc483f9"},
{file = "matplotlib-3.5.3.tar.gz", hash = "sha256:339cac48b80ddbc8bfd05daae0a3a73414651a8596904c2a881cfd1edb65f26c"},
]
matplotlib-inline = [
{file = "matplotlib-inline-0.1.6.tar.gz", hash = "sha256:f887e5f10ba98e8d2b150ddcf4702c1e5f8b3a20005eb0f74bfdbd360ee6f304"},
{file = "matplotlib_inline-0.1.6-py3-none-any.whl", hash = "sha256:f1f41aab5328aa5aaea9b16d083b128102f8712542f819fe7e6a420ff581b311"},
]
mccabe = [
{file = "mccabe-0.6.1-py2.py3-none-any.whl", hash = "sha256:ab8a6258860da4b6677da4bd2fe5dc2c659cff31b3ee4f7f5d64e79735b80d42"},
{file = "mccabe-0.6.1.tar.gz", hash = "sha256:dd8d182285a0fe56bace7f45b5e7d1a6ebcbf524e8f3bd87eb0f125271b8831f"},
]
mistune = [
{file = "mistune-2.0.4-py2.py3-none-any.whl", hash = "sha256:182cc5ee6f8ed1b807de6b7bb50155df7b66495412836b9a74c8fbdfc75fe36d"},
{file = "mistune-2.0.4.tar.gz", hash = "sha256:9ee0a66053e2267aba772c71e06891fa8f1af6d4b01d5e84e267b4570d4d9808"},
]
mpmath = [
{file = "mpmath-1.2.1-py3-none-any.whl", hash = "sha256:604bc21bd22d2322a177c73bdb573994ef76e62edd595d17e00aff24b0667e5c"},
{file = "mpmath-1.2.1.tar.gz", hash = "sha256:79ffb45cf9f4b101a807595bcb3e72e0396202e0b1d25d689134b48c4216a81a"},
]
mypy = [
{file = "mypy-0.971-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:f2899a3cbd394da157194f913a931edfd4be5f274a88041c9dc2d9cdcb1c315c"},
{file = "mypy-0.971-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:98e02d56ebe93981c41211c05adb630d1d26c14195d04d95e49cd97dbc046dc5"},
{file = "mypy-0.971-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:19830b7dba7d5356d3e26e2427a2ec91c994cd92d983142cbd025ebe81d69cf3"},
{file = "mypy-0.971-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:02ef476f6dcb86e6f502ae39a16b93285fef97e7f1ff22932b657d1ef1f28655"},
{file = "mypy-0.971-cp310-cp310-win_amd64.whl", hash = "sha256:25c5750ba5609a0c7550b73a33deb314ecfb559c350bb050b655505e8aed4103"},
{file = "mypy-0.971-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:d3348e7eb2eea2472db611486846742d5d52d1290576de99d59edeb7cd4a42ca"},
{file = "mypy-0.971-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:3fa7a477b9900be9b7dd4bab30a12759e5abe9586574ceb944bc29cddf8f0417"},
{file = "mypy-0.971-cp36-cp36m-win_amd64.whl", hash = "sha256:2ad53cf9c3adc43cf3bea0a7d01a2f2e86db9fe7596dfecb4496a5dda63cbb09"},
{file = "mypy-0.971-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:855048b6feb6dfe09d3353466004490b1872887150c5bb5caad7838b57328cc8"},
{file = "mypy-0.971-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:23488a14a83bca6e54402c2e6435467a4138785df93ec85aeff64c6170077fb0"},
{file = "mypy-0.971-cp37-cp37m-win_amd64.whl", hash = "sha256:4b21e5b1a70dfb972490035128f305c39bc4bc253f34e96a4adf9127cf943eb2"},
{file = "mypy-0.971-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:9796a2ba7b4b538649caa5cecd398d873f4022ed2333ffde58eaf604c4d2cb27"},
{file = "mypy-0.971-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:5a361d92635ad4ada1b1b2d3630fc2f53f2127d51cf2def9db83cba32e47c856"},
{file = "mypy-0.971-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:b793b899f7cf563b1e7044a5c97361196b938e92f0a4343a5d27966a53d2ec71"},
{file = "mypy-0.971-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:d1ea5d12c8e2d266b5fb8c7a5d2e9c0219fedfeb493b7ed60cd350322384ac27"},
{file = "mypy-0.971-cp38-cp38-win_amd64.whl", hash = "sha256:23c7ff43fff4b0df93a186581885c8512bc50fc4d4910e0f838e35d6bb6b5e58"},
{file = "mypy-0.971-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:1f7656b69974a6933e987ee8ffb951d836272d6c0f81d727f1d0e2696074d9e6"},
{file = "mypy-0.971-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:d2022bfadb7a5c2ef410d6a7c9763188afdb7f3533f22a0a32be10d571ee4bbe"},
{file = "mypy-0.971-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:ef943c72a786b0f8d90fd76e9b39ce81fb7171172daf84bf43eaf937e9f220a9"},
{file = "mypy-0.971-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:d744f72eb39f69312bc6c2abf8ff6656973120e2eb3f3ec4f758ed47e414a4bf"},
{file = "mypy-0.971-cp39-cp39-win_amd64.whl", hash = "sha256:77a514ea15d3007d33a9e2157b0ba9c267496acf12a7f2b9b9f8446337aac5b0"},
{file = "mypy-0.971-py3-none-any.whl", hash = "sha256:0d054ef16b071149917085f51f89555a576e2618d5d9dd70bd6eea6410af3ac9"},
{file = "mypy-0.971.tar.gz", hash = "sha256:40b0f21484238269ae6a57200c807d80debc6459d444c0489a102d7c6a75fa56"},
]
mypy-extensions = [
{file = "mypy_extensions-0.4.3-py2.py3-none-any.whl", hash = "sha256:090fedd75945a69ae91ce1303b5824f428daf5a028d2f6ab8a299250a846f15d"},
{file = "mypy_extensions-0.4.3.tar.gz", hash = "sha256:2d82818f5bb3e369420cb3c4060a7970edba416647068eb4c5343488a6c604a8"},
]
nbclient = [
{file = "nbclient-0.6.6-py3-none-any.whl", hash = "sha256:09bae4ea2df79fa6bc50aeb8278d8b79d2036792824337fa6eee834afae17312"},
{file = "nbclient-0.6.6.tar.gz", hash = "sha256:0df76a7961d99a681b4796c74a1f2553b9f998851acc01896dce064ad19a9027"},
]
nbconvert = [
{file = "nbconvert-7.0.0rc3-py3-none-any.whl", hash = "sha256:6774a0bf293d76fa2e886255812d953b750059330c3d7305ad271c02590f1957"},
{file = "nbconvert-7.0.0rc3.tar.gz", hash = "sha256:efb9aae47dad2eae02dd9e7d2cc8add6b7e8f15c6548c0de3363f6d2f8a39146"},
]
nbformat = [
{file = "nbformat-5.4.0-py3-none-any.whl", hash = "sha256:0d6072aaec95dddc39735c144ee8bbc6589c383fb462e4058abc855348152dad"},
{file = "nbformat-5.4.0.tar.gz", hash = "sha256:44ba5ca6acb80c5d5a500f1e5b83ede8cbe364d5a495c4c8cf60aaf1ba656501"},
]
nbsphinx = [
{file = "nbsphinx-0.8.9-py3-none-any.whl", hash = "sha256:a7d743762249ee6bac3350a91eb3717a6e1c75f239f2c2a85491f9aca5a63be1"},
{file = "nbsphinx-0.8.9.tar.gz", hash = "sha256:4ade86b2a41f8f41efd3ea99dae84c3368fe8ba3f837d50c8815ce9424c5994f"},
]
nest-asyncio = [
{file = "nest_asyncio-1.5.5-py3-none-any.whl", hash = "sha256:b98e3ec1b246135e4642eceffa5a6c23a3ab12c82ff816a92c612d68205813b2"},
{file = "nest_asyncio-1.5.5.tar.gz", hash = "sha256:e442291cd942698be619823a17a86a5759eabe1f8613084790de189fe9e16d65"},
]
networkx = [
{file = "networkx-2.8.5-py3-none-any.whl", hash = "sha256:a762f4b385692d9c3a6f2912d058d76d29a827deaedf9e63ed14d397b8030687"},
{file = "networkx-2.8.5.tar.gz", hash = "sha256:15a7b81a360791c458c55a417418ea136c13378cfdc06a2dcdc12bd2f9cf09c1"},
]
notebook = [
{file = "notebook-6.4.12-py3-none-any.whl", hash = "sha256:8c07a3bb7640e371f8a609bdbb2366a1976c6a2589da8ef917f761a61e3ad8b1"},
{file = "notebook-6.4.12.tar.gz", hash = "sha256:6268c9ec9048cff7a45405c990c29ac9ca40b0bc3ec29263d218c5e01f2b4e86"},
]
numba = [
{file = "numba-0.53.1-cp36-cp36m-macosx_10_14_x86_64.whl", hash = "sha256:b23de6b6837c132087d06b8b92d343edb54b885873b824a037967fbd5272ebb7"},
{file = "numba-0.53.1-cp36-cp36m-manylinux2014_i686.whl", hash = "sha256:6545b9e9b0c112b81de7f88a3c787469a357eeff8211e90b8f45ee243d521cc2"},
{file = "numba-0.53.1-cp36-cp36m-manylinux2014_x86_64.whl", hash = "sha256:8fa5c963a43855050a868106a87cd614f3c3f459951c8fc468aec263ef80d063"},
{file = "numba-0.53.1-cp36-cp36m-win32.whl", hash = "sha256:aaa6ebf56afb0b6752607b9f3bf39e99b0efe3c1fa6849698373925ee6838fd7"},
{file = "numba-0.53.1-cp36-cp36m-win_amd64.whl", hash = "sha256:b08b3df38aab769df79ed948d70f0a54a3cdda49d58af65369235c204ec5d0f3"},
{file = "numba-0.53.1-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:bf5c463b62d013e3f709cc8277adf2f4f4d8cc6757293e29c6db121b77e6b760"},
{file = "numba-0.53.1-cp37-cp37m-manylinux2014_i686.whl", hash = "sha256:74df02e73155f669e60dcff07c4eef4a03dbf5b388594db74142ab40914fe4f5"},
{file = "numba-0.53.1-cp37-cp37m-manylinux2014_x86_64.whl", hash = "sha256:5165709bf62f28667e10b9afe6df0ce1037722adab92d620f59cb8bbb8104641"},
{file = "numba-0.53.1-cp37-cp37m-win32.whl", hash = "sha256:2e96958ed2ca7e6d967b2ce29c8da0ca47117e1de28e7c30b2c8c57386506fa5"},
{file = "numba-0.53.1-cp37-cp37m-win_amd64.whl", hash = "sha256:276f9d1674fe08d95872d81b97267c6b39dd830f05eb992608cbede50fcf48a9"},
{file = "numba-0.53.1-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:4c4c8d102512ae472af52c76ad9522da718c392cb59f4cd6785d711fa5051a2a"},
{file = "numba-0.53.1-cp38-cp38-manylinux2014_i686.whl", hash = "sha256:691adbeac17dbdf6ed7c759e9e33a522351f07d2065fe926b264b6b2c15fd89b"},
{file = "numba-0.53.1-cp38-cp38-manylinux2014_x86_64.whl", hash = "sha256:94aab3e0e9e8754116325ce026e1b29ae72443c706a3104cf7f3368dc3012912"},
{file = "numba-0.53.1-cp38-cp38-win32.whl", hash = "sha256:aabeec89bb3e3162136eea492cea7ee8882ddcda2201f05caecdece192c40896"},
{file = "numba-0.53.1-cp38-cp38-win_amd64.whl", hash = "sha256:1895ebd256819ff22256cd6fe24aa8f7470b18acc73e7917e8e93c9ac7f565dc"},
{file = "numba-0.53.1-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:224d197a46a9e602a16780d87636e199e2cdef528caef084a4d8fd8909c2455c"},
{file = "numba-0.53.1-cp39-cp39-manylinux2014_i686.whl", hash = "sha256:aba7acb247a09d7f12bd17a8e28bbb04e8adef9fc20ca29835d03b7894e1b49f"},
{file = "numba-0.53.1-cp39-cp39-manylinux2014_x86_64.whl", hash = "sha256:bd126f1f49da6fc4b3169cf1d96f1c3b3f84a7badd11fe22da344b923a00e744"},
{file = "numba-0.53.1-cp39-cp39-win32.whl", hash = "sha256:0ef9d1f347b251282ae46e5a5033600aa2d0dfa1ee8c16cb8137b8cd6f79e221"},
{file = "numba-0.53.1-cp39-cp39-win_amd64.whl", hash = "sha256:17146885cbe4e89c9d4abd4fcb8886dee06d4591943dc4343500c36ce2fcfa69"},
{file = "numba-0.53.1.tar.gz", hash = "sha256:9cd4e5216acdc66c4e9dab2dfd22ddb5bef151185c070d4a3cd8e78638aff5b0"},
]
numpy = [
{file = "numpy-1.23.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e603ca1fb47b913942f3e660a15e55a9ebca906857edfea476ae5f0fe9b457d5"},
{file = "numpy-1.23.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:633679a472934b1c20a12ed0c9a6c9eb167fbb4cb89031939bfd03dd9dbc62b8"},
{file = "numpy-1.23.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:17e5226674f6ea79e14e3b91bfbc153fdf3ac13f5cc54ee7bc8fdbe820a32da0"},
{file = "numpy-1.23.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bdc02c0235b261925102b1bd586579b7158e9d0d07ecb61148a1799214a4afd5"},
{file = "numpy-1.23.2-cp310-cp310-win32.whl", hash = "sha256:df28dda02c9328e122661f399f7655cdcbcf22ea42daa3650a26bce08a187450"},
{file = "numpy-1.23.2-cp310-cp310-win_amd64.whl", hash = "sha256:8ebf7e194b89bc66b78475bd3624d92980fca4e5bb86dda08d677d786fefc414"},
{file = "numpy-1.23.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:dc76bca1ca98f4b122114435f83f1fcf3c0fe48e4e6f660e07996abf2f53903c"},
{file = "numpy-1.23.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:ecfdd68d334a6b97472ed032b5b37a30d8217c097acfff15e8452c710e775524"},
{file = "numpy-1.23.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5593f67e66dea4e237f5af998d31a43e447786b2154ba1ad833676c788f37cde"},
{file = "numpy-1.23.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ac987b35df8c2a2eab495ee206658117e9ce867acf3ccb376a19e83070e69418"},
{file = "numpy-1.23.2-cp311-cp311-win32.whl", hash = "sha256:d98addfd3c8728ee8b2c49126f3c44c703e2b005d4a95998e2167af176a9e722"},
{file = "numpy-1.23.2-cp311-cp311-win_amd64.whl", hash = "sha256:8ecb818231afe5f0f568c81f12ce50f2b828ff2b27487520d85eb44c71313b9e"},
{file = "numpy-1.23.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:909c56c4d4341ec8315291a105169d8aae732cfb4c250fbc375a1efb7a844f8f"},
{file = "numpy-1.23.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:8247f01c4721479e482cc2f9f7d973f3f47810cbc8c65e38fd1bbd3141cc9842"},
{file = "numpy-1.23.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b8b97a8a87cadcd3f94659b4ef6ec056261fa1e1c3317f4193ac231d4df70215"},
{file = "numpy-1.23.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bd5b7ccae24e3d8501ee5563e82febc1771e73bd268eef82a1e8d2b4d556ae66"},
{file = "numpy-1.23.2-cp38-cp38-win32.whl", hash = "sha256:9b83d48e464f393d46e8dd8171687394d39bc5abfe2978896b77dc2604e8635d"},
{file = "numpy-1.23.2-cp38-cp38-win_amd64.whl", hash = "sha256:dec198619b7dbd6db58603cd256e092bcadef22a796f778bf87f8592b468441d"},
{file = "numpy-1.23.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:4f41f5bf20d9a521f8cab3a34557cd77b6f205ab2116651f12959714494268b0"},
{file = "numpy-1.23.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:806cc25d5c43e240db709875e947076b2826f47c2c340a5a2f36da5bb10c58d6"},
{file = "numpy-1.23.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8f9d84a24889ebb4c641a9b99e54adb8cab50972f0166a3abc14c3b93163f074"},
{file = "numpy-1.23.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c403c81bb8ffb1c993d0165a11493fd4bf1353d258f6997b3ee288b0a48fce77"},
{file = "numpy-1.23.2-cp39-cp39-win32.whl", hash = "sha256:cf8c6aed12a935abf2e290860af8e77b26a042eb7f2582ff83dc7ed5f963340c"},
{file = "numpy-1.23.2-cp39-cp39-win_amd64.whl", hash = "sha256:5e28cd64624dc2354a349152599e55308eb6ca95a13ce6a7d5679ebff2962913"},
{file = "numpy-1.23.2-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:806970e69106556d1dd200e26647e9bee5e2b3f1814f9da104a943e8d548ca38"},
{file = "numpy-1.23.2-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2bd879d3ca4b6f39b7770829f73278b7c5e248c91d538aab1e506c628353e47f"},
{file = "numpy-1.23.2-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:be6b350dfbc7f708d9d853663772a9310783ea58f6035eec649fb9c4371b5389"},
{file = "numpy-1.23.2.tar.gz", hash = "sha256:b78d00e48261fbbd04aa0d7427cf78d18401ee0abd89c7559bbf422e5b1c7d01"},
]
oauthlib = [
{file = "oauthlib-3.2.0-py3-none-any.whl", hash = "sha256:6db33440354787f9b7f3a6dbd4febf5d0f93758354060e802f6c06cb493022fe"},
{file = "oauthlib-3.2.0.tar.gz", hash = "sha256:23a8208d75b902797ea29fd31fa80a15ed9dc2c6c16fe73f5d346f83f6fa27a2"},
]
opt-einsum = [
{file = "opt_einsum-3.3.0-py3-none-any.whl", hash = "sha256:2455e59e3947d3c275477df7f5205b30635e266fe6dc300e3d9f9646bfcea147"},
{file = "opt_einsum-3.3.0.tar.gz", hash = "sha256:59f6475f77bbc37dcf7cd748519c0ec60722e91e63ca114e68821c0c54a46549"},
]
packaging = [
{file = "packaging-21.3-py3-none-any.whl", hash = "sha256:ef103e05f519cdc783ae24ea4e2e0f508a9c99b2d4969652eed6a2e1ea5bd522"},
{file = "packaging-21.3.tar.gz", hash = "sha256:dd47c42927d89ab911e606518907cc2d3a1f38bbd026385970643f9c5b8ecfeb"},
]
pandas = [
{file = "pandas-1.4.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:d51674ed8e2551ef7773820ef5dab9322be0828629f2cbf8d1fc31a0c4fed640"},
{file = "pandas-1.4.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:16ad23db55efcc93fa878f7837267973b61ea85d244fc5ff0ccbcfa5638706c5"},
{file = "pandas-1.4.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:958a0588149190c22cdebbc0797e01972950c927a11a900fe6c2296f207b1d6f"},
{file = "pandas-1.4.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e48fbb64165cda451c06a0f9e4c7a16b534fcabd32546d531b3c240ce2844112"},
{file = "pandas-1.4.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6f803320c9da732cc79210d7e8cc5c8019aad512589c910c66529eb1b1818230"},
{file = "pandas-1.4.3-cp310-cp310-win_amd64.whl", hash = "sha256:2893e923472a5e090c2d5e8db83e8f907364ec048572084c7d10ef93546be6d1"},
{file = "pandas-1.4.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:24ea75f47bbd5574675dae21d51779a4948715416413b30614c1e8b480909f81"},
{file = "pandas-1.4.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:d5ebc990bd34f4ac3c73a2724c2dcc9ee7bf1ce6cf08e87bb25c6ad33507e318"},
{file = "pandas-1.4.3-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:d6c0106415ff1a10c326c49bc5dd9ea8b9897a6ca0c8688eb9c30ddec49535ef"},
{file = "pandas-1.4.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:78b00429161ccb0da252229bcda8010b445c4bf924e721265bec5a6e96a92e92"},
{file = "pandas-1.4.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6dfbf16b1ea4f4d0ee11084d9c026340514d1d30270eaa82a9f1297b6c8ecbf0"},
{file = "pandas-1.4.3-cp38-cp38-win32.whl", hash = "sha256:48350592665ea3cbcd07efc8c12ff12d89be09cd47231c7925e3b8afada9d50d"},
{file = "pandas-1.4.3-cp38-cp38-win_amd64.whl", hash = "sha256:605d572126eb4ab2eadf5c59d5d69f0608df2bf7bcad5c5880a47a20a0699e3e"},
{file = "pandas-1.4.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:a3924692160e3d847e18702bb048dc38e0e13411d2b503fecb1adf0fcf950ba4"},
{file = "pandas-1.4.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:07238a58d7cbc8a004855ade7b75bbd22c0db4b0ffccc721556bab8a095515f6"},
{file = "pandas-1.4.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:755679c49460bd0d2f837ab99f0a26948e68fa0718b7e42afbabd074d945bf84"},
{file = "pandas-1.4.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:41fc406e374590a3d492325b889a2686b31e7a7780bec83db2512988550dadbf"},
{file = "pandas-1.4.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1d9382f72a4f0e93909feece6fef5500e838ce1c355a581b3d8f259839f2ea76"},
{file = "pandas-1.4.3-cp39-cp39-win32.whl", hash = "sha256:0daf876dba6c622154b2e6741f29e87161f844e64f84801554f879d27ba63c0d"},
{file = "pandas-1.4.3-cp39-cp39-win_amd64.whl", hash = "sha256:721a3dd2f06ef942f83a819c0f3f6a648b2830b191a72bbe9451bcd49c3bd42e"},
{file = "pandas-1.4.3.tar.gz", hash = "sha256:2ff7788468e75917574f080cd4681b27e1a7bf36461fe968b49a87b5a54d007c"},
]
pandoc = [
{file = "pandoc-2.2.tar.gz", hash = "sha256:d063ee252f2761010f16cf3a14912ad9245187c24c56f4b1ad9696e104fe6e1e"},
]
pandocfilters = [
{file = "pandocfilters-1.5.0-py2.py3-none-any.whl", hash = "sha256:33aae3f25fd1a026079f5d27bdd52496f0e0803b3469282162bafdcbdf6ef14f"},
{file = "pandocfilters-1.5.0.tar.gz", hash = "sha256:0b679503337d233b4339a817bfc8c50064e2eff681314376a47cb582305a7a38"},
]
parso = [
{file = "parso-0.8.3-py2.py3-none-any.whl", hash = "sha256:c001d4636cd3aecdaf33cbb40aebb59b094be2a74c556778ef5576c175e19e75"},
{file = "parso-0.8.3.tar.gz", hash = "sha256:8c07be290bb59f03588915921e29e8a50002acaf2cdc5fa0e0114f91709fafa0"},
]
pastel = [
{file = "pastel-0.2.1-py2.py3-none-any.whl", hash = "sha256:4349225fcdf6c2bb34d483e523475de5bb04a5c10ef711263452cb37d7dd4364"},
{file = "pastel-0.2.1.tar.gz", hash = "sha256:e6581ac04e973cac858828c6202c1e1e81fee1dc7de7683f3e1ffe0bfd8a573d"},
]
pathspec = [
{file = "pathspec-0.9.0-py2.py3-none-any.whl", hash = "sha256:7d15c4ddb0b5c802d161efc417ec1a2558ea2653c2e8ad9c19098201dc1c993a"},
{file = "pathspec-0.9.0.tar.gz", hash = "sha256:e564499435a2673d586f6b2130bb5b95f04a3ba06f81b8f895b651a3c76aabb1"},
]
patsy = [
{file = "patsy-0.5.2-py2.py3-none-any.whl", hash = "sha256:cc80955ae8c13a7e7c4051eda7b277c8f909f50bc7d73e124bc38e2ee3d95041"},
{file = "patsy-0.5.2.tar.gz", hash = "sha256:5053de7804676aba62783dbb0f23a2b3d74e35e5bfa238b88b7cbf148a38b69d"},
]
pexpect = [
{file = "pexpect-4.8.0-py2.py3-none-any.whl", hash = "sha256:0b48a55dcb3c05f3329815901ea4fc1537514d6ba867a152b581d69ae3710937"},
{file = "pexpect-4.8.0.tar.gz", hash = "sha256:fc65a43959d153d0114afe13997d439c22823a27cefceb5ff35c2178c6784c0c"},
]
pickleshare = [
{file = "pickleshare-0.7.5-py2.py3-none-any.whl", hash = "sha256:9649af414d74d4df115d5d718f82acb59c9d418196b7b4290ed47a12ce62df56"},
{file = "pickleshare-0.7.5.tar.gz", hash = "sha256:87683d47965c1da65cdacaf31c8441d12b8044cdec9aca500cd78fc2c683afca"},
]
pillow = [
{file = "Pillow-9.2.0-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:a9c9bc489f8ab30906d7a85afac4b4944a572a7432e00698a7239f44a44e6efb"},
{file = "Pillow-9.2.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:510cef4a3f401c246cfd8227b300828715dd055463cdca6176c2e4036df8bd4f"},
{file = "Pillow-9.2.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7888310f6214f19ab2b6df90f3f06afa3df7ef7355fc025e78a3044737fab1f5"},
{file = "Pillow-9.2.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:831e648102c82f152e14c1a0938689dbb22480c548c8d4b8b248b3e50967b88c"},
{file = "Pillow-9.2.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1cc1d2451e8a3b4bfdb9caf745b58e6c7a77d2e469159b0d527a4554d73694d1"},
{file = "Pillow-9.2.0-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:136659638f61a251e8ed3b331fc6ccd124590eeff539de57c5f80ef3a9594e58"},
{file = "Pillow-9.2.0-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:6e8c66f70fb539301e064f6478d7453e820d8a2c631da948a23384865cd95544"},
{file = "Pillow-9.2.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:37ff6b522a26d0538b753f0b4e8e164fdada12db6c6f00f62145d732d8a3152e"},
{file = "Pillow-9.2.0-cp310-cp310-win32.whl", hash = "sha256:c79698d4cd9318d9481d89a77e2d3fcaeff5486be641e60a4b49f3d2ecca4e28"},
{file = "Pillow-9.2.0-cp310-cp310-win_amd64.whl", hash = "sha256:254164c57bab4b459f14c64e93df11eff5ded575192c294a0c49270f22c5d93d"},
{file = "Pillow-9.2.0-cp311-cp311-macosx_10_10_universal2.whl", hash = "sha256:408673ed75594933714482501fe97e055a42996087eeca7e5d06e33218d05aa8"},
{file = "Pillow-9.2.0-cp311-cp311-macosx_11_0_universal2.whl", hash = "sha256:727dd1389bc5cb9827cbd1f9d40d2c2a1a0c9b32dd2261db522d22a604a6eec9"},
{file = "Pillow-9.2.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:50dff9cc21826d2977ef2d2a205504034e3a4563ca6f5db739b0d1026658e004"},
{file = "Pillow-9.2.0-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cb6259196a589123d755380b65127ddc60f4c64b21fc3bb46ce3a6ea663659b0"},
{file = "Pillow-9.2.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7b0554af24df2bf96618dac71ddada02420f946be943b181108cac55a7a2dcd4"},
{file = "Pillow-9.2.0-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:15928f824870535c85dbf949c09d6ae7d3d6ac2d6efec80f3227f73eefba741c"},
{file = "Pillow-9.2.0-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:bdd0de2d64688ecae88dd8935012c4a72681e5df632af903a1dca8c5e7aa871a"},
{file = "Pillow-9.2.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:d5b87da55a08acb586bad5c3aa3b86505f559b84f39035b233d5bf844b0834b1"},
{file = "Pillow-9.2.0-cp311-cp311-win32.whl", hash = "sha256:b6d5e92df2b77665e07ddb2e4dbd6d644b78e4c0d2e9272a852627cdba0d75cf"},
{file = "Pillow-9.2.0-cp311-cp311-win_amd64.whl", hash = "sha256:6bf088c1ce160f50ea40764f825ec9b72ed9da25346216b91361eef8ad1b8f8c"},
{file = "Pillow-9.2.0-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:2c58b24e3a63efd22554c676d81b0e57f80e0a7d3a5874a7e14ce90ec40d3069"},
{file = "Pillow-9.2.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:eef7592281f7c174d3d6cbfbb7ee5984a671fcd77e3fc78e973d492e9bf0eb3f"},
{file = "Pillow-9.2.0-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:dcd7b9c7139dc8258d164b55696ecd16c04607f1cc33ba7af86613881ffe4ac8"},
{file = "Pillow-9.2.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a138441e95562b3c078746a22f8fca8ff1c22c014f856278bdbdd89ca36cff1b"},
{file = "Pillow-9.2.0-cp37-cp37m-manylinux_2_28_aarch64.whl", hash = "sha256:93689632949aff41199090eff5474f3990b6823404e45d66a5d44304e9cdc467"},
{file = "Pillow-9.2.0-cp37-cp37m-manylinux_2_28_x86_64.whl", hash = "sha256:f3fac744f9b540148fa7715a435d2283b71f68bfb6d4aae24482a890aed18b59"},
{file = "Pillow-9.2.0-cp37-cp37m-win32.whl", hash = "sha256:fa768eff5f9f958270b081bb33581b4b569faabf8774726b283edb06617101dc"},
{file = "Pillow-9.2.0-cp37-cp37m-win_amd64.whl", hash = "sha256:69bd1a15d7ba3694631e00df8de65a8cb031911ca11f44929c97fe05eb9b6c1d"},
{file = "Pillow-9.2.0-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:030e3460861488e249731c3e7ab59b07c7853838ff3b8e16aac9561bb345da14"},
{file = "Pillow-9.2.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:74a04183e6e64930b667d321524e3c5361094bb4af9083db5c301db64cd341f3"},
{file = "Pillow-9.2.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2d33a11f601213dcd5718109c09a52c2a1c893e7461f0be2d6febc2879ec2402"},
{file = "Pillow-9.2.0-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1fd6f5e3c0e4697fa7eb45b6e93996299f3feee73a3175fa451f49a74d092b9f"},
{file = "Pillow-9.2.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a647c0d4478b995c5e54615a2e5360ccedd2f85e70ab57fbe817ca613d5e63b8"},
{file = "Pillow-9.2.0-cp38-cp38-manylinux_2_28_aarch64.whl", hash = "sha256:4134d3f1ba5f15027ff5c04296f13328fecd46921424084516bdb1b2548e66ff"},
{file = "Pillow-9.2.0-cp38-cp38-manylinux_2_28_x86_64.whl", hash = "sha256:bc431b065722a5ad1dfb4df354fb9333b7a582a5ee39a90e6ffff688d72f27a1"},
{file = "Pillow-9.2.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:1536ad017a9f789430fb6b8be8bf99d2f214c76502becc196c6f2d9a75b01b76"},
{file = "Pillow-9.2.0-cp38-cp38-win32.whl", hash = "sha256:2ad0d4df0f5ef2247e27fc790d5c9b5a0af8ade9ba340db4a73bb1a4a3e5fb4f"},
{file = "Pillow-9.2.0-cp38-cp38-win_amd64.whl", hash = "sha256:ec52c351b35ca269cb1f8069d610fc45c5bd38c3e91f9ab4cbbf0aebc136d9c8"},
{file = "Pillow-9.2.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:0ed2c4ef2451de908c90436d6e8092e13a43992f1860275b4d8082667fbb2ffc"},
{file = "Pillow-9.2.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:4ad2f835e0ad81d1689f1b7e3fbac7b01bb8777d5a985c8962bedee0cc6d43da"},
{file = "Pillow-9.2.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ea98f633d45f7e815db648fd7ff0f19e328302ac36427343e4432c84432e7ff4"},
{file = "Pillow-9.2.0-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7761afe0126d046974a01e030ae7529ed0ca6a196de3ec6937c11df0df1bc91c"},
{file = "Pillow-9.2.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9a54614049a18a2d6fe156e68e188da02a046a4a93cf24f373bffd977e943421"},
{file = "Pillow-9.2.0-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:5aed7dde98403cd91d86a1115c78d8145c83078e864c1de1064f52e6feb61b20"},
{file = "Pillow-9.2.0-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:13b725463f32df1bfeacbf3dd197fb358ae8ebcd8c5548faa75126ea425ccb60"},
{file = "Pillow-9.2.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:808add66ea764ed97d44dda1ac4f2cfec4c1867d9efb16a33d158be79f32b8a4"},
{file = "Pillow-9.2.0-cp39-cp39-win32.whl", hash = "sha256:337a74fd2f291c607d220c793a8135273c4c2ab001b03e601c36766005f36885"},
{file = "Pillow-9.2.0-cp39-cp39-win_amd64.whl", hash = "sha256:fac2d65901fb0fdf20363fbd345c01958a742f2dc62a8dd4495af66e3ff502a4"},
{file = "Pillow-9.2.0-pp37-pypy37_pp73-macosx_10_10_x86_64.whl", hash = "sha256:ad2277b185ebce47a63f4dc6302e30f05762b688f8dc3de55dbae4651872cdf3"},
{file = "Pillow-9.2.0-pp37-pypy37_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7c7b502bc34f6e32ba022b4a209638f9e097d7a9098104ae420eb8186217ebbb"},
{file = "Pillow-9.2.0-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3d1f14f5f691f55e1b47f824ca4fdcb4b19b4323fe43cc7bb105988cad7496be"},
{file = "Pillow-9.2.0-pp37-pypy37_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:dfe4c1fedfde4e2fbc009d5ad420647f7730d719786388b7de0999bf32c0d9fd"},
{file = "Pillow-9.2.0-pp38-pypy38_pp73-macosx_10_10_x86_64.whl", hash = "sha256:f07f1f00e22b231dd3d9b9208692042e29792d6bd4f6639415d2f23158a80013"},
{file = "Pillow-9.2.0-pp38-pypy38_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1802f34298f5ba11d55e5bb09c31997dc0c6aed919658dfdf0198a2fe75d5490"},
{file = "Pillow-9.2.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:17d4cafe22f050b46d983b71c707162d63d796a1235cdf8b9d7a112e97b15bac"},
{file = "Pillow-9.2.0-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:96b5e6874431df16aee0c1ba237574cb6dff1dcb173798faa6a9d8b399a05d0e"},
{file = "Pillow-9.2.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:0030fdbd926fb85844b8b92e2f9449ba89607231d3dd597a21ae72dc7fe26927"},
{file = "Pillow-9.2.0.tar.gz", hash = "sha256:75e636fd3e0fb872693f23ccb8a5ff2cd578801251f3a4f6854c6a5d437d3c04"},
]
pkginfo = [
{file = "pkginfo-1.8.3-py2.py3-none-any.whl", hash = "sha256:848865108ec99d4901b2f7e84058b6e7660aae8ae10164e015a6dcf5b242a594"},
{file = "pkginfo-1.8.3.tar.gz", hash = "sha256:a84da4318dd86f870a9447a8c98340aa06216bfc6f2b7bdc4b8766984ae1867c"},
]
pkgutil-resolve-name = [
{file = "pkgutil_resolve_name-1.3.10-py3-none-any.whl", hash = "sha256:ca27cc078d25c5ad71a9de0a7a330146c4e014c2462d9af19c6b828280649c5e"},
{file = "pkgutil_resolve_name-1.3.10.tar.gz", hash = "sha256:357d6c9e6a755653cfd78893817c0853af365dd51ec97f3d358a819373bbd174"},
]
platformdirs = [
{file = "platformdirs-2.5.2-py3-none-any.whl", hash = "sha256:027d8e83a2d7de06bbac4e5ef7e023c02b863d7ea5d079477e722bb41ab25788"},
{file = "platformdirs-2.5.2.tar.gz", hash = "sha256:58c8abb07dcb441e6ee4b11d8df0ac856038f944ab98b7be6b27b2a3c7feef19"},
]
pluggy = [
{file = "pluggy-1.0.0-py2.py3-none-any.whl", hash = "sha256:74134bbf457f031a36d68416e1509f34bd5ccc019f0bcc952c7b909d06b37bd3"},
{file = "pluggy-1.0.0.tar.gz", hash = "sha256:4224373bacce55f955a878bf9cfa763c1e360858e330072059e10bad68531159"},
]
plumbum = [
{file = "plumbum-1.7.2-py2.py3-none-any.whl", hash = "sha256:0bbf431e31da988405de2fb36c3226f09c0c9cdf69c8480f8997f4b94b7370a1"},
{file = "plumbum-1.7.2.tar.gz", hash = "sha256:0d1bf908076bbd0484d16412479cb97d6843069ee19f99e267e11dd980040523"},
]
ply = [
{file = "ply-3.11-py2.py3-none-any.whl", hash = "sha256:096f9b8350b65ebd2fd1346b12452efe5b9607f7482813ffca50c22722a807ce"},
{file = "ply-3.11.tar.gz", hash = "sha256:00c7c1aaa88358b9c765b6d3000c6eec0ba42abca5351b095321aef446081da3"},
]
poethepoet = [
{file = "poethepoet-0.16.0-py3-none-any.whl", hash = "sha256:87482ea8bba4e5db4abbd8e6360baee73b2ce0f3d5f5e99e81cdfa39d72d118f"},
{file = "poethepoet-0.16.0.tar.gz", hash = "sha256:6455aec39f198be92dbf210a4416e1635119e967204c092b431c8b10024db1d1"},
]
progressbar2 = [
{file = "progressbar2-4.0.0-py2.py3-none-any.whl", hash = "sha256:2562ba3e554433f08e81fb7b786208b19de135f3ca1c5da1787d9b05558e6247"},
{file = "progressbar2-4.0.0.tar.gz", hash = "sha256:14d3165a1781d053ffaa117daf27cc706128d2ec1d2977fdb05b6bb079888013"},
]
prometheus-client = [
{file = "prometheus_client-0.14.1-py3-none-any.whl", hash = "sha256:522fded625282822a89e2773452f42df14b5a8e84a86433e3f8a189c1d54dc01"},
{file = "prometheus_client-0.14.1.tar.gz", hash = "sha256:5459c427624961076277fdc6dc50540e2bacb98eebde99886e59ec55ed92093a"},
]
prompt-toolkit = [
{file = "prompt_toolkit-3.0.30-py3-none-any.whl", hash = "sha256:d8916d3f62a7b67ab353a952ce4ced6a1d2587dfe9ef8ebc30dd7c386751f289"},
{file = "prompt_toolkit-3.0.30.tar.gz", hash = "sha256:859b283c50bde45f5f97829f77a4674d1c1fcd88539364f1b28a37805cfd89c0"},
]
protobuf = [
{file = "protobuf-3.19.4-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:f51d5a9f137f7a2cec2d326a74b6e3fc79d635d69ffe1b036d39fc7d75430d37"},
{file = "protobuf-3.19.4-cp310-cp310-manylinux2014_aarch64.whl", hash = "sha256:09297b7972da685ce269ec52af761743714996b4381c085205914c41fcab59fb"},
{file = "protobuf-3.19.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:072fbc78d705d3edc7ccac58a62c4c8e0cec856987da7df8aca86e647be4e35c"},
{file = "protobuf-3.19.4-cp310-cp310-win32.whl", hash = "sha256:7bb03bc2873a2842e5ebb4801f5c7ff1bfbdf426f85d0172f7644fcda0671ae0"},
{file = "protobuf-3.19.4-cp310-cp310-win_amd64.whl", hash = "sha256:f358aa33e03b7a84e0d91270a4d4d8f5df6921abe99a377828839e8ed0c04e07"},
{file = "protobuf-3.19.4-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:1c91ef4110fdd2c590effb5dca8fdbdcb3bf563eece99287019c4204f53d81a4"},
{file = "protobuf-3.19.4-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c438268eebb8cf039552897d78f402d734a404f1360592fef55297285f7f953f"},
{file = "protobuf-3.19.4-cp36-cp36m-win32.whl", hash = "sha256:835a9c949dc193953c319603b2961c5c8f4327957fe23d914ca80d982665e8ee"},
{file = "protobuf-3.19.4-cp36-cp36m-win_amd64.whl", hash = "sha256:4276cdec4447bd5015453e41bdc0c0c1234eda08420b7c9a18b8d647add51e4b"},
{file = "protobuf-3.19.4-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:6cbc312be5e71869d9d5ea25147cdf652a6781cf4d906497ca7690b7b9b5df13"},
{file = "protobuf-3.19.4-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:54a1473077f3b616779ce31f477351a45b4fef8c9fd7892d6d87e287a38df368"},
{file = "protobuf-3.19.4-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:435bb78b37fc386f9275a7035fe4fb1364484e38980d0dd91bc834a02c5ec909"},
{file = "protobuf-3.19.4-cp37-cp37m-win32.whl", hash = "sha256:16f519de1313f1b7139ad70772e7db515b1420d208cb16c6d7858ea989fc64a9"},
{file = "protobuf-3.19.4-cp37-cp37m-win_amd64.whl", hash = "sha256:cdc076c03381f5c1d9bb1abdcc5503d9ca8b53cf0a9d31a9f6754ec9e6c8af0f"},
{file = "protobuf-3.19.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:69da7d39e39942bd52848438462674c463e23963a1fdaa84d88df7fbd7e749b2"},
{file = "protobuf-3.19.4-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:48ed3877fa43e22bcacc852ca76d4775741f9709dd9575881a373bd3e85e54b2"},
{file = "protobuf-3.19.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bd95d1dfb9c4f4563e6093a9aa19d9c186bf98fa54da5252531cc0d3a07977e7"},
{file = "protobuf-3.19.4-cp38-cp38-win32.whl", hash = "sha256:b38057450a0c566cbd04890a40edf916db890f2818e8682221611d78dc32ae26"},
{file = "protobuf-3.19.4-cp38-cp38-win_amd64.whl", hash = "sha256:7ca7da9c339ca8890d66958f5462beabd611eca6c958691a8fe6eccbd1eb0c6e"},
{file = "protobuf-3.19.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:36cecbabbda242915529b8ff364f2263cd4de7c46bbe361418b5ed859677ba58"},
{file = "protobuf-3.19.4-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:c1068287025f8ea025103e37d62ffd63fec8e9e636246b89c341aeda8a67c934"},
{file = "protobuf-3.19.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:96bd766831596d6014ca88d86dc8fe0fb2e428c0b02432fd9db3943202bf8c5e"},
{file = "protobuf-3.19.4-cp39-cp39-win32.whl", hash = "sha256:84123274d982b9e248a143dadd1b9815049f4477dc783bf84efe6250eb4b836a"},
{file = "protobuf-3.19.4-cp39-cp39-win_amd64.whl", hash = "sha256:3112b58aac3bac9c8be2b60a9daf6b558ca3f7681c130dcdd788ade7c9ffbdca"},
{file = "protobuf-3.19.4-py2.py3-none-any.whl", hash = "sha256:8961c3a78ebfcd000920c9060a262f082f29838682b1f7201889300c1fbe0616"},
{file = "protobuf-3.19.4.tar.gz", hash = "sha256:9df0c10adf3e83015ced42a9a7bd64e13d06c4cf45c340d2c63020ea04499d0a"},
]
psutil = [
{file = "psutil-5.9.1-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:799759d809c31aab5fe4579e50addf84565e71c1dc9f1c31258f159ff70d3f87"},
{file = "psutil-5.9.1-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:9272167b5f5fbfe16945be3db475b3ce8d792386907e673a209da686176552af"},
{file = "psutil-5.9.1-cp27-cp27m-win32.whl", hash = "sha256:0904727e0b0a038830b019551cf3204dd48ef5c6868adc776e06e93d615fc5fc"},
{file = "psutil-5.9.1-cp27-cp27m-win_amd64.whl", hash = "sha256:e7e10454cb1ab62cc6ce776e1c135a64045a11ec4c6d254d3f7689c16eb3efd2"},
{file = "psutil-5.9.1-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:56960b9e8edcca1456f8c86a196f0c3d8e3e361320071c93378d41445ffd28b0"},
{file = "psutil-5.9.1-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:44d1826150d49ffd62035785a9e2c56afcea66e55b43b8b630d7706276e87f22"},
{file = "psutil-5.9.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:c7be9d7f5b0d206f0bbc3794b8e16fb7dbc53ec9e40bbe8787c6f2d38efcf6c9"},
{file = "psutil-5.9.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:abd9246e4cdd5b554a2ddd97c157e292ac11ef3e7af25ac56b08b455c829dca8"},
{file = "psutil-5.9.1-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:29a442e25fab1f4d05e2655bb1b8ab6887981838d22effa2396d584b740194de"},
{file = "psutil-5.9.1-cp310-cp310-win32.whl", hash = "sha256:20b27771b077dcaa0de1de3ad52d22538fe101f9946d6dc7869e6f694f079329"},
{file = "psutil-5.9.1-cp310-cp310-win_amd64.whl", hash = "sha256:58678bbadae12e0db55186dc58f2888839228ac9f41cc7848853539b70490021"},
{file = "psutil-5.9.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:3a76ad658641172d9c6e593de6fe248ddde825b5866464c3b2ee26c35da9d237"},
{file = "psutil-5.9.1-cp36-cp36m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a6a11e48cb93a5fa606306493f439b4aa7c56cb03fc9ace7f6bfa21aaf07c453"},
{file = "psutil-5.9.1-cp36-cp36m-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:068935df39055bf27a29824b95c801c7a5130f118b806eee663cad28dca97685"},
{file = "psutil-5.9.1-cp36-cp36m-win32.whl", hash = "sha256:0f15a19a05f39a09327345bc279c1ba4a8cfb0172cc0d3c7f7d16c813b2e7d36"},
{file = "psutil-5.9.1-cp36-cp36m-win_amd64.whl", hash = "sha256:db417f0865f90bdc07fa30e1aadc69b6f4cad7f86324b02aa842034efe8d8c4d"},
{file = "psutil-5.9.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:91c7ff2a40c373d0cc9121d54bc5f31c4fa09c346528e6a08d1845bce5771ffc"},
{file = "psutil-5.9.1-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fea896b54f3a4ae6f790ac1d017101252c93f6fe075d0e7571543510f11d2676"},
{file = "psutil-5.9.1-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3054e923204b8e9c23a55b23b6df73a8089ae1d075cb0bf711d3e9da1724ded4"},
{file = "psutil-5.9.1-cp37-cp37m-win32.whl", hash = "sha256:d2d006286fbcb60f0b391741f520862e9b69f4019b4d738a2a45728c7e952f1b"},
{file = "psutil-5.9.1-cp37-cp37m-win_amd64.whl", hash = "sha256:b14ee12da9338f5e5b3a3ef7ca58b3cba30f5b66f7662159762932e6d0b8f680"},
{file = "psutil-5.9.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:19f36c16012ba9cfc742604df189f2f28d2720e23ff7d1e81602dbe066be9fd1"},
{file = "psutil-5.9.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:944c4b4b82dc4a1b805329c980f270f170fdc9945464223f2ec8e57563139cf4"},
{file = "psutil-5.9.1-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4b6750a73a9c4a4e689490ccb862d53c7b976a2a35c4e1846d049dcc3f17d83b"},
{file = "psutil-5.9.1-cp38-cp38-win32.whl", hash = "sha256:a8746bfe4e8f659528c5c7e9af5090c5a7d252f32b2e859c584ef7d8efb1e689"},
{file = "psutil-5.9.1-cp38-cp38-win_amd64.whl", hash = "sha256:79c9108d9aa7fa6fba6e668b61b82facc067a6b81517cab34d07a84aa89f3df0"},
{file = "psutil-5.9.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:28976df6c64ddd6320d281128817f32c29b539a52bdae5e192537bc338a9ec81"},
{file = "psutil-5.9.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b88f75005586131276634027f4219d06e0561292be8bd6bc7f2f00bdabd63c4e"},
{file = "psutil-5.9.1-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:645bd4f7bb5b8633803e0b6746ff1628724668681a434482546887d22c7a9537"},
{file = "psutil-5.9.1-cp39-cp39-win32.whl", hash = "sha256:32c52611756096ae91f5d1499fe6c53b86f4a9ada147ee42db4991ba1520e574"},
{file = "psutil-5.9.1-cp39-cp39-win_amd64.whl", hash = "sha256:f65f9a46d984b8cd9b3750c2bdb419b2996895b005aefa6cbaba9a143b1ce2c5"},
{file = "psutil-5.9.1.tar.gz", hash = "sha256:57f1819b5d9e95cdfb0c881a8a5b7d542ed0b7c522d575706a80bedc848c8954"},
]
ptyprocess = [
{file = "ptyprocess-0.7.0-py2.py3-none-any.whl", hash = "sha256:4b41f3967fce3af57cc7e94b888626c18bf37a083e3651ca8feeb66d492fef35"},
{file = "ptyprocess-0.7.0.tar.gz", hash = "sha256:5c5d0a3b48ceee0b48485e0c26037c0acd7d29765ca3fbb5cb3831d347423220"},
]
pure-eval = [
{file = "pure_eval-0.2.2-py3-none-any.whl", hash = "sha256:01eaab343580944bc56080ebe0a674b39ec44a945e6d09ba7db3cb8cec289350"},
{file = "pure_eval-0.2.2.tar.gz", hash = "sha256:2b45320af6dfaa1750f543d714b6d1c520a1688dec6fd24d339063ce0aaa9ac3"},
]
py = [
{file = "py-1.11.0-py2.py3-none-any.whl", hash = "sha256:607c53218732647dff4acdfcd50cb62615cedf612e72d1724fb1a0cc6405b378"},
{file = "py-1.11.0.tar.gz", hash = "sha256:51c75c4126074b472f746a24399ad32f6053d1b34b68d2fa41e558e6f4a98719"},
]
pyasn1 = [
{file = "pyasn1-0.4.8-py2.4.egg", hash = "sha256:fec3e9d8e36808a28efb59b489e4528c10ad0f480e57dcc32b4de5c9d8c9fdf3"},
{file = "pyasn1-0.4.8-py2.5.egg", hash = "sha256:0458773cfe65b153891ac249bcf1b5f8f320b7c2ce462151f8fa74de8934becf"},
{file = "pyasn1-0.4.8-py2.6.egg", hash = "sha256:5c9414dcfede6e441f7e8f81b43b34e834731003427e5b09e4e00e3172a10f00"},
{file = "pyasn1-0.4.8-py2.7.egg", hash = "sha256:6e7545f1a61025a4e58bb336952c5061697da694db1cae97b116e9c46abcf7c8"},
{file = "pyasn1-0.4.8-py2.py3-none-any.whl", hash = "sha256:39c7e2ec30515947ff4e87fb6f456dfc6e84857d34be479c9d4a4ba4bf46aa5d"},
{file = "pyasn1-0.4.8-py3.1.egg", hash = "sha256:78fa6da68ed2727915c4767bb386ab32cdba863caa7dbe473eaae45f9959da86"},
{file = "pyasn1-0.4.8-py3.2.egg", hash = "sha256:08c3c53b75eaa48d71cf8c710312316392ed40899cb34710d092e96745a358b7"},
{file = "pyasn1-0.4.8-py3.3.egg", hash = "sha256:03840c999ba71680a131cfaee6fab142e1ed9bbd9c693e285cc6aca0d555e576"},
{file = "pyasn1-0.4.8-py3.4.egg", hash = "sha256:7ab8a544af125fb704feadb008c99a88805126fb525280b2270bb25cc1d78a12"},
{file = "pyasn1-0.4.8-py3.5.egg", hash = "sha256:e89bf84b5437b532b0803ba5c9a5e054d21fec423a89952a74f87fa2c9b7bce2"},
{file = "pyasn1-0.4.8-py3.6.egg", hash = "sha256:014c0e9976956a08139dc0712ae195324a75e142284d5f87f1a87ee1b068a359"},
{file = "pyasn1-0.4.8-py3.7.egg", hash = "sha256:99fcc3c8d804d1bc6d9a099921e39d827026409a58f2a720dcdb89374ea0c776"},
{file = "pyasn1-0.4.8.tar.gz", hash = "sha256:aef77c9fb94a3ac588e87841208bdec464471d9871bd5050a287cc9a475cd0ba"},
]
pyasn1-modules = [
{file = "pyasn1-modules-0.2.8.tar.gz", hash = "sha256:905f84c712230b2c592c19470d3ca8d552de726050d1d1716282a1f6146be65e"},
{file = "pyasn1_modules-0.2.8-py2.4.egg", hash = "sha256:0fe1b68d1e486a1ed5473f1302bd991c1611d319bba158e98b106ff86e1d7199"},
{file = "pyasn1_modules-0.2.8-py2.5.egg", hash = "sha256:fe0644d9ab041506b62782e92b06b8c68cca799e1a9636ec398675459e031405"},
{file = "pyasn1_modules-0.2.8-py2.6.egg", hash = "sha256:a99324196732f53093a84c4369c996713eb8c89d360a496b599fb1a9c47fc3eb"},
{file = "pyasn1_modules-0.2.8-py2.7.egg", hash = "sha256:0845a5582f6a02bb3e1bde9ecfc4bfcae6ec3210dd270522fee602365430c3f8"},
{file = "pyasn1_modules-0.2.8-py2.py3-none-any.whl", hash = "sha256:a50b808ffeb97cb3601dd25981f6b016cbb3d31fbf57a8b8a87428e6158d0c74"},
{file = "pyasn1_modules-0.2.8-py3.1.egg", hash = "sha256:f39edd8c4ecaa4556e989147ebf219227e2cd2e8a43c7e7fcb1f1c18c5fd6a3d"},
{file = "pyasn1_modules-0.2.8-py3.2.egg", hash = "sha256:b80486a6c77252ea3a3e9b1e360bc9cf28eaac41263d173c032581ad2f20fe45"},
{file = "pyasn1_modules-0.2.8-py3.3.egg", hash = "sha256:65cebbaffc913f4fe9e4808735c95ea22d7a7775646ab690518c056784bc21b4"},
{file = "pyasn1_modules-0.2.8-py3.4.egg", hash = "sha256:15b7c67fabc7fc240d87fb9aabf999cf82311a6d6fb2c70d00d3d0604878c811"},
{file = "pyasn1_modules-0.2.8-py3.5.egg", hash = "sha256:426edb7a5e8879f1ec54a1864f16b882c2837bfd06eee62f2c982315ee2473ed"},
{file = "pyasn1_modules-0.2.8-py3.6.egg", hash = "sha256:cbac4bc38d117f2a49aeedec4407d23e8866ea4ac27ff2cf7fb3e5b570df19e0"},
{file = "pyasn1_modules-0.2.8-py3.7.egg", hash = "sha256:c29a5e5cc7a3f05926aff34e097e84f8589cd790ce0ed41b67aed6857b26aafd"},
]
pycodestyle = [
{file = "pycodestyle-2.8.0-py2.py3-none-any.whl", hash = "sha256:720f8b39dde8b293825e7ff02c475f3077124006db4f440dcbc9a20b76548a20"},
{file = "pycodestyle-2.8.0.tar.gz", hash = "sha256:eddd5847ef438ea1c7870ca7eb78a9d47ce0cdb4851a5523949f2601d0cbbe7f"},
]
pycparser = [
{file = "pycparser-2.21-py2.py3-none-any.whl", hash = "sha256:8ee45429555515e1f6b185e78100aea234072576aa43ab53aefcae078162fca9"},
{file = "pycparser-2.21.tar.gz", hash = "sha256:e644fdec12f7872f86c58ff790da456218b10f863970249516d60a5eaca77206"},
]
pydata-sphinx-theme = [
{file = "pydata_sphinx_theme-0.9.0-py3-none-any.whl", hash = "sha256:b22b442a6d6437e5eaf0a1f057169ffcb31eaa9f10be7d5481a125e735c71c12"},
{file = "pydata_sphinx_theme-0.9.0.tar.gz", hash = "sha256:03598a86915b596f4bf80bef79a4d33276a83e670bf360def699dbb9f99dc57a"},
]
pydot = [
{file = "pydot-1.4.2-py2.py3-none-any.whl", hash = "sha256:66c98190c65b8d2e2382a441b4c0edfdb4f4c025ef9cb9874de478fb0793a451"},
{file = "pydot-1.4.2.tar.gz", hash = "sha256:248081a39bcb56784deb018977e428605c1c758f10897a339fce1dd728ff007d"},
]
pydotplus = [
{file = "pydotplus-2.0.2.tar.gz", hash = "sha256:91e85e9ee9b85d2391ead7d635e3d9c7f5f44fd60a60e59b13e2403fa66505c4"},
]
pyflakes = [
{file = "pyflakes-2.4.0-py2.py3-none-any.whl", hash = "sha256:3bb3a3f256f4b7968c9c788781e4ff07dce46bdf12339dcda61053375426ee2e"},
{file = "pyflakes-2.4.0.tar.gz", hash = "sha256:05a85c2872edf37a4ed30b0cce2f6093e1d0581f8c19d7393122da7e25b2b24c"},
]
pygam = [
{file = "pygam-0.8.0-py2.py3-none-any.whl", hash = "sha256:198bd478700520b7c399cc4bcbc011e46850969c32fb09ef0b7a4bbb14e842a5"},
{file = "pygam-0.8.0.tar.gz", hash = "sha256:5cae01aea8b2fede72a6da0aba1490213af54b3476745666af26bbe700479166"},
]
pygments = [
{file = "Pygments-2.13.0-py3-none-any.whl", hash = "sha256:f643f331ab57ba3c9d89212ee4a2dabc6e94f117cf4eefde99a0574720d14c42"},
{file = "Pygments-2.13.0.tar.gz", hash = "sha256:56a8508ae95f98e2b9bdf93a6be5ae3f7d8af858b43e02c5a2ff083726be40c1"},
]
pygraphviz = [
{file = "pygraphviz-1.10.zip", hash = "sha256:457e093a888128903251a266a8cc16b4ba93f3f6334b3ebfed92c7471a74d867"},
]
pyparsing = [
{file = "pyparsing-3.0.9-py3-none-any.whl", hash = "sha256:5026bae9a10eeaefb61dab2f09052b9f4307d44aee4eda64b309723d8d206bbc"},
{file = "pyparsing-3.0.9.tar.gz", hash = "sha256:2b020ecf7d21b687f219b71ecad3631f644a47f01403fa1d1036b0c6416d70fb"},
]
pyro-api = [
{file = "pyro-api-0.1.2.tar.gz", hash = "sha256:a1b900d9580aa1c2fab3b123ab7ff33413744da7c5f440bd4aadc4d40d14d920"},
{file = "pyro_api-0.1.2-py3-none-any.whl", hash = "sha256:10e0e42e9e4401ce464dab79c870e50dfb4f413d326fa777f3582928ef9caf8f"},
]
pyro-ppl = [
{file = "pyro-ppl-1.8.1.tar.gz", hash = "sha256:d7c049eb2e7485a612b4dd99c24c309cc860c7cbc6b1973387034f5436d1c8d6"},
{file = "pyro_ppl-1.8.1-py3-none-any.whl", hash = "sha256:ca01ab4565eb9a1af4a60dbc481da5cb6f5fe5a72efa19e83638e03683efbca6"},
]
pyrsistent = [
{file = "pyrsistent-0.18.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:df46c854f490f81210870e509818b729db4488e1f30f2a1ce1698b2295a878d1"},
{file = "pyrsistent-0.18.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d45866ececf4a5fff8742c25722da6d4c9e180daa7b405dc0a2a2790d668c26"},
{file = "pyrsistent-0.18.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4ed6784ceac462a7d6fcb7e9b663e93b9a6fb373b7f43594f9ff68875788e01e"},
{file = "pyrsistent-0.18.1-cp310-cp310-win32.whl", hash = "sha256:e4f3149fd5eb9b285d6bfb54d2e5173f6a116fe19172686797c056672689daf6"},
{file = "pyrsistent-0.18.1-cp310-cp310-win_amd64.whl", hash = "sha256:636ce2dc235046ccd3d8c56a7ad54e99d5c1cd0ef07d9ae847306c91d11b5fec"},
{file = "pyrsistent-0.18.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:e92a52c166426efbe0d1ec1332ee9119b6d32fc1f0bbfd55d5c1088070e7fc1b"},
{file = "pyrsistent-0.18.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d7a096646eab884bf8bed965bad63ea327e0d0c38989fc83c5ea7b8a87037bfc"},
{file = "pyrsistent-0.18.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cdfd2c361b8a8e5d9499b9082b501c452ade8bbf42aef97ea04854f4a3f43b22"},
{file = "pyrsistent-0.18.1-cp37-cp37m-win32.whl", hash = "sha256:7ec335fc998faa4febe75cc5268a9eac0478b3f681602c1f27befaf2a1abe1d8"},
{file = "pyrsistent-0.18.1-cp37-cp37m-win_amd64.whl", hash = "sha256:6455fc599df93d1f60e1c5c4fe471499f08d190d57eca040c0ea182301321286"},
{file = "pyrsistent-0.18.1-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:fd8da6d0124efa2f67d86fa70c851022f87c98e205f0594e1fae044e7119a5a6"},
{file = "pyrsistent-0.18.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7bfe2388663fd18bd8ce7db2c91c7400bf3e1a9e8bd7d63bf7e77d39051b85ec"},
{file = "pyrsistent-0.18.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0e3e1fcc45199df76053026a51cc59ab2ea3fc7c094c6627e93b7b44cdae2c8c"},
{file = "pyrsistent-0.18.1-cp38-cp38-win32.whl", hash = "sha256:b568f35ad53a7b07ed9b1b2bae09eb15cdd671a5ba5d2c66caee40dbf91c68ca"},
{file = "pyrsistent-0.18.1-cp38-cp38-win_amd64.whl", hash = "sha256:d1b96547410f76078eaf66d282ddca2e4baae8964364abb4f4dcdde855cd123a"},
{file = "pyrsistent-0.18.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:f87cc2863ef33c709e237d4b5f4502a62a00fab450c9e020892e8e2ede5847f5"},
{file = "pyrsistent-0.18.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6bc66318fb7ee012071b2792024564973ecc80e9522842eb4e17743604b5e045"},
{file = "pyrsistent-0.18.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:914474c9f1d93080338ace89cb2acee74f4f666fb0424896fcfb8d86058bf17c"},
{file = "pyrsistent-0.18.1-cp39-cp39-win32.whl", hash = "sha256:1b34eedd6812bf4d33814fca1b66005805d3640ce53140ab8bbb1e2651b0d9bc"},
{file = "pyrsistent-0.18.1-cp39-cp39-win_amd64.whl", hash = "sha256:e24a828f57e0c337c8d8bb9f6b12f09dfdf0273da25fda9e314f0b684b415a07"},
{file = "pyrsistent-0.18.1.tar.gz", hash = "sha256:d4d61f8b993a7255ba714df3aca52700f8125289f84f704cf80916517c46eb96"},
]
pytest = [
{file = "pytest-7.1.2-py3-none-any.whl", hash = "sha256:13d0e3ccfc2b6e26be000cb6568c832ba67ba32e719443bfe725814d3c42433c"},
{file = "pytest-7.1.2.tar.gz", hash = "sha256:a06a0425453864a270bc45e71f783330a7428defb4230fb5e6a731fde06ecd45"},
]
python-dateutil = [
{file = "python-dateutil-2.8.2.tar.gz", hash = "sha256:0123cacc1627ae19ddf3c27a5de5bd67ee4586fbdd6440d9748f8abb483d3e86"},
{file = "python_dateutil-2.8.2-py2.py3-none-any.whl", hash = "sha256:961d03dc3453ebbc59dbdea9e4e11c5651520a876d0f4db161e8674aae935da9"},
]
python-utils = [
{file = "python-utils-3.3.3.tar.gz", hash = "sha256:3b1c8b706e40e91280eec5fa72ea730880a166cee99afa5555b863d55664478d"},
{file = "python_utils-3.3.3-py2.py3-none-any.whl", hash = "sha256:5cb9cf295018202fb4d6abdd694a33a7f08bc0ce1bf1eab8cce80ab2cde35c07"},
]
pytz = [
{file = "pytz-2022.2.1-py2.py3-none-any.whl", hash = "sha256:220f481bdafa09c3955dfbdddb7b57780e9a94f5127e35456a48589b9e0c0197"},
{file = "pytz-2022.2.1.tar.gz", hash = "sha256:cea221417204f2d1a2aa03ddae3e867921971d0d76f14d87abb4414415bbdcf5"},
]
pytz-deprecation-shim = [
{file = "pytz_deprecation_shim-0.1.0.post0-py2.py3-none-any.whl", hash = "sha256:8314c9692a636c8eb3bda879b9f119e350e93223ae83e70e80c31675a0fdc1a6"},
{file = "pytz_deprecation_shim-0.1.0.post0.tar.gz", hash = "sha256:af097bae1b616dde5c5744441e2ddc69e74dfdcb0c263129610d85b87445a59d"},
]
pywin32 = [
{file = "pywin32-304-cp310-cp310-win32.whl", hash = "sha256:3c7bacf5e24298c86314f03fa20e16558a4e4138fc34615d7de4070c23e65af3"},
{file = "pywin32-304-cp310-cp310-win_amd64.whl", hash = "sha256:4f32145913a2447736dad62495199a8e280a77a0ca662daa2332acf849f0be48"},
{file = "pywin32-304-cp310-cp310-win_arm64.whl", hash = "sha256:d3ee45adff48e0551d1aa60d2ec066fec006083b791f5c3527c40cd8aefac71f"},
{file = "pywin32-304-cp311-cp311-win32.whl", hash = "sha256:30c53d6ce44c12a316a06c153ea74152d3b1342610f1b99d40ba2795e5af0269"},
{file = "pywin32-304-cp311-cp311-win_amd64.whl", hash = "sha256:7ffa0c0fa4ae4077e8b8aa73800540ef8c24530057768c3ac57c609f99a14fd4"},
{file = "pywin32-304-cp311-cp311-win_arm64.whl", hash = "sha256:cbbe34dad39bdbaa2889a424d28752f1b4971939b14b1bb48cbf0182a3bcfc43"},
{file = "pywin32-304-cp36-cp36m-win32.whl", hash = "sha256:be253e7b14bc601718f014d2832e4c18a5b023cbe72db826da63df76b77507a1"},
{file = "pywin32-304-cp36-cp36m-win_amd64.whl", hash = "sha256:de9827c23321dcf43d2f288f09f3b6d772fee11e809015bdae9e69fe13213988"},
{file = "pywin32-304-cp37-cp37m-win32.whl", hash = "sha256:f64c0377cf01b61bd5e76c25e1480ca8ab3b73f0c4add50538d332afdf8f69c5"},
{file = "pywin32-304-cp37-cp37m-win_amd64.whl", hash = "sha256:bb2ea2aa81e96eee6a6b79d87e1d1648d3f8b87f9a64499e0b92b30d141e76df"},
{file = "pywin32-304-cp38-cp38-win32.whl", hash = "sha256:94037b5259701988954931333aafd39cf897e990852115656b014ce72e052e96"},
{file = "pywin32-304-cp38-cp38-win_amd64.whl", hash = "sha256:ead865a2e179b30fb717831f73cf4373401fc62fbc3455a0889a7ddac848f83e"},
{file = "pywin32-304-cp39-cp39-win32.whl", hash = "sha256:25746d841201fd9f96b648a248f731c1dec851c9a08b8e33da8b56148e4c65cc"},
{file = "pywin32-304-cp39-cp39-win_amd64.whl", hash = "sha256:d24a3382f013b21aa24a5cfbfad5a2cd9926610c0affde3e8ab5b3d7dbcf4ac9"},
]
pywin32-ctypes = [
{file = "pywin32-ctypes-0.2.0.tar.gz", hash = "sha256:24ffc3b341d457d48e8922352130cf2644024a4ff09762a2261fd34c36ee5942"},
{file = "pywin32_ctypes-0.2.0-py2.py3-none-any.whl", hash = "sha256:9dc2d991b3479cc2df15930958b674a48a227d5361d413827a4cfd0b5876fc98"},
]
pywinpty = [
{file = "pywinpty-2.0.7-cp310-none-win_amd64.whl", hash = "sha256:d56361ed2bd3395347882a7a4e6246359e745a233e89c91786ab3d9421323c17"},
{file = "pywinpty-2.0.7-cp37-none-win_amd64.whl", hash = "sha256:2d62ede3ed10feb0901b3b4667201766a741b6a2c69f27be623ba9fe9348447b"},
{file = "pywinpty-2.0.7-cp38-none-win_amd64.whl", hash = "sha256:c3b7e6a2f0e5f86e0dc5cb5e4fec7de19adacc6900232e4a48a2ecf04bae447f"},
{file = "pywinpty-2.0.7-cp39-none-win_amd64.whl", hash = "sha256:80a6713a586401c2a19efd2969ffd019eb85f18442611a3880e3d618887d2f84"},
{file = "pywinpty-2.0.7.tar.gz", hash = "sha256:f52b2e51c46dac40708ede1d42577f3ddb9d7cf8acaa36c8e27b3d3b975f4c95"},
]
pyzmq = [
{file = "pyzmq-23.2.1-cp310-cp310-macosx_10_15_universal2.whl", hash = "sha256:a3fd44b5046d247e7f0f1660bcafe7b5fb0db55d0934c05dd57dda9e1f823ce7"},
{file = "pyzmq-23.2.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:2141e6798d5981be04c08996d27962086a1aa3ea536fe9cf7e89817fd4523f86"},
{file = "pyzmq-23.2.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9a39ddb0431a68954bd318b923230fa5b649c9c62b0e8340388820c5f1b15bd2"},
{file = "pyzmq-23.2.1-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e06747014a5ad1b28cebf5bc1ddcdaccfb44e9b441d35e6feb1286c8a72e54be"},
{file = "pyzmq-23.2.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7e0113d70b095339e99bb522fe7294f5ae6a7f3b2b8f52f659469a74b5cc7661"},
{file = "pyzmq-23.2.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:71b32a1e827bdcbf73750e60370d3b07685816ff3d8695f450f0f8c3226503f8"},
{file = "pyzmq-23.2.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:55568a020ad2cae9ae36da6058e7ca332a56df968f601cbdb7cf6efb2a77579a"},
{file = "pyzmq-23.2.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:8c02a0cd39dc01659b3d6cb70bb3a41aebd9885fd78239acdd8d9c91351c4568"},
{file = "pyzmq-23.2.1-cp310-cp310-win32.whl", hash = "sha256:e1fe30bcd5aea5948c42685fad910cd285eacb2518ea4dc6c170d6b535bee95d"},
{file = "pyzmq-23.2.1-cp310-cp310-win_amd64.whl", hash = "sha256:650389bbfca73955b262b2230423d89992f38ec48033307ae80e700eaa2fbb63"},
{file = "pyzmq-23.2.1-cp311-cp311-macosx_10_15_universal2.whl", hash = "sha256:e753eee6d3b93c5354e8ba0a1d62956ee49355f0a36e00570823ef64e66183f5"},
{file = "pyzmq-23.2.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:f07016e3cf088dbfc6e7c5a7b3f540db5c23b0190d539e4fd3e2b5e6beffa4b5"},
{file = "pyzmq-23.2.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4805af9614b0b41b7e57d17673459facf85604dac502a5a9244f6e8c9a4de658"},
{file = "pyzmq-23.2.1-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:39dd252b683816935702825e5bf775df16090619ced9bb4ba68c2d0b6f0c9b18"},
{file = "pyzmq-23.2.1-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:84678153432241bcdca2210cf4ff83560b200556867aea913ffbb960f5d5f340"},
{file = "pyzmq-23.2.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:90d88f9d9a2ae6cfb1dc4ea2d1710cdf6456bc1b9a06dd1bb485c5d298f2517e"},
{file = "pyzmq-23.2.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:794871988c34727c7f79bdfe2546e6854ae1fa2e1feb382784f23a9c6c63ecb3"},
{file = "pyzmq-23.2.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:c56b1a62a1fb87565343c57b6743fd5da6e138b8c6562361d7d9b5ce4acf399a"},
{file = "pyzmq-23.2.1-cp311-cp311-win32.whl", hash = "sha256:c3ebf1668664d20c8f7d468955f18379b7d1f7bc8946b13243d050fa3888c7ff"},
{file = "pyzmq-23.2.1-cp311-cp311-win_amd64.whl", hash = "sha256:ec9803aca9491fd6f0d853d2a6147f19f8deaaa23b1b713d05c5d09e56ea7142"},
{file = "pyzmq-23.2.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:385609812eafd9970c3752c51f2f6c4f224807e3e441bcfd8c8273877d00c8a8"},
{file = "pyzmq-23.2.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b861db65f6b8906c8d6db51dde2448f266f0c66bf28db2c37aea50f58a849859"},
{file = "pyzmq-23.2.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:6b1e79bba24f6df1712e3188d5c32c480d8eda03e8ecff44dc8ecb0805fa62f3"},
{file = "pyzmq-23.2.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:8dc66f109a245653b19df0f44a5af7a3f14cb8ad6c780ead506158a057bd36ce"},
{file = "pyzmq-23.2.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:b815991c7d024bf461f358ad871f2be1135576274caed5749c4828859e40354e"},
{file = "pyzmq-23.2.1-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:29b74774a0bfd3c4d98ac853f0bdca55bd9ec89d5b0def5486407cca54472ef8"},
{file = "pyzmq-23.2.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:4bb798bef181648827019001f6be43e1c48b34b477763b37a8d27d8c06d197b8"},
{file = "pyzmq-23.2.1-cp36-cp36m-win32.whl", hash = "sha256:565bd5ab81f6964fc4067ccf2e00877ad0fa917308975694bbb54378389215f8"},
{file = "pyzmq-23.2.1-cp36-cp36m-win_amd64.whl", hash = "sha256:1f368a82b29f80071781b20663c0fc0c8f6b13273f9f5abe1526af939534f90f"},
{file = "pyzmq-23.2.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:c9cfaf530e6a7ff65f0afe275e99f983f68b54dfb23ea401f0bc297a632766b6"},
{file = "pyzmq-23.2.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5c558b50402fca1acc94329c5d8f12aa429738904a5cfb32b9ed3c61235221bb"},
{file = "pyzmq-23.2.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:20bafc4095eab00f41a510579363a3f5e1f5c69d7ee10f1d88895c4df0259183"},
{file = "pyzmq-23.2.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:f619fd38fc2641abfb53cca719c165182500600b82c695cc548a0f05f764be05"},
{file = "pyzmq-23.2.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:044447ae4b2016a6b8697571fd633f799f860b19b76c4a2fd9b1140d52ee6745"},
{file = "pyzmq-23.2.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:49d30ba7074f469e8167917abf9eb854c6503ae10153034a6d4df33618f1db5f"},
{file = "pyzmq-23.2.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:48400b96788cdaca647021bf19a9cd668384f46e4d9c55cf045bdd17f65299c8"},
{file = "pyzmq-23.2.1-cp37-cp37m-win32.whl", hash = "sha256:8a68f57b7a3f7b6b52ada79876be1efb97c8c0952423436e84d70cc139f16f0d"},
{file = "pyzmq-23.2.1-cp37-cp37m-win_amd64.whl", hash = "sha256:9e5bf6e7239fc9687239de7a283aa8b801ab85371116045b33ae20132a1325d6"},
{file = "pyzmq-23.2.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:ffc6b1623d0f9affb351db4ca61f432dca3628a5ee015f9bf2bfbe9c6836881c"},
{file = "pyzmq-23.2.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:4d6f110c56f7d5b4d64dde3a382ae61b6d48174e30742859d8e971b18b6c9e5c"},
{file = "pyzmq-23.2.1-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:9269fbfe3a4eb2009199120861c4571ef1655fdf6951c3e7f233567c94e8c602"},
{file = "pyzmq-23.2.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:12e62ff0d5223ec09b597ab6d73858b9f64a51221399f3cb08aa495e1dff7935"},
{file = "pyzmq-23.2.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:6fd5d0d50cbcf4bc376861529a907bed026a4cbe8c22a500ff8243231ef02433"},
{file = "pyzmq-23.2.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:9d0ab2936085c85a1fc6f9fd8f89d5235ae99b051e90ec5baa5e73ad44346e1f"},
{file = "pyzmq-23.2.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:022cf5ea7bcaa8a06a03c2706e0ae66904b6138b2155577cd34c64bc7cc637ab"},
{file = "pyzmq-23.2.1-cp38-cp38-win32.whl", hash = "sha256:28dbdb90b2f6b131f8f10e6081012e4e25234213433420e67e0c1162de537113"},
{file = "pyzmq-23.2.1-cp38-cp38-win_amd64.whl", hash = "sha256:10d1910ec381b851aeb024a042a13db178cb1edf125e76a4e9d2548ad103aadb"},
{file = "pyzmq-23.2.1-cp39-cp39-macosx_10_15_universal2.whl", hash = "sha256:99a5a77a10863493a1ee8dece02578c6b32025fb3afff91b40476bc489e81648"},
{file = "pyzmq-23.2.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:aecd6ceaccc4b594e0092d6513ef3f1c0fa678dd89f86bb8ff1a47014b8fca35"},
{file = "pyzmq-23.2.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:415ff62ac525d9add1e3550430a09b9928d2d24a20cc4ce809e67caac41219ab"},
{file = "pyzmq-23.2.1-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:67975a9e1237b9ccc78f457bef17691bbdd2055a9d26e81ee914ba376846d0ce"},
{file = "pyzmq-23.2.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:38e106b64bad744fe469dc3dd864f2764d66399178c1bf39d45294cc7980f14f"},
{file = "pyzmq-23.2.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:8c842109d31a9281d678f668629241c405928afbebd913c48a5a8e7aee61f63d"},
{file = "pyzmq-23.2.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:fefdf9b685fda4141b95ebec975946076a5e0723ff70b037032b2085c5317684"},
{file = "pyzmq-23.2.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:79a87831b47a9f6161ad23fa5e89d5469dc585abc49f90b9b07fea8905ae1234"},
{file = "pyzmq-23.2.1-cp39-cp39-win32.whl", hash = "sha256:342ca3077f47ec2ee41b9825142b614e03e026347167cbc72a59b618c4f6106c"},
{file = "pyzmq-23.2.1-cp39-cp39-win_amd64.whl", hash = "sha256:5e05492be125dce279721d6b54fd1b956546ecc4bcdfcf8e7b4c413bc0874c10"},
{file = "pyzmq-23.2.1-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:07ed8aaf7ffe150af873269690cc654ffeca7491f62aae0f3821baa181f8d5fe"},
{file = "pyzmq-23.2.1-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:ad28ddb40db8e450d7d4bf8a1d765d3f87b63b10e7e9a825a3c130c6371a8c03"},
{file = "pyzmq-23.2.1-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:2f67b63f53c6994d601404fd1a329e6d940ac3dd1d92946a93b2b9c70df67b9f"},
{file = "pyzmq-23.2.1-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c890309296f53f9aa32ffcfc51d805705e1982bffd27c9692a8f1e1b8de279f4"},
{file = "pyzmq-23.2.1-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:624fd38071a817644acdae075b92a23ea0bdd126a58148288e8284d23ec361ce"},
{file = "pyzmq-23.2.1-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:a114992a193577cb62233abf8cb2832970f9975805a64740e325d2f895e7f85a"},
{file = "pyzmq-23.2.1-pp38-pypy38_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:c780acddd2934c6831ff832ecbf78a45a7b62d4eb216480f863854a8b7d54fa7"},
{file = "pyzmq-23.2.1-pp38-pypy38_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:d904f6595acfaaf99a1a61881fea068500c40374d263e5e073aa4005e5f9c28a"},
{file = "pyzmq-23.2.1-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:929d548b74c0f82f7f95b54e4a43f9e4ce2523cfb8a54d3f7141e45652304b2a"},
{file = "pyzmq-23.2.1-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:f392cbea531b7142d1958c0d4a0c9c8d760dc451e5848d8dd3387804d3e3e62c"},
{file = "pyzmq-23.2.1-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:a0f09d85c45f58aa8e715b42f8b26beba68b3b63a8f7049113478aca26efbc30"},
{file = "pyzmq-23.2.1-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:23e708fbfdf4ee3107422b69ca65da1b9f056b431fc0888096a8c1d6cd908e8f"},
{file = "pyzmq-23.2.1-pp39-pypy39_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:35e635343ff367f697d00fa1484262bb68e36bc74c9b80737eac5a1e04c4e1b1"},
{file = "pyzmq-23.2.1-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:efb9e38b2a590282704269585de7eb33bf43dc294cad092e1b172e23d4c217e5"},
{file = "pyzmq-23.2.1-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:407f909c4e8fde62fbdad9ebd448319792258cc0550c2815567a4d9d8d9e6d18"},
{file = "pyzmq-23.2.1.tar.gz", hash = "sha256:2b381aa867ece7d0a82f30a0c7f3d4387b7cf2e0697e33efaa5bed6c5784abcd"},
]
qtconsole = [
{file = "qtconsole-5.3.1-py3-none-any.whl", hash = "sha256:d364592d7ede3257f1e17fcdbfd339c26e2cc638ca4fa4ee56a724e26ea13c81"},
{file = "qtconsole-5.3.1.tar.gz", hash = "sha256:b73723fac43938b684dcb237a88510dc7721c43a726cea8ade179a2927c0a2f3"},
]
qtpy = [
{file = "QtPy-2.2.0-py3-none-any.whl", hash = "sha256:d283cfba378b0dbe36a55b68aea8ee2f86cd6ccf06c023af25bbe705ffbb29e5"},
{file = "QtPy-2.2.0.tar.gz", hash = "sha256:d85f1b121f24a41ad26c55c446e66abdb7c528839f8c4f11f156ec4541903914"},
]
readme-renderer = [
{file = "readme_renderer-36.0-py3-none-any.whl", hash = "sha256:2c37e472ca96755caba6cc58bcbf673a5574bc033385a2ac91d85dfef2799876"},
{file = "readme_renderer-36.0.tar.gz", hash = "sha256:f71aeef9a588fcbed1f4cc001ba611370e94a0cd27c75b1140537618ec78f0a2"},
]
requests = [
{file = "requests-2.28.1-py3-none-any.whl", hash = "sha256:8fefa2a1a1365bf5520aac41836fbee479da67864514bdb821f31ce07ce65349"},
{file = "requests-2.28.1.tar.gz", hash = "sha256:7c5599b102feddaa661c826c56ab4fee28bfd17f5abca1ebbe3e7f19d7c97983"},
]
requests-oauthlib = [
{file = "requests-oauthlib-1.3.1.tar.gz", hash = "sha256:75beac4a47881eeb94d5ea5d6ad31ef88856affe2332b9aafb52c6452ccf0d7a"},
{file = "requests_oauthlib-1.3.1-py2.py3-none-any.whl", hash = "sha256:2577c501a2fb8d05a304c09d090d6e47c306fef15809d102b327cf8364bddab5"},
]
requests-toolbelt = [
{file = "requests-toolbelt-0.9.1.tar.gz", hash = "sha256:968089d4584ad4ad7c171454f0a5c6dac23971e9472521ea3b6d49d610aa6fc0"},
{file = "requests_toolbelt-0.9.1-py2.py3-none-any.whl", hash = "sha256:380606e1d10dc85c3bd47bf5a6095f815ec007be7a8b69c878507068df059e6f"},
]
rfc3986 = [
{file = "rfc3986-2.0.0-py2.py3-none-any.whl", hash = "sha256:50b1502b60e289cb37883f3dfd34532b8873c7de9f49bb546641ce9cbd256ebd"},
{file = "rfc3986-2.0.0.tar.gz", hash = "sha256:97aacf9dbd4bfd829baad6e6309fa6573aaf1be3f6fa735c8ab05e46cecb261c"},
]
rich = [
{file = "rich-12.5.1-py3-none-any.whl", hash = "sha256:2eb4e6894cde1e017976d2975ac210ef515d7548bc595ba20e195fb9628acdeb"},
{file = "rich-12.5.1.tar.gz", hash = "sha256:63a5c5ce3673d3d5fbbf23cd87e11ab84b6b451436f1b7f19ec54b6bc36ed7ca"},
]
rpy2 = [
{file = "rpy2-3.5.3-cp310-cp310-macosx_10_15_x86_64.whl", hash = "sha256:9b553e92db1b6ffa51ea13f7b1b509816fdec2fc833ebe0b402ee81c8a56e491"},
{file = "rpy2-3.5.3-cp37-cp37m-macosx_10_15_x86_64.whl", hash = "sha256:9fb8f0914458c212dd2dd231ba0b7e849ca2b473f53da48553bc7e90976a9717"},
{file = "rpy2-3.5.3-cp38-cp38-macosx_10_15_x86_64.whl", hash = "sha256:b9c8275d7720c312c387d61a38afe22ef3165053ec707a5cb3a06b7ace5bb10c"},
{file = "rpy2-3.5.3-cp39-cp39-macosx_10_15_x86_64.whl", hash = "sha256:e1fda1489dd297fb8f44d33e6042e56a4517109b5606b2ac59bed9a2b7e29850"},
{file = "rpy2-3.5.3.tar.gz", hash = "sha256:53a092d48b44f46428fb30cb3155664d6d2f7af08ebc4c45df98df4c45a42ccb"},
]
rsa = [
{file = "rsa-4.9-py3-none-any.whl", hash = "sha256:90260d9058e514786967344d0ef75fa8727eed8a7d2e43ce9f4bcf1b536174f7"},
{file = "rsa-4.9.tar.gz", hash = "sha256:e38464a49c6c85d7f1351b0126661487a7e0a14a50f1675ec50eb34d4f20ef21"},
]
scikit-learn = [
{file = "scikit-learn-1.0.2.tar.gz", hash = "sha256:b5870959a5484b614f26d31ca4c17524b1b0317522199dc985c3b4256e030767"},
{file = "scikit_learn-1.0.2-cp310-cp310-macosx_10_13_x86_64.whl", hash = "sha256:da3c84694ff693b5b3194d8752ccf935a665b8b5edc33a283122f4273ca3e687"},
{file = "scikit_learn-1.0.2-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:75307d9ea39236cad7eea87143155eea24d48f93f3a2f9389c817f7019f00705"},
{file = "scikit_learn-1.0.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f14517e174bd7332f1cca2c959e704696a5e0ba246eb8763e6c24876d8710049"},
{file = "scikit_learn-1.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d9aac97e57c196206179f674f09bc6bffcd0284e2ba95b7fe0b402ac3f986023"},
{file = "scikit_learn-1.0.2-cp310-cp310-win_amd64.whl", hash = "sha256:d93d4c28370aea8a7cbf6015e8a669cd5d69f856cc2aa44e7a590fb805bb5583"},
{file = "scikit_learn-1.0.2-cp37-cp37m-macosx_10_13_x86_64.whl", hash = "sha256:85260fb430b795d806251dd3bb05e6f48cdc777ac31f2bcf2bc8bbed3270a8f5"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:a053a6a527c87c5c4fa7bf1ab2556fa16d8345cf99b6c5a19030a4a7cd8fd2c0"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:245c9b5a67445f6f044411e16a93a554edc1efdcce94d3fc0bc6a4b9ac30b752"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:158faf30684c92a78e12da19c73feff9641a928a8024b4fa5ec11d583f3d8a87"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:08ef968f6b72033c16c479c966bf37ccd49b06ea91b765e1cc27afefe723920b"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:16455ace947d8d9e5391435c2977178d0ff03a261571e67f627c8fee0f9d431a"},
{file = "scikit_learn-1.0.2-cp37-cp37m-win32.whl", hash = "sha256:2f3b453e0b149898577e301d27e098dfe1a36943f7bb0ad704d1e548efc3b448"},
{file = "scikit_learn-1.0.2-cp37-cp37m-win_amd64.whl", hash = "sha256:46f431ec59dead665e1370314dbebc99ead05e1c0a9df42f22d6a0e00044820f"},
{file = "scikit_learn-1.0.2-cp38-cp38-macosx_10_13_x86_64.whl", hash = "sha256:ff3fa8ea0e09e38677762afc6e14cad77b5e125b0ea70c9bba1992f02c93b028"},
{file = "scikit_learn-1.0.2-cp38-cp38-macosx_12_0_arm64.whl", hash = "sha256:9369b030e155f8188743eb4893ac17a27f81d28a884af460870c7c072f114243"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:7d6b2475f1c23a698b48515217eb26b45a6598c7b1840ba23b3c5acece658dbb"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:285db0352e635b9e3392b0b426bc48c3b485512d3b4ac3c7a44ec2a2ba061e66"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5cb33fe1dc6f73dc19e67b264dbb5dde2a0539b986435fdd78ed978c14654830"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b1391d1a6e2268485a63c3073111fe3ba6ec5145fc957481cfd0652be571226d"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bc3744dabc56b50bec73624aeca02e0def06b03cb287de26836e730659c5d29c"},
{file = "scikit_learn-1.0.2-cp38-cp38-win32.whl", hash = "sha256:a999c9f02ff9570c783069f1074f06fe7386ec65b84c983db5aeb8144356a355"},
{file = "scikit_learn-1.0.2-cp38-cp38-win_amd64.whl", hash = "sha256:7626a34eabbf370a638f32d1a3ad50526844ba58d63e3ab81ba91e2a7c6d037e"},
{file = "scikit_learn-1.0.2-cp39-cp39-macosx_10_13_x86_64.whl", hash = "sha256:a90b60048f9ffdd962d2ad2fb16367a87ac34d76e02550968719eb7b5716fd10"},
{file = "scikit_learn-1.0.2-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:7a93c1292799620df90348800d5ac06f3794c1316ca247525fa31169f6d25855"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:eabceab574f471de0b0eb3f2ecf2eee9f10b3106570481d007ed1c84ebf6d6a1"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:55f2f3a8414e14fbee03782f9fe16cca0f141d639d2b1c1a36779fa069e1db57"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:80095a1e4b93bd33261ef03b9bc86d6db649f988ea4dbcf7110d0cded8d7213d"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fa38a1b9b38ae1fad2863eff5e0d69608567453fdfc850c992e6e47eb764e846"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ff746a69ff2ef25f62b36338c615dd15954ddc3ab8e73530237dd73235e76d62"},
{file = "scikit_learn-1.0.2-cp39-cp39-win32.whl", hash = "sha256:e174242caecb11e4abf169342641778f68e1bfaba80cd18acd6bc84286b9a534"},
{file = "scikit_learn-1.0.2-cp39-cp39-win_amd64.whl", hash = "sha256:b54a62c6e318ddbfa7d22c383466d38d2ee770ebdb5ddb668d56a099f6eaf75f"},
]
scipy = [
{file = "scipy-1.9.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:0424d1bbbfa51d5ddaa16d067fd593863c9f2fb7c6840c32f8a08a8832f8e7a4"},
{file = "scipy-1.9.0-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:8f2232c9d9119ec356240255a715a289b3a33be828c3e4abac11fd052ce15b1e"},
{file = "scipy-1.9.0-cp310-cp310-macosx_12_0_universal2.macosx_10_9_x86_64.whl", hash = "sha256:e2004d2a3c397b26ca78e67c9d320153a1a9b71ae713ad33f4a3a3ab3d79cc65"},
{file = "scipy-1.9.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:45f0d6c0d6e55582d3b8f5c58ad4ca4259a02affb190f89f06c8cc02e21bba81"},
{file = "scipy-1.9.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:79dd7876614fc2869bf5d311ef33962d2066ea888bc66c80fd4fa80f8772e5a9"},
{file = "scipy-1.9.0-cp310-cp310-win_amd64.whl", hash = "sha256:10417935486b320d98536d732a58362e3d37e84add98c251e070c59a6bfe0863"},
{file = "scipy-1.9.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:adb6c438c6ef550e2bb83968e772b9690cb421f2c6073f9c2cb6af15ee538bc9"},
{file = "scipy-1.9.0-cp38-cp38-macosx_12_0_arm64.whl", hash = "sha256:8d541db2d441ef87afb60c4a2addb00c3af281633602a4967e733ef4b7050504"},
{file = "scipy-1.9.0-cp38-cp38-macosx_12_0_universal2.macosx_10_9_x86_64.whl", hash = "sha256:97a1f1e51ea30782d7baa8d0c52f72c3f9f05cb609cf1b990664231c5102bccd"},
{file = "scipy-1.9.0-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:16207622570af10f9e6a2cdc7da7a9660678852477adbcd056b6d1057a036fef"},
{file = "scipy-1.9.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bb687d245b6963673c639f318eea7e875d1ba147a67925586abed3d6f39bb7d8"},
{file = "scipy-1.9.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:73b704c5eea9be811919cae4caacf3180dd9212d9aed08477c1d2ba14900a9de"},
{file = "scipy-1.9.0-cp38-cp38-win32.whl", hash = "sha256:12005d30894e4fe7b247f7233ba0801a341f887b62e2eb99034dd6f2a8a33ad6"},
{file = "scipy-1.9.0-cp38-cp38-win_amd64.whl", hash = "sha256:fc58c3fcb8a724b703ffbc126afdca5a8353d4d5945d5c92db85617e165299e7"},
{file = "scipy-1.9.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:01c2015e132774feefe059d5354055fec6b751d7a7d70ad2cf5ce314e7426e2a"},
{file = "scipy-1.9.0-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:f7c3c578ff556333f3890c2df6c056955d53537bb176698359088108af73a58f"},
{file = "scipy-1.9.0-cp39-cp39-macosx_12_0_universal2.macosx_10_9_x86_64.whl", hash = "sha256:e2ac088ea4aa61115b96b47f5f3d94b3fa29554340b6629cd2bfe6b0521ee33b"},
{file = "scipy-1.9.0-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:5d1b9cf3771fd921f7213b4b886ab2606010343bb36259b544a816044576d69e"},
{file = "scipy-1.9.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d3a326673ac5afa9ef5613a61626b9ec15c8f7222b4ecd1ce0fd8fcba7b83c59"},
{file = "scipy-1.9.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:693b3fe2e7736ce0dbc72b4d933798eb6ca8ce51b8b934e3f547cc06f48b2afb"},
{file = "scipy-1.9.0-cp39-cp39-win32.whl", hash = "sha256:7bad16b91918bf3288089a78a4157e04892ea6475fb7a1d9bcdf32c30c8a3dba"},
{file = "scipy-1.9.0-cp39-cp39-win_amd64.whl", hash = "sha256:bd490f77f35800d5620f4d9af669e372d9a88db1f76ef219e1609cc4ecdd1a24"},
{file = "scipy-1.9.0.tar.gz", hash = "sha256:c0dfd7d2429452e7e94904c6a3af63cbaa3cf51b348bd9d35b42db7e9ad42791"},
]
seaborn = [
{file = "seaborn-0.11.2-py3-none-any.whl", hash = "sha256:85a6baa9b55f81a0623abddc4a26b334653ff4c6b18c418361de19dbba0ef283"},
{file = "seaborn-0.11.2.tar.gz", hash = "sha256:cf45e9286d40826864be0e3c066f98536982baf701a7caa386511792d61ff4f6"},
]
secretstorage = [
{file = "SecretStorage-3.3.3-py3-none-any.whl", hash = "sha256:f356e6628222568e3af06f2eba8df495efa13b3b63081dafd4f7d9a7b7bc9f99"},
{file = "SecretStorage-3.3.3.tar.gz", hash = "sha256:2403533ef369eca6d2ba81718576c5e0f564d5cca1b58f73a8b23e7d4eeebd77"},
]
send2trash = [
{file = "Send2Trash-1.8.0-py3-none-any.whl", hash = "sha256:f20eaadfdb517eaca5ce077640cb261c7d2698385a6a0f072a4a5447fd49fa08"},
{file = "Send2Trash-1.8.0.tar.gz", hash = "sha256:d2c24762fd3759860a0aff155e45871447ea58d2be6bdd39b5c8f966a0c99c2d"},
]
setuptools-scm = [
{file = "setuptools_scm-6.4.2-py3-none-any.whl", hash = "sha256:acea13255093849de7ccb11af9e1fb8bde7067783450cee9ef7a93139bddf6d4"},
{file = "setuptools_scm-6.4.2.tar.gz", hash = "sha256:6833ac65c6ed9711a4d5d2266f8024cfa07c533a0e55f4c12f6eff280a5a9e30"},
]
shap = [
{file = "shap-0.39.0-cp36-cp36m-win_amd64.whl", hash = "sha256:bf9af9b089ef95cb1ac0df80a43f8144aa9095d10f282cb5c19643ff88a6a79d"},
{file = "shap-0.39.0-cp37-cp37m-win_amd64.whl", hash = "sha256:b44f9fbb7349f5406b98b4ec24c672f8fe932606bb7574a8aae2238410c55289"},
{file = "shap-0.39.0-cp38-cp38-win_amd64.whl", hash = "sha256:c0d51b44c15eae1c12e51ed498f898cfc5e12d6be7e0d4f733ce6453f6ec85a4"},
{file = "shap-0.39.0.tar.gz", hash = "sha256:0196a6c12cc98f8b48ce9c5968550902432b80290da6fa7be8655441a1c6251a"},
]
six = [
{file = "six-1.16.0-py2.py3-none-any.whl", hash = "sha256:8abb2f1d86890a2dfb989f9a77cfcfd3e47c2a354b01111771326f8aa26e0254"},
{file = "six-1.16.0.tar.gz", hash = "sha256:1e61c37477a1626458e36f7b1d82aa5c9b094fa4802892072e49de9c60c4c926"},
]
slicer = [
{file = "slicer-0.0.7-py3-none-any.whl", hash = "sha256:0b94faa5251c0f23782c03f7b7eedda91d80144059645f452c4bc80fab875976"},
{file = "slicer-0.0.7.tar.gz", hash = "sha256:f5d5f7b45f98d155b9c0ba6554fa9770c6b26d5793a3e77a1030fb56910ebeec"},
]
snowballstemmer = [
{file = "snowballstemmer-2.2.0-py2.py3-none-any.whl", hash = "sha256:c8e1716e83cc398ae16824e5572ae04e0d9fc2c6b985fb0f900f5f0c96ecba1a"},
{file = "snowballstemmer-2.2.0.tar.gz", hash = "sha256:09b16deb8547d3412ad7b590689584cd0fe25ec8db3be37788be3810cbf19cb1"},
]
soupsieve = [
{file = "soupsieve-2.3.2.post1-py3-none-any.whl", hash = "sha256:3b2503d3c7084a42b1ebd08116e5f81aadfaea95863628c80a3b774a11b7c759"},
{file = "soupsieve-2.3.2.post1.tar.gz", hash = "sha256:fc53893b3da2c33de295667a0e19f078c14bf86544af307354de5fcf12a3f30d"},
]
sparse = [
{file = "sparse-0.13.0-py2.py3-none-any.whl", hash = "sha256:95ed0b649a0663b1488756ad4cf242b0a9bb2c9a25bc752a7c6ca9fbe8258966"},
{file = "sparse-0.13.0.tar.gz", hash = "sha256:685dc994aa770ee1b23f2d5392819c8429f27958771f8dceb2c4fb80210d5915"},
]
sphinx = [
{file = "Sphinx-5.1.1-py3-none-any.whl", hash = "sha256:309a8da80cb6da9f4713438e5b55861877d5d7976b69d87e336733637ea12693"},
{file = "Sphinx-5.1.1.tar.gz", hash = "sha256:ba3224a4e206e1fbdecf98a4fae4992ef9b24b85ebf7b584bb340156eaf08d89"},
]
sphinx-multiversion = []
sphinx-rtd-theme = [
{file = "sphinx_rtd_theme-1.0.0-py2.py3-none-any.whl", hash = "sha256:4d35a56f4508cfee4c4fb604373ede6feae2a306731d533f409ef5c3496fdbd8"},
{file = "sphinx_rtd_theme-1.0.0.tar.gz", hash = "sha256:eec6d497e4c2195fa0e8b2016b337532b8a699a68bcb22a512870e16925c6a5c"},
]
sphinxcontrib-applehelp = [
{file = "sphinxcontrib-applehelp-1.0.2.tar.gz", hash = "sha256:a072735ec80e7675e3f432fcae8610ecf509c5f1869d17e2eecff44389cdbc58"},
{file = "sphinxcontrib_applehelp-1.0.2-py2.py3-none-any.whl", hash = "sha256:806111e5e962be97c29ec4c1e7fe277bfd19e9652fb1a4392105b43e01af885a"},
]
sphinxcontrib-devhelp = [
{file = "sphinxcontrib-devhelp-1.0.2.tar.gz", hash = "sha256:ff7f1afa7b9642e7060379360a67e9c41e8f3121f2ce9164266f61b9f4b338e4"},
{file = "sphinxcontrib_devhelp-1.0.2-py2.py3-none-any.whl", hash = "sha256:8165223f9a335cc1af7ffe1ed31d2871f325254c0423bc0c4c7cd1c1e4734a2e"},
]
sphinxcontrib-googleanalytics = []
sphinxcontrib-htmlhelp = [
{file = "sphinxcontrib-htmlhelp-2.0.0.tar.gz", hash = "sha256:f5f8bb2d0d629f398bf47d0d69c07bc13b65f75a81ad9e2f71a63d4b7a2f6db2"},
{file = "sphinxcontrib_htmlhelp-2.0.0-py2.py3-none-any.whl", hash = "sha256:d412243dfb797ae3ec2b59eca0e52dac12e75a241bf0e4eb861e450d06c6ed07"},
]
sphinxcontrib-jsmath = [
{file = "sphinxcontrib-jsmath-1.0.1.tar.gz", hash = "sha256:a9925e4a4587247ed2191a22df5f6970656cb8ca2bd6284309578f2153e0c4b8"},
{file = "sphinxcontrib_jsmath-1.0.1-py2.py3-none-any.whl", hash = "sha256:2ec2eaebfb78f3f2078e73666b1415417a116cc848b72e5172e596c871103178"},
]
sphinxcontrib-qthelp = [
{file = "sphinxcontrib-qthelp-1.0.3.tar.gz", hash = "sha256:4c33767ee058b70dba89a6fc5c1892c0d57a54be67ddd3e7875a18d14cba5a72"},
{file = "sphinxcontrib_qthelp-1.0.3-py2.py3-none-any.whl", hash = "sha256:bd9fc24bcb748a8d51fd4ecaade681350aa63009a347a8c14e637895444dfab6"},
]
sphinxcontrib-serializinghtml = [
{file = "sphinxcontrib-serializinghtml-1.1.5.tar.gz", hash = "sha256:aa5f6de5dfdf809ef505c4895e51ef5c9eac17d0f287933eb49ec495280b6952"},
{file = "sphinxcontrib_serializinghtml-1.1.5-py2.py3-none-any.whl", hash = "sha256:352a9a00ae864471d3a7ead8d7d79f5fc0b57e8b3f95e9867eb9eb28999b92fd"},
]
stack-data = [
{file = "stack_data-0.4.0-py3-none-any.whl", hash = "sha256:b94fed36d725cfabc6d09ed5886913e35eed9009766a1af1d5941b9da3a94aaa"},
{file = "stack_data-0.4.0.tar.gz", hash = "sha256:a90ae7e260f7d15aefeceb46f0a028d4ccb9eb8856475c53e341945342d41ea7"},
]
statsmodels = [
{file = "statsmodels-0.13.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:3e7ca5b7e678c0bb7a24f5c735d58ac104a50eb61b17c484cce0e221a095560f"},
{file = "statsmodels-0.13.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:066a75d5585378b2df972f81a90b9a3da5e567b7d4833300c1597438c1a35e29"},
{file = "statsmodels-0.13.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f15f38dfc9c5c091662cb619e12322047368c67aef449c7554d9b324a15f7a94"},
{file = "statsmodels-0.13.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5c4ccc6b4744613367e8a233bd952c8a838db8f528f9fe033bda25aa13fc7d08"},
{file = "statsmodels-0.13.2-cp310-cp310-win_amd64.whl", hash = "sha256:855b1cc2a91ab140b9bcf304b1731705805ce73223bf500b988804968554c0ed"},
{file = "statsmodels-0.13.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:b69c9af7606325095f7c40c581957bad9f28775653d41537c1ec4cd1b185ff5b"},
{file = "statsmodels-0.13.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ab31bac0f72b83bca1f217a12ec6f309a56485a50c4a705fbdd63112213d4da4"},
{file = "statsmodels-0.13.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d680b910b57fc0aa87472662cdfe09aae0e21db4bdf19ccd6420fd4dffda892"},
{file = "statsmodels-0.13.2-cp37-cp37m-win32.whl", hash = "sha256:9e9a3f661d372431850d55157d049e079493c97fc06f550d23d8c8c70805cc48"},
{file = "statsmodels-0.13.2-cp37-cp37m-win_amd64.whl", hash = "sha256:c9f6326870c095ef688f072cd476b932aff0906d60193eaa08e93ec23b29ca83"},
{file = "statsmodels-0.13.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:5bc050f25f1ba1221efef9ea01b751c60935ad787fcd4259f4ece986f2da9141"},
{file = "statsmodels-0.13.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:426b1c8ea3918d3d27dbfa38f2bee36cabf41d32163e2cbb3adfb0178b24626a"},
{file = "statsmodels-0.13.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:45b80fac4a63308b1e93fa9dc27a8598930fd5dfd77c850ca077bb850254c6d7"},
{file = "statsmodels-0.13.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:78ee69ec0e0f79f627245c65f8a495b8581c2ea19084aac63941815feb15dcf3"},
{file = "statsmodels-0.13.2-cp38-cp38-win32.whl", hash = "sha256:20483cc30e11aa072b30d307bb80470f86a23ae8fffa51439ca54509d7aa9b05"},
{file = "statsmodels-0.13.2-cp38-cp38-win_amd64.whl", hash = "sha256:bf43051a92231ccb9de95e4b6d22d3b15e499ee5ee9bff0a20e6b6ad293e34cb"},
{file = "statsmodels-0.13.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6bf0dfed5f5edb59b5922b295392cd276463b10a5e730f7e57ee4ff2d8e9a87e"},
{file = "statsmodels-0.13.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:a403b559c5586dab7ac0fc9e754c737b017c96cce0ddd66ff9094764cdaf293d"},
{file = "statsmodels-0.13.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9f23554dd025ea354ce072ba32bfaa840d2b856372e5734290e181d27a1f9e0c"},
{file = "statsmodels-0.13.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:815f4df713e3eb6f40ae175c71f2a70d32f9219b5b4d23d4e0faab1171ba93ba"},
{file = "statsmodels-0.13.2-cp39-cp39-win32.whl", hash = "sha256:461c82ab2265fa8457b96afc23ef3ca19f42eb070436e0241b57e58a38863901"},
{file = "statsmodels-0.13.2-cp39-cp39-win_amd64.whl", hash = "sha256:39daab5a8a9332c8ea83d6464d065080c9ba65f236daf6a64aa18f64ef776fad"},
{file = "statsmodels-0.13.2.tar.gz", hash = "sha256:77dc292c9939c036a476f1770f9d08976b05437daa229928da73231147cde7d4"},
]
sympy = [
{file = "sympy-1.10.1-py3-none-any.whl", hash = "sha256:df75d738930f6fe9ebe7034e59d56698f29e85f443f743e51e47df0caccc2130"},
{file = "sympy-1.10.1.tar.gz", hash = "sha256:5939eeffdf9e152172601463626c022a2c27e75cf6278de8d401d50c9d58787b"},
]
tensorboard = [
{file = "tensorboard-2.9.1-py3-none-any.whl", hash = "sha256:baa727f791776f9e5841d347127720ceed4bbd59c36b40604b95fb2ae6029276"},
]
tensorboard-data-server = [
{file = "tensorboard_data_server-0.6.1-py3-none-any.whl", hash = "sha256:809fe9887682d35c1f7d1f54f0f40f98bb1f771b14265b453ca051e2ce58fca7"},
{file = "tensorboard_data_server-0.6.1-py3-none-macosx_10_9_x86_64.whl", hash = "sha256:fa8cef9be4fcae2f2363c88176638baf2da19c5ec90addb49b1cde05c95c88ee"},
{file = "tensorboard_data_server-0.6.1-py3-none-manylinux2010_x86_64.whl", hash = "sha256:d8237580755e58eff68d1f3abefb5b1e39ae5c8b127cc40920f9c4fb33f4b98a"},
]
tensorboard-plugin-wit = [
{file = "tensorboard_plugin_wit-1.8.1-py3-none-any.whl", hash = "sha256:ff26bdd583d155aa951ee3b152b3d0cffae8005dc697f72b44a8e8c2a77a8cbe"},
]
tensorflow = [
{file = "tensorflow-2.9.1-cp310-cp310-macosx_10_14_x86_64.whl", hash = "sha256:2c77edeb4a9d542032ddac93f732e3dd3d1c4d15c8795e4e43166ed46503582d"},
{file = "tensorflow-2.9.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:add2f4e55fd7209a5cb01d815364adc9827184d74a83c2f1616c72be2e85c8b7"},
{file = "tensorflow-2.9.1-cp310-cp310-win_amd64.whl", hash = "sha256:8be7d7f6091ac5af09c19eb178bbbd51346794684851a7bd57826d2d1cfa220e"},
{file = "tensorflow-2.9.1-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:b1f7674220b19ab5d39d9313f9c60278cef56052614600cba192700c658e502c"},
{file = "tensorflow-2.9.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:488a7a964298b92278cca593b7f687abd1c8f51982a08dc6ded5efd9457bf477"},
{file = "tensorflow-2.9.1-cp37-cp37m-win_amd64.whl", hash = "sha256:abbdcfa7c14974f9bafa8f51214c037639a8991eef3dbcafc506e608a673c54c"},
{file = "tensorflow-2.9.1-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:df048d216f90c9fcbda22bdd03420b0e0e6e7ee98b8b91f221afef46b7664635"},
{file = "tensorflow-2.9.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:adc1b80ec32337ba1720fadbf64242aae06a7ee1c7aea6a1a0cffe1968c89b25"},
{file = "tensorflow-2.9.1-cp38-cp38-win_amd64.whl", hash = "sha256:a27bffb91f0aef6ba2a30f6507932d9c6801af56cfee8442767f9f3f69bb37a6"},
{file = "tensorflow-2.9.1-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:81ad045d39cf4360d5e6183cdda0990661302cca502917a12e54177989b340b9"},
{file = "tensorflow-2.9.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:212a4d97fb6dedff257703d7a2ce50b0b8ec23d955938083363aa7bc526918cc"},
{file = "tensorflow-2.9.1-cp39-cp39-win_amd64.whl", hash = "sha256:1c2c6557f6bd7e0d3f1063e27346e4f1c4353db2b7751ac7451033b335fa4b7c"},
]
tensorflow-estimator = [
{file = "tensorflow_estimator-2.9.0-py2.py3-none-any.whl", hash = "sha256:e9762bb302f51bc1eb2f35d19f0190a6a2d809d754d5def788c4328fe3746744"},
]
tensorflow-io-gcs-filesystem = [
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp310-cp310-macosx_10_14_x86_64.whl", hash = "sha256:4222a9d0c0ddeca2fd2bfd70f5ed149346f5ba12ffe65d817d8e18393341d8e2"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:5457eeef1f0f5f294225808b2290a251a2e4639ec66db9d32aa4ae62e807d7e8"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4c71cebb26ce10e6e48dc46e6fc0acef5329b01f75a5e76c7defb77175bf97f7"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp310-cp310-win_amd64.whl", hash = "sha256:1c165595c7a67668b44c7ffb9746ffb351c630940d9cca7f2b31f8adf7a36b94"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:c20e1f95b904f43ac86fdb251f222be2c3e7026e9ddbde2a3b6a456f26a83944"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:1cccdc12ec304a7ab3e6f85919ba5a77c2bf751b3d0f9e62196ee7df11a8136a"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:94645cac4449dd2ccc40327c23d0256cf4e96597e5a55116a91076e9dc96023e"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp37-cp37m-win_amd64.whl", hash = "sha256:ce0d7eaaebfcb5fdcff161af0e8a4b94d5dc346299111c08373d66058011a16d"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:e9569dadd79b2d4b28dbe5be47c378a884414a85c89eaeae6115bcba4f3cbb96"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:84a463e228cde296fc63672902a2eceac9fec5f8ae7605e9f18824db591e7f5c"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:531214e48ef64a96f565550b283e75cf0119abff14048a11a25453b47ec5b61c"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp38-cp38-win_amd64.whl", hash = "sha256:44b28c9c6a9e25774a53ec2e85ed4d0b5c4db3a7d3a4011ade94fa9ee636393c"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:09f9df13737e2b4d92b73653509281d77732ef9a90a1ebef824511ce5431eb0a"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:c71438e6459f52462b95f98ab17b20cd1a269a1efe837e4df426a0b79359f3b7"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bd285595afe03740553710ccdbd1397d69a8e48d758c731c0de1f1c5a71a9fe5"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp39-cp39-win_amd64.whl", hash = "sha256:2940b4ab6848ef5ec34dc3c140b5ae9eba0da13453da839c30ebe3461a6eb51d"},
]
termcolor = [
{file = "termcolor-1.1.0.tar.gz", hash = "sha256:1d6d69ce66211143803fbc56652b41d73b4a400a2891d7bf7a1cdf4c02de613b"},
]
terminado = [
{file = "terminado-0.15.0-py3-none-any.whl", hash = "sha256:0d5f126fbfdb5887b25ae7d9d07b0d716b1cc0ccaacc71c1f3c14d228e065197"},
{file = "terminado-0.15.0.tar.gz", hash = "sha256:ab4eeedccfcc1e6134bfee86106af90852c69d602884ea3a1e8ca6d4486e9bfe"},
]
threadpoolctl = [
{file = "threadpoolctl-3.1.0-py3-none-any.whl", hash = "sha256:8b99adda265feb6773280df41eece7b2e6561b772d21ffd52e372f999024907b"},
{file = "threadpoolctl-3.1.0.tar.gz", hash = "sha256:a335baacfaa4400ae1f0d8e3a58d6674d2f8828e3716bb2802c44955ad391380"},
]
tinycss2 = [
{file = "tinycss2-1.1.1-py3-none-any.whl", hash = "sha256:fe794ceaadfe3cf3e686b22155d0da5780dd0e273471a51846d0a02bc204fec8"},
{file = "tinycss2-1.1.1.tar.gz", hash = "sha256:b2e44dd8883c360c35dd0d1b5aad0b610e5156c2cb3b33434634e539ead9d8bf"},
]
tokenize-rt = [
{file = "tokenize_rt-4.2.1-py2.py3-none-any.whl", hash = "sha256:08a27fa032a81cf45e8858d0ac706004fcd523e8463415ddf1442be38e204ea8"},
{file = "tokenize_rt-4.2.1.tar.gz", hash = "sha256:0d4f69026fed520f8a1e0103aa36c406ef4661417f20ca643f913e33531b3b94"},
]
tomli = [
{file = "tomli-2.0.1-py3-none-any.whl", hash = "sha256:939de3e7a6161af0c887ef91b7d41a53e7c5a1ca976325f429cb46ea9bc30ecc"},
{file = "tomli-2.0.1.tar.gz", hash = "sha256:de526c12914f0c550d15924c62d72abc48d6fe7364aa87328337a31007fe8a4f"},
]
torch = [
{file = "torch-1.12.1-cp310-cp310-manylinux1_x86_64.whl", hash = "sha256:9c038662db894a23e49e385df13d47b2a777ffd56d9bcd5b832593fab0a7e286"},
{file = "torch-1.12.1-cp310-cp310-manylinux2014_aarch64.whl", hash = "sha256:4e1b9c14cf13fd2ab8d769529050629a0e68a6fc5cb8e84b4a3cc1dd8c4fe541"},
{file = "torch-1.12.1-cp310-cp310-win_amd64.whl", hash = "sha256:e9c8f4a311ac29fc7e8e955cfb7733deb5dbe1bdaabf5d4af2765695824b7e0d"},
{file = "torch-1.12.1-cp310-none-macosx_10_9_x86_64.whl", hash = "sha256:976c3f997cea38ee91a0dd3c3a42322785414748d1761ef926b789dfa97c6134"},
{file = "torch-1.12.1-cp310-none-macosx_11_0_arm64.whl", hash = "sha256:68104e4715a55c4bb29a85c6a8d57d820e0757da363be1ba680fa8cc5be17b52"},
{file = "torch-1.12.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:743784ccea0dc8f2a3fe6a536bec8c4763bd82c1352f314937cb4008d4805de1"},
{file = "torch-1.12.1-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:b5dbcca369800ce99ba7ae6dee3466607a66958afca3b740690d88168752abcf"},
{file = "torch-1.12.1-cp37-cp37m-win_amd64.whl", hash = "sha256:f3b52a634e62821e747e872084ab32fbcb01b7fa7dbb7471b6218279f02a178a"},
{file = "torch-1.12.1-cp37-none-macosx_10_9_x86_64.whl", hash = "sha256:8a34a2fbbaa07c921e1b203f59d3d6e00ed379f2b384445773bd14e328a5b6c8"},
{file = "torch-1.12.1-cp37-none-macosx_11_0_arm64.whl", hash = "sha256:42f639501928caabb9d1d55ddd17f07cd694de146686c24489ab8c615c2871f2"},
{file = "torch-1.12.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:0b44601ec56f7dd44ad8afc00846051162ef9c26a8579dda0a02194327f2d55e"},
{file = "torch-1.12.1-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:cd26d8c5640c3a28c526d41ccdca14cf1cbca0d0f2e14e8263a7ac17194ab1d2"},
{file = "torch-1.12.1-cp38-cp38-win_amd64.whl", hash = "sha256:42e115dab26f60c29e298559dbec88444175528b729ae994ec4c65d56fe267dd"},
{file = "torch-1.12.1-cp38-none-macosx_10_9_x86_64.whl", hash = "sha256:a8320ba9ad87e80ca5a6a016e46ada4d1ba0c54626e135d99b2129a4541c509d"},
{file = "torch-1.12.1-cp38-none-macosx_11_0_arm64.whl", hash = "sha256:03e31c37711db2cd201e02de5826de875529e45a55631d317aadce2f1ed45aa8"},
{file = "torch-1.12.1-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:9b356aea223772cd754edb4d9ecf2a025909b8615a7668ac7d5130f86e7ec421"},
{file = "torch-1.12.1-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:6cf6f54b43c0c30335428195589bd00e764a6d27f3b9ba637aaa8c11aaf93073"},
{file = "torch-1.12.1-cp39-cp39-win_amd64.whl", hash = "sha256:f00c721f489089dc6364a01fd84906348fe02243d0af737f944fddb36003400d"},
{file = "torch-1.12.1-cp39-none-macosx_10_9_x86_64.whl", hash = "sha256:bfec2843daa654f04fda23ba823af03e7b6f7650a873cdb726752d0e3718dada"},
{file = "torch-1.12.1-cp39-none-macosx_11_0_arm64.whl", hash = "sha256:69fe2cae7c39ccadd65a123793d30e0db881f1c1927945519c5c17323131437e"},
]
tornado = [
{file = "tornado-6.2-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:20f638fd8cc85f3cbae3c732326e96addff0a15e22d80f049e00121651e82e72"},
{file = "tornado-6.2-cp37-abi3-macosx_10_9_x86_64.whl", hash = "sha256:87dcafae3e884462f90c90ecc200defe5e580a7fbbb4365eda7c7c1eb809ebc9"},
{file = "tornado-6.2-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ba09ef14ca9893954244fd872798b4ccb2367c165946ce2dd7376aebdde8e3ac"},
{file = "tornado-6.2-cp37-abi3-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b8150f721c101abdef99073bf66d3903e292d851bee51910839831caba341a75"},
{file = "tornado-6.2-cp37-abi3-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d3a2f5999215a3a06a4fc218026cd84c61b8b2b40ac5296a6db1f1451ef04c1e"},
{file = "tornado-6.2-cp37-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:5f8c52d219d4995388119af7ccaa0bcec289535747620116a58d830e7c25d8a8"},
{file = "tornado-6.2-cp37-abi3-musllinux_1_1_i686.whl", hash = "sha256:6fdfabffd8dfcb6cf887428849d30cf19a3ea34c2c248461e1f7d718ad30b66b"},
{file = "tornado-6.2-cp37-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:1d54d13ab8414ed44de07efecb97d4ef7c39f7438cf5e976ccd356bebb1b5fca"},
{file = "tornado-6.2-cp37-abi3-win32.whl", hash = "sha256:5c87076709343557ef8032934ce5f637dbb552efa7b21d08e89ae7619ed0eb23"},
{file = "tornado-6.2-cp37-abi3-win_amd64.whl", hash = "sha256:e5f923aa6a47e133d1cf87d60700889d7eae68988704e20c75fb2d65677a8e4b"},
{file = "tornado-6.2.tar.gz", hash = "sha256:9b630419bde84ec666bfd7ea0a4cb2a8a651c2d5cccdbdd1972a0c859dfc3c13"},
]
tqdm = [
{file = "tqdm-4.64.0-py2.py3-none-any.whl", hash = "sha256:74a2cdefe14d11442cedf3ba4e21a3b84ff9a2dbdc6cfae2c34addb2a14a5ea6"},
{file = "tqdm-4.64.0.tar.gz", hash = "sha256:40be55d30e200777a307a7585aee69e4eabb46b4ec6a4b4a5f2d9f11e7d5408d"},
]
traitlets = [
{file = "traitlets-5.3.0-py3-none-any.whl", hash = "sha256:65fa18961659635933100db8ca120ef6220555286949774b9cfc106f941d1c7a"},
{file = "traitlets-5.3.0.tar.gz", hash = "sha256:0bb9f1f9f017aa8ec187d8b1b2a7a6626a2a1d877116baba52a129bfa124f8e2"},
]
twine = [
{file = "twine-4.0.1-py3-none-any.whl", hash = "sha256:42026c18e394eac3e06693ee52010baa5313e4811d5a11050e7d48436cf41b9e"},
{file = "twine-4.0.1.tar.gz", hash = "sha256:96b1cf12f7ae611a4a40b6ae8e9570215daff0611828f5fe1f37a16255ab24a0"},
]
typing-extensions = [
{file = "typing_extensions-4.3.0-py3-none-any.whl", hash = "sha256:25642c956049920a5aa49edcdd6ab1e06d7e5d467fc00e0506c44ac86fbfca02"},
{file = "typing_extensions-4.3.0.tar.gz", hash = "sha256:e6d2677a32f47fc7eb2795db1dd15c1f34eff616bcaf2cfb5e997f854fa1c4a6"},
]
tzdata = [
{file = "tzdata-2022.2-py2.py3-none-any.whl", hash = "sha256:c3119520447d68ef3eb8187a55a4f44fa455f30eb1b4238fa5691ba094f2b05b"},
{file = "tzdata-2022.2.tar.gz", hash = "sha256:21f4f0d7241572efa7f7a4fdabb052e61b55dc48274e6842697ccdf5253e5451"},
]
tzlocal = [
{file = "tzlocal-4.2-py3-none-any.whl", hash = "sha256:89885494684c929d9191c57aa27502afc87a579be5cdd3225c77c463ea043745"},
{file = "tzlocal-4.2.tar.gz", hash = "sha256:ee5842fa3a795f023514ac2d801c4a81d1743bbe642e3940143326b3a00addd7"},
]
urllib3 = [
{file = "urllib3-1.26.11-py2.py3-none-any.whl", hash = "sha256:c33ccba33c819596124764c23a97d25f32b28433ba0dedeb77d873a38722c9bc"},
{file = "urllib3-1.26.11.tar.gz", hash = "sha256:ea6e8fb210b19d950fab93b60c9009226c63a28808bc8386e05301e25883ac0a"},
]
wcwidth = [
{file = "wcwidth-0.2.5-py2.py3-none-any.whl", hash = "sha256:beb4802a9cebb9144e99086eff703a642a13d6a0052920003a230f3294bbe784"},
{file = "wcwidth-0.2.5.tar.gz", hash = "sha256:c4d647b99872929fdb7bdcaa4fbe7f01413ed3d98077df798530e5b04f116c83"},
]
webencodings = [
{file = "webencodings-0.5.1-py2.py3-none-any.whl", hash = "sha256:a0af1213f3c2226497a97e2b3aa01a7e4bee4f403f95be16fc9acd2947514a78"},
{file = "webencodings-0.5.1.tar.gz", hash = "sha256:b36a1c245f2d304965eb4e0a82848379241dc04b865afcc4aab16748587e1923"},
]
werkzeug = [
{file = "Werkzeug-2.2.2-py3-none-any.whl", hash = "sha256:f979ab81f58d7318e064e99c4506445d60135ac5cd2e177a2de0089bfd4c9bd5"},
{file = "Werkzeug-2.2.2.tar.gz", hash = "sha256:7ea2d48322cc7c0f8b3a215ed73eabd7b5d75d0b50e31ab006286ccff9e00b8f"},
]
widgetsnbextension = [
{file = "widgetsnbextension-4.0.2-py3-none-any.whl", hash = "sha256:966bd61443926b6adcc0abef9f499c48bdeda181c333b0f49842d7385d440579"},
{file = "widgetsnbextension-4.0.2.tar.gz", hash = "sha256:07f0e8582f920b24316cef16490f1aeb498f2c875d48980540e5c5dbf0ff5e2d"},
]
wrapt = [
{file = "wrapt-1.14.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:1b376b3f4896e7930f1f772ac4b064ac12598d1c38d04907e696cc4d794b43d3"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:903500616422a40a98a5a3c4ff4ed9d0066f3b4c951fa286018ecdf0750194ef"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:5a9a0d155deafd9448baff28c08e150d9b24ff010e899311ddd63c45c2445e28"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:ddaea91abf8b0d13443f6dac52e89051a5063c7d014710dcb4d4abb2ff811a59"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:36f582d0c6bc99d5f39cd3ac2a9062e57f3cf606ade29a0a0d6b323462f4dd87"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:7ef58fb89674095bfc57c4069e95d7a31cfdc0939e2a579882ac7d55aadfd2a1"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:e2f83e18fe2f4c9e7db597e988f72712c0c3676d337d8b101f6758107c42425b"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:ee2b1b1769f6707a8a445162ea16dddf74285c3964f605877a20e38545c3c462"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:833b58d5d0b7e5b9832869f039203389ac7cbf01765639c7309fd50ef619e0b1"},
{file = "wrapt-1.14.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:80bb5c256f1415f747011dc3604b59bc1f91c6e7150bd7db03b19170ee06b320"},
{file = "wrapt-1.14.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:07f7a7d0f388028b2df1d916e94bbb40624c59b48ecc6cbc232546706fac74c2"},
{file = "wrapt-1.14.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:02b41b633c6261feff8ddd8d11c711df6842aba629fdd3da10249a53211a72c4"},
{file = "wrapt-1.14.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2fe803deacd09a233e4762a1adcea5db5d31e6be577a43352936179d14d90069"},
{file = "wrapt-1.14.1-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:257fd78c513e0fb5cdbe058c27a0624c9884e735bbd131935fd49e9fe719d310"},
{file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:4fcc4649dc762cddacd193e6b55bc02edca674067f5f98166d7713b193932b7f"},
{file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:11871514607b15cfeb87c547a49bca19fde402f32e2b1c24a632506c0a756656"},
{file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:8ad85f7f4e20964db4daadcab70b47ab05c7c1cf2a7c1e51087bfaa83831854c"},
{file = "wrapt-1.14.1-cp310-cp310-win32.whl", hash = "sha256:a9a52172be0b5aae932bef82a79ec0a0ce87288c7d132946d645eba03f0ad8a8"},
{file = "wrapt-1.14.1-cp310-cp310-win_amd64.whl", hash = "sha256:6d323e1554b3d22cfc03cd3243b5bb815a51f5249fdcbb86fda4bf62bab9e164"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:43ca3bbbe97af00f49efb06e352eae40434ca9d915906f77def219b88e85d907"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:6b1a564e6cb69922c7fe3a678b9f9a3c54e72b469875aa8018f18b4d1dd1adf3"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux2010_i686.whl", hash = "sha256:00b6d4ea20a906c0ca56d84f93065b398ab74b927a7a3dbd470f6fc503f95dc3"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux2010_x86_64.whl", hash = "sha256:a85d2b46be66a71bedde836d9e41859879cc54a2a04fad1191eb50c2066f6e9d"},
{file = "wrapt-1.14.1-cp35-cp35m-win32.whl", hash = "sha256:dbcda74c67263139358f4d188ae5faae95c30929281bc6866d00573783c422b7"},
{file = "wrapt-1.14.1-cp35-cp35m-win_amd64.whl", hash = "sha256:b21bb4c09ffabfa0e85e3a6b623e19b80e7acd709b9f91452b8297ace2a8ab00"},
{file = "wrapt-1.14.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:9e0fd32e0148dd5dea6af5fee42beb949098564cc23211a88d799e434255a1f4"},
{file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9736af4641846491aedb3c3f56b9bc5568d92b0692303b5a305301a95dfd38b1"},
{file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5b02d65b9ccf0ef6c34cba6cf5bf2aab1bb2f49c6090bafeecc9cd81ad4ea1c1"},
{file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:21ac0156c4b089b330b7666db40feee30a5d52634cc4560e1905d6529a3897ff"},
{file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:9f3e6f9e05148ff90002b884fbc2a86bd303ae847e472f44ecc06c2cd2fcdb2d"},
{file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:6e743de5e9c3d1b7185870f480587b75b1cb604832e380d64f9504a0535912d1"},
{file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:d79d7d5dc8a32b7093e81e97dad755127ff77bcc899e845f41bf71747af0c569"},
{file = "wrapt-1.14.1-cp36-cp36m-win32.whl", hash = "sha256:81b19725065dcb43df02b37e03278c011a09e49757287dca60c5aecdd5a0b8ed"},
{file = "wrapt-1.14.1-cp36-cp36m-win_amd64.whl", hash = "sha256:b014c23646a467558be7da3d6b9fa409b2c567d2110599b7cf9a0c5992b3b471"},
{file = "wrapt-1.14.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:88bd7b6bd70a5b6803c1abf6bca012f7ed963e58c68d76ee20b9d751c74a3248"},
{file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b5901a312f4d14c59918c221323068fad0540e34324925c8475263841dbdfe68"},
{file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d77c85fedff92cf788face9bfa3ebaa364448ebb1d765302e9af11bf449ca36d"},
{file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8d649d616e5c6a678b26d15ece345354f7c2286acd6db868e65fcc5ff7c24a77"},
{file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:7d2872609603cb35ca513d7404a94d6d608fc13211563571117046c9d2bcc3d7"},
{file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:ee6acae74a2b91865910eef5e7de37dc6895ad96fa23603d1d27ea69df545015"},
{file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:2b39d38039a1fdad98c87279b48bc5dce2c0ca0d73483b12cb72aa9609278e8a"},
{file = "wrapt-1.14.1-cp37-cp37m-win32.whl", hash = "sha256:60db23fa423575eeb65ea430cee741acb7c26a1365d103f7b0f6ec412b893853"},
{file = "wrapt-1.14.1-cp37-cp37m-win_amd64.whl", hash = "sha256:709fe01086a55cf79d20f741f39325018f4df051ef39fe921b1ebe780a66184c"},
{file = "wrapt-1.14.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:8c0ce1e99116d5ab21355d8ebe53d9460366704ea38ae4d9f6933188f327b456"},
{file = "wrapt-1.14.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:e3fb1677c720409d5f671e39bac6c9e0e422584e5f518bfd50aa4cbbea02433f"},
{file = "wrapt-1.14.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:642c2e7a804fcf18c222e1060df25fc210b9c58db7c91416fb055897fc27e8cc"},
{file = "wrapt-1.14.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7b7c050ae976e286906dd3f26009e117eb000fb2cf3533398c5ad9ccc86867b1"},
{file = "wrapt-1.14.1-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ef3f72c9666bba2bab70d2a8b79f2c6d2c1a42a7f7e2b0ec83bb2f9e383950af"},
{file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:01c205616a89d09827986bc4e859bcabd64f5a0662a7fe95e0d359424e0e071b"},
{file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:5a0f54ce2c092aaf439813735584b9537cad479575a09892b8352fea5e988dc0"},
{file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:2cf71233a0ed05ccdabe209c606fe0bac7379fdcf687f39b944420d2a09fdb57"},
{file = "wrapt-1.14.1-cp38-cp38-win32.whl", hash = "sha256:aa31fdcc33fef9eb2552cbcbfee7773d5a6792c137b359e82879c101e98584c5"},
{file = "wrapt-1.14.1-cp38-cp38-win_amd64.whl", hash = "sha256:d1967f46ea8f2db647c786e78d8cc7e4313dbd1b0aca360592d8027b8508e24d"},
{file = "wrapt-1.14.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:3232822c7d98d23895ccc443bbdf57c7412c5a65996c30442ebe6ed3df335383"},
{file = "wrapt-1.14.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:988635d122aaf2bdcef9e795435662bcd65b02f4f4c1ae37fbee7401c440b3a7"},
{file = "wrapt-1.14.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9cca3c2cdadb362116235fdbd411735de4328c61425b0aa9f872fd76d02c4e86"},
{file = "wrapt-1.14.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d52a25136894c63de15a35bc0bdc5adb4b0e173b9c0d07a2be9d3ca64a332735"},
{file = "wrapt-1.14.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:40e7bc81c9e2b2734ea4bc1aceb8a8f0ceaac7c5299bc5d69e37c44d9081d43b"},
{file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:b9b7a708dd92306328117d8c4b62e2194d00c365f18eff11a9b53c6f923b01e3"},
{file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:6a9a25751acb379b466ff6be78a315e2b439d4c94c1e99cb7266d40a537995d3"},
{file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:34aa51c45f28ba7f12accd624225e2b1e5a3a45206aa191f6f9aac931d9d56fe"},
{file = "wrapt-1.14.1-cp39-cp39-win32.whl", hash = "sha256:dee0ce50c6a2dd9056c20db781e9c1cfd33e77d2d569f5d1d9321c641bb903d5"},
{file = "wrapt-1.14.1-cp39-cp39-win_amd64.whl", hash = "sha256:dee60e1de1898bde3b238f18340eec6148986da0455d8ba7848d50470a7a32fb"},
{file = "wrapt-1.14.1.tar.gz", hash = "sha256:380a85cf89e0e69b7cfbe2ea9f765f004ff419f34194018a6827ac0e3edfed4d"},
]
xgboost = [
{file = "xgboost-1.6.1-py3-none-macosx_10_15_x86_64.macosx_11_0_x86_64.macosx_12_0_x86_64.whl", hash = "sha256:2b3d4ee105f8434873b40edc511330b8276bf3a8d9d42fb0319973079df30b07"},
{file = "xgboost-1.6.1-py3-none-macosx_12_0_arm64.whl", hash = "sha256:bd3e59a5490e010004106d8ea1d07aa8e048be51a0974fca6b4f00988f087ab8"},
{file = "xgboost-1.6.1-py3-none-manylinux2014_aarch64.whl", hash = "sha256:bbf16af8bf72e8761fcf69fdb5798bd5add6ecb48049198551b13c1d7abeabb5"},
{file = "xgboost-1.6.1-py3-none-manylinux2014_x86_64.whl", hash = "sha256:6207c77f611b54d9f056edede819ead03f0235615675f88030ff9fe10d359551"},
{file = "xgboost-1.6.1-py3-none-win_amd64.whl", hash = "sha256:3adcb7e4ccf774d5e0128c01e5c381303c3799910ab0f2e996160fe3cd23b7fc"},
{file = "xgboost-1.6.1.tar.gz", hash = "sha256:24072028656f3428e7b8aabf77340ece057f273e41f7f85d67ccaefb7454bb18"},
]
zipp = [
{file = "zipp-3.8.1-py3-none-any.whl", hash = "sha256:47c40d7fe183a6f21403a199b3e4192cca5774656965b0a4988ad2f8feb5f009"},
{file = "zipp-3.8.1.tar.gz", hash = "sha256:05b45f1ee8f807d0cc928485ca40a07cb491cf092ff587c0df9cb1fd154848d2"},
]
| [[package]]
name = "absl-py"
version = "1.2.0"
description = "Abseil Python Common Libraries, see https://github.com/abseil/abseil-py."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "alabaster"
version = "0.7.12"
description = "A configurable sidebar-enabled Sphinx theme"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "appnope"
version = "0.1.3"
description = "Disable App Nap on macOS >= 10.9"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "argon2-cffi"
version = "21.3.0"
description = "The secure Argon2 password hashing algorithm."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
argon2-cffi-bindings = "*"
[package.extras]
dev = ["pre-commit", "cogapp", "tomli", "coverage[toml] (>=5.0.2)", "hypothesis", "pytest", "sphinx", "sphinx-notfound-page", "furo"]
docs = ["sphinx", "sphinx-notfound-page", "furo"]
tests = ["coverage[toml] (>=5.0.2)", "hypothesis", "pytest"]
[[package]]
name = "argon2-cffi-bindings"
version = "21.2.0"
description = "Low-level CFFI bindings for Argon2"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cffi = ">=1.0.1"
[package.extras]
dev = ["pytest", "cogapp", "pre-commit", "wheel"]
tests = ["pytest"]
[[package]]
name = "asttokens"
version = "2.0.8"
description = "Annotate AST trees with source code positions"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
six = "*"
[package.extras]
test = ["pytest", "astroid (<=2.5.3)"]
[[package]]
name = "astunparse"
version = "1.6.3"
description = "An AST unparser for Python"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
six = ">=1.6.1,<2.0"
[[package]]
name = "atomicwrites"
version = "1.4.1"
description = "Atomic file writes."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "attrs"
version = "22.1.0"
description = "Classes Without Boilerplate"
category = "dev"
optional = false
python-versions = ">=3.5"
[package.extras]
dev = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "mypy (>=0.900,!=0.940)", "pytest-mypy-plugins", "zope.interface", "furo", "sphinx", "sphinx-notfound-page", "pre-commit", "cloudpickle"]
docs = ["furo", "sphinx", "zope.interface", "sphinx-notfound-page"]
tests = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "mypy (>=0.900,!=0.940)", "pytest-mypy-plugins", "zope.interface", "cloudpickle"]
tests_no_zope = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "mypy (>=0.900,!=0.940)", "pytest-mypy-plugins", "cloudpickle"]
[[package]]
name = "babel"
version = "2.10.3"
description = "Internationalization utilities"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
pytz = ">=2015.7"
[[package]]
name = "backcall"
version = "0.2.0"
description = "Specifications for callback functions passed in to an API"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "backports.zoneinfo"
version = "0.2.1"
description = "Backport of the standard library zoneinfo module"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
tzdata = ["tzdata"]
[[package]]
name = "beautifulsoup4"
version = "4.11.1"
description = "Screen-scraping library"
category = "dev"
optional = false
python-versions = ">=3.6.0"
[package.dependencies]
soupsieve = ">1.2"
[package.extras]
html5lib = ["html5lib"]
lxml = ["lxml"]
[[package]]
name = "black"
version = "22.6.0"
description = "The uncompromising code formatter."
category = "dev"
optional = false
python-versions = ">=3.6.2"
[package.dependencies]
click = ">=8.0.0"
ipython = {version = ">=7.8.0", optional = true, markers = "extra == \"jupyter\""}
mypy-extensions = ">=0.4.3"
pathspec = ">=0.9.0"
platformdirs = ">=2"
tokenize-rt = {version = ">=3.2.0", optional = true, markers = "extra == \"jupyter\""}
tomli = {version = ">=1.1.0", markers = "python_full_version < \"3.11.0a7\""}
typing-extensions = {version = ">=3.10.0.0", markers = "python_version < \"3.10\""}
[package.extras]
colorama = ["colorama (>=0.4.3)"]
d = ["aiohttp (>=3.7.4)"]
jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"]
uvloop = ["uvloop (>=0.15.2)"]
[[package]]
name = "bleach"
version = "5.0.1"
description = "An easy safelist-based HTML-sanitizing tool."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
six = ">=1.9.0"
webencodings = "*"
[package.extras]
css = ["tinycss2 (>=1.1.0,<1.2)"]
dev = ["build (==0.8.0)", "flake8 (==4.0.1)", "hashin (==0.17.0)", "pip-tools (==6.6.2)", "pytest (==7.1.2)", "Sphinx (==4.3.2)", "tox (==3.25.0)", "twine (==4.0.1)", "wheel (==0.37.1)", "black (==22.3.0)", "mypy (==0.961)"]
[[package]]
name = "cachetools"
version = "5.2.0"
description = "Extensible memoizing collections and decorators"
category = "dev"
optional = false
python-versions = "~=3.7"
[[package]]
name = "causalml"
version = "0.12.3"
description = "Python Package for Uplift Modeling and Causal Inference with Machine Learning Algorithms"
category = "main"
optional = true
python-versions = ">=3.6"
[package.dependencies]
Cython = ">=0.28.0"
dill = "*"
graphviz = "*"
lightgbm = "*"
matplotlib = "*"
numpy = ">=1.18.5"
packaging = "*"
pandas = ">=0.24.1"
pydotplus = "*"
pygam = "*"
pyro-ppl = "*"
scikit-learn = ">=0.22.0"
scipy = ">=1.4.1"
seaborn = "*"
shap = "*"
statsmodels = ">=0.9.0"
torch = "*"
tqdm = "*"
xgboost = "*"
[package.extras]
tf = ["tensorflow (>=2.4.0)"]
[[package]]
name = "certifi"
version = "2022.6.15"
description = "Python package for providing Mozilla's CA Bundle."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "cffi"
version = "1.15.1"
description = "Foreign Function Interface for Python calling C code."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
pycparser = "*"
[[package]]
name = "charset-normalizer"
version = "2.1.0"
description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet."
category = "dev"
optional = false
python-versions = ">=3.6.0"
[package.extras]
unicode_backport = ["unicodedata2"]
[[package]]
name = "click"
version = "8.1.3"
description = "Composable command line interface toolkit"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
colorama = {version = "*", markers = "platform_system == \"Windows\""}
[[package]]
name = "cloudpickle"
version = "2.1.0"
description = "Extended pickling support for Python objects"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "colorama"
version = "0.4.5"
description = "Cross-platform colored terminal text."
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "commonmark"
version = "0.9.1"
description = "Python parser for the CommonMark Markdown spec"
category = "dev"
optional = false
python-versions = "*"
[package.extras]
test = ["hypothesis (==3.55.3)", "flake8 (==3.7.8)"]
[[package]]
name = "coverage"
version = "6.4.4"
description = "Code coverage measurement for Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
tomli = {version = "*", optional = true, markers = "python_full_version <= \"3.11.0a6\" and extra == \"toml\""}
[package.extras]
toml = ["tomli"]
[[package]]
name = "cryptography"
version = "37.0.4"
description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cffi = ">=1.12"
[package.extras]
docs = ["sphinx (>=1.6.5,!=1.8.0,!=3.1.0,!=3.1.1)", "sphinx-rtd-theme"]
docstest = ["pyenchant (>=1.6.11)", "twine (>=1.12.0)", "sphinxcontrib-spelling (>=4.0.1)"]
pep8test = ["black", "flake8", "flake8-import-order", "pep8-naming"]
sdist = ["setuptools_rust (>=0.11.4)"]
ssh = ["bcrypt (>=3.1.5)"]
test = ["pytest (>=6.2.0)", "pytest-benchmark", "pytest-cov", "pytest-subtests", "pytest-xdist", "pretend", "iso8601", "pytz", "hypothesis (>=1.11.4,!=3.79.2)"]
[[package]]
name = "cycler"
version = "0.11.0"
description = "Composable style cycles"
category = "main"
optional = true
python-versions = ">=3.6"
[[package]]
name = "cython"
version = "0.29.32"
description = "The Cython compiler for writing C extensions for the Python language."
category = "main"
optional = true
python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "debugpy"
version = "1.6.3"
description = "An implementation of the Debug Adapter Protocol for Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "decorator"
version = "5.1.1"
description = "Decorators for Humans"
category = "dev"
optional = false
python-versions = ">=3.5"
[[package]]
name = "defusedxml"
version = "0.7.1"
description = "XML bomb protection for Python stdlib modules"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "dill"
version = "0.3.5.1"
description = "serialize all of python"
category = "main"
optional = true
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*, !=3.6.*"
[package.extras]
graph = ["objgraph (>=1.7.2)"]
[[package]]
name = "docutils"
version = "0.17.1"
description = "Docutils -- Python Documentation Utilities"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "econml"
version = "0.12.0"
description = "This package contains several methods for calculating Conditional Average Treatment Effects"
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
dowhy = "*"
joblib = ">=0.13.0"
lightgbm = "*"
numba = "!=0.42.1"
numpy = "*"
pandas = "*"
scikit-learn = ">0.22.0"
scipy = ">1.4.0"
shap = ">=0.38.1,<0.40.0"
sparse = "*"
statsmodels = ">=0.10"
[package.extras]
all = ["azure-cli", "keras (<2.4)", "tensorflow (>1.10,<2.3)", "matplotlib"]
automl = ["azure-cli"]
plt = ["graphviz", "matplotlib"]
tf = ["keras (<2.4)", "tensorflow (>1.10,<2.3)"]
[[package]]
name = "entrypoints"
version = "0.4"
description = "Discover and load entry points from installed packages."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "executing"
version = "0.10.0"
description = "Get the currently executing AST node of a frame, and other information"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "fastjsonschema"
version = "2.16.1"
description = "Fastest Python implementation of JSON schema"
category = "dev"
optional = false
python-versions = "*"
[package.extras]
devel = ["colorama", "jsonschema", "json-spec", "pylint", "pytest", "pytest-benchmark", "pytest-cache", "validictory"]
[[package]]
name = "flake8"
version = "4.0.1"
description = "the modular source code checker: pep8 pyflakes and co"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
mccabe = ">=0.6.0,<0.7.0"
pycodestyle = ">=2.8.0,<2.9.0"
pyflakes = ">=2.4.0,<2.5.0"
[[package]]
name = "flaky"
version = "3.7.0"
description = "Plugin for nose or pytest that automatically reruns flaky tests."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "flatbuffers"
version = "1.12"
description = "The FlatBuffers serialization format for Python"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "fonttools"
version = "4.36.0"
description = "Tools to manipulate font files"
category = "main"
optional = true
python-versions = ">=3.7"
[package.extras]
all = ["fs (>=2.2.0,<3)", "lxml (>=4.0,<5)", "zopfli (>=0.1.4)", "lz4 (>=1.7.4.2)", "matplotlib", "sympy", "skia-pathops (>=0.5.0)", "uharfbuzz (>=0.23.0)", "brotlicffi (>=0.8.0)", "scipy", "brotli (>=1.0.1)", "munkres", "unicodedata2 (>=14.0.0)", "xattr"]
graphite = ["lz4 (>=1.7.4.2)"]
interpolatable = ["scipy", "munkres"]
lxml = ["lxml (>=4.0,<5)"]
pathops = ["skia-pathops (>=0.5.0)"]
plot = ["matplotlib"]
repacker = ["uharfbuzz (>=0.23.0)"]
symfont = ["sympy"]
type1 = ["xattr"]
ufo = ["fs (>=2.2.0,<3)"]
unicode = ["unicodedata2 (>=14.0.0)"]
woff = ["zopfli (>=0.1.4)", "brotlicffi (>=0.8.0)", "brotli (>=1.0.1)"]
[[package]]
name = "future"
version = "0.18.2"
description = "Clean single-source support for Python 3 and 2"
category = "main"
optional = true
python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "gast"
version = "0.4.0"
description = "Python AST that abstracts the underlying Python version"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "google-auth"
version = "2.10.0"
description = "Google Authentication Library"
category = "dev"
optional = false
python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*"
[package.dependencies]
cachetools = ">=2.0.0,<6.0"
pyasn1-modules = ">=0.2.1"
rsa = {version = ">=3.1.4,<5", markers = "python_version >= \"3.6\""}
six = ">=1.9.0"
[package.extras]
reauth = ["pyu2f (>=0.1.5)"]
pyopenssl = ["pyopenssl (>=20.0.0)"]
enterprise_cert = ["pyopenssl (==22.0.0)", "cryptography (==36.0.2)"]
aiohttp = ["aiohttp (>=3.6.2,<4.0.0dev)", "requests (>=2.20.0,<3.0.0dev)"]
[[package]]
name = "google-auth-oauthlib"
version = "0.4.6"
description = "Google Authentication Library"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
google-auth = ">=1.0.0"
requests-oauthlib = ">=0.7.0"
[package.extras]
tool = ["click (>=6.0.0)"]
[[package]]
name = "google-pasta"
version = "0.2.0"
description = "pasta is an AST-based Python refactoring library"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
six = "*"
[[package]]
name = "graphviz"
version = "0.20.1"
description = "Simple Python interface for Graphviz"
category = "main"
optional = true
python-versions = ">=3.7"
[package.extras]
dev = ["tox (>=3)", "flake8", "pep8-naming", "wheel", "twine"]
docs = ["sphinx (>=5)", "sphinx-autodoc-typehints", "sphinx-rtd-theme"]
test = ["pytest (>=7)", "pytest-mock (>=3)", "mock (>=4)", "pytest-cov", "coverage"]
[[package]]
name = "grpcio"
version = "1.48.0"
description = "HTTP/2-based RPC framework"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
six = ">=1.5.2"
[package.extras]
protobuf = ["grpcio-tools (>=1.48.0)"]
[[package]]
name = "h5py"
version = "3.7.0"
description = "Read and write HDF5 files from Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
numpy = ">=1.14.5"
[[package]]
name = "idna"
version = "3.3"
description = "Internationalized Domain Names in Applications (IDNA)"
category = "dev"
optional = false
python-versions = ">=3.5"
[[package]]
name = "imagesize"
version = "1.4.1"
description = "Getting image size from png/jpeg/jpeg2000/gif file"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "importlib-metadata"
version = "4.12.0"
description = "Read metadata from Python packages"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
zipp = ">=0.5"
[package.extras]
docs = ["sphinx", "jaraco.packaging (>=9)", "rst.linker (>=1.9)"]
perf = ["ipython"]
testing = ["pytest (>=6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest-cov", "pytest-enabler (>=1.3)", "packaging", "pyfakefs", "flufl.flake8", "pytest-perf (>=0.9.2)", "pytest-black (>=0.3.7)", "pytest-mypy (>=0.9.1)", "importlib-resources (>=1.3)"]
[[package]]
name = "importlib-resources"
version = "5.9.0"
description = "Read resources from Python packages"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
zipp = {version = ">=3.1.0", markers = "python_version < \"3.10\""}
[package.extras]
docs = ["sphinx", "jaraco.packaging (>=9)", "rst.linker (>=1.9)", "jaraco.tidelift (>=1.4)"]
testing = ["pytest (>=6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-black (>=0.3.7)", "pytest-mypy (>=0.9.1)"]
[[package]]
name = "iniconfig"
version = "1.1.1"
description = "iniconfig: brain-dead simple config-ini parsing"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "ipykernel"
version = "6.15.1"
description = "IPython Kernel for Jupyter"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
appnope = {version = "*", markers = "platform_system == \"Darwin\""}
debugpy = ">=1.0"
ipython = ">=7.23.1"
jupyter-client = ">=6.1.12"
matplotlib-inline = ">=0.1"
nest-asyncio = "*"
packaging = "*"
psutil = "*"
pyzmq = ">=17"
tornado = ">=6.1"
traitlets = ">=5.1.0"
[package.extras]
test = ["flaky", "ipyparallel", "pre-commit", "pytest-cov", "pytest-timeout", "pytest (>=6.0)"]
[[package]]
name = "ipython"
version = "8.4.0"
description = "IPython: Productive Interactive Computing"
category = "dev"
optional = false
python-versions = ">=3.8"
[package.dependencies]
appnope = {version = "*", markers = "sys_platform == \"darwin\""}
backcall = "*"
colorama = {version = "*", markers = "sys_platform == \"win32\""}
decorator = "*"
jedi = ">=0.16"
matplotlib-inline = "*"
pexpect = {version = ">4.3", markers = "sys_platform != \"win32\""}
pickleshare = "*"
prompt-toolkit = ">=2.0.0,<3.0.0 || >3.0.0,<3.0.1 || >3.0.1,<3.1.0"
pygments = ">=2.4.0"
stack-data = "*"
traitlets = ">=5"
[package.extras]
all = ["black", "Sphinx (>=1.3)", "ipykernel", "nbconvert", "nbformat", "ipywidgets", "notebook", "ipyparallel", "qtconsole", "pytest (<7.1)", "pytest-asyncio", "testpath", "curio", "matplotlib (!=3.2.0)", "numpy (>=1.19)", "pandas", "trio"]
black = ["black"]
doc = ["Sphinx (>=1.3)"]
kernel = ["ipykernel"]
nbconvert = ["nbconvert"]
nbformat = ["nbformat"]
notebook = ["ipywidgets", "notebook"]
parallel = ["ipyparallel"]
qtconsole = ["qtconsole"]
test = ["pytest (<7.1)", "pytest-asyncio", "testpath"]
test_extra = ["pytest (<7.1)", "pytest-asyncio", "testpath", "curio", "matplotlib (!=3.2.0)", "nbformat", "numpy (>=1.19)", "pandas", "trio"]
[[package]]
name = "ipython-genutils"
version = "0.2.0"
description = "Vestigial utilities from IPython"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "ipywidgets"
version = "8.0.1"
description = "Jupyter interactive widgets"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
ipykernel = ">=4.5.1"
ipython = ">=6.1.0"
jupyterlab-widgets = ">=3.0,<4.0"
traitlets = ">=4.3.1"
widgetsnbextension = ">=4.0,<5.0"
[package.extras]
test = ["jsonschema", "pytest (>=3.6.0)", "pytest-cov", "pytz"]
[[package]]
name = "isort"
version = "5.10.1"
description = "A Python utility / library to sort Python imports."
category = "dev"
optional = false
python-versions = ">=3.6.1,<4.0"
[package.extras]
pipfile_deprecated_finder = ["pipreqs", "requirementslib"]
requirements_deprecated_finder = ["pipreqs", "pip-api"]
colors = ["colorama (>=0.4.3,<0.5.0)"]
plugins = ["setuptools"]
[[package]]
name = "jedi"
version = "0.18.1"
description = "An autocompletion tool for Python that can be used for text editors."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
parso = ">=0.8.0,<0.9.0"
[package.extras]
qa = ["flake8 (==3.8.3)", "mypy (==0.782)"]
testing = ["Django (<3.1)", "colorama", "docopt", "pytest (<7.0.0)"]
[[package]]
name = "jeepney"
version = "0.8.0"
description = "Low-level, pure Python DBus protocol wrapper."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
trio = ["async-generator", "trio"]
test = ["async-timeout", "trio", "testpath", "pytest-asyncio (>=0.17)", "pytest-trio", "pytest"]
[[package]]
name = "jinja2"
version = "3.1.2"
description = "A very fast and expressive template engine."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
MarkupSafe = ">=2.0"
[package.extras]
i18n = ["Babel (>=2.7)"]
[[package]]
name = "joblib"
version = "1.1.0"
description = "Lightweight pipelining with Python functions"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "jsonschema"
version = "4.13.0"
description = "An implementation of JSON Schema validation for Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
attrs = ">=17.4.0"
importlib-resources = {version = ">=1.4.0", markers = "python_version < \"3.9\""}
pkgutil-resolve-name = {version = ">=1.3.10", markers = "python_version < \"3.9\""}
pyrsistent = ">=0.14.0,<0.17.0 || >0.17.0,<0.17.1 || >0.17.1,<0.17.2 || >0.17.2"
[package.extras]
format-nongpl = ["webcolors (>=1.11)", "uri-template", "rfc3986-validator (>0.1.0)", "rfc3339-validator", "jsonpointer (>1.13)", "isoduration", "idna", "fqdn"]
format = ["webcolors (>=1.11)", "uri-template", "rfc3987", "rfc3339-validator", "jsonpointer (>1.13)", "isoduration", "idna", "fqdn"]
[[package]]
name = "jupyter"
version = "1.0.0"
description = "Jupyter metapackage. Install all the Jupyter components in one go."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
ipykernel = "*"
ipywidgets = "*"
jupyter-console = "*"
nbconvert = "*"
notebook = "*"
qtconsole = "*"
[[package]]
name = "jupyter-client"
version = "7.3.4"
description = "Jupyter protocol implementation and client libraries"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
entrypoints = "*"
jupyter-core = ">=4.9.2"
nest-asyncio = ">=1.5.4"
python-dateutil = ">=2.8.2"
pyzmq = ">=23.0"
tornado = ">=6.0"
traitlets = "*"
[package.extras]
doc = ["ipykernel", "myst-parser", "sphinx-rtd-theme", "sphinx (>=1.3.6)", "sphinxcontrib-github-alt"]
test = ["codecov", "coverage", "ipykernel (>=6.5)", "ipython", "mypy", "pre-commit", "pytest", "pytest-asyncio (>=0.18)", "pytest-cov", "pytest-timeout"]
[[package]]
name = "jupyter-console"
version = "6.4.4"
description = "Jupyter terminal console"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
ipykernel = "*"
ipython = "*"
jupyter-client = ">=7.0.0"
prompt-toolkit = ">=2.0.0,<3.0.0 || >3.0.0,<3.0.1 || >3.0.1,<3.1.0"
pygments = "*"
[package.extras]
test = ["pexpect"]
[[package]]
name = "jupyter-core"
version = "4.11.1"
description = "Jupyter core package. A base package on which Jupyter projects rely."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
pywin32 = {version = ">=1.0", markers = "sys_platform == \"win32\" and platform_python_implementation != \"PyPy\""}
traitlets = "*"
[package.extras]
test = ["ipykernel", "pre-commit", "pytest", "pytest-cov", "pytest-timeout"]
[[package]]
name = "jupyterlab-pygments"
version = "0.2.2"
description = "Pygments theme using JupyterLab CSS variables"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "jupyterlab-widgets"
version = "3.0.2"
description = "Jupyter interactive widgets for JupyterLab"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "keras"
version = "2.9.0"
description = "Deep learning for humans."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "keras-preprocessing"
version = "1.1.2"
description = "Easy data preprocessing and data augmentation for deep learning models"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
numpy = ">=1.9.1"
six = ">=1.9.0"
[package.extras]
tests = ["pytest-cov", "pytest-xdist", "pytest", "keras", "tensorflow", "pillow", "pandas"]
pep8 = ["flake8"]
image = ["Pillow (>=5.2.0)", "scipy (>=0.14)"]
[[package]]
name = "keyring"
version = "23.8.2"
description = "Store and access your passwords safely."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
importlib-metadata = {version = ">=3.6", markers = "python_version < \"3.10\""}
jeepney = {version = ">=0.4.2", markers = "sys_platform == \"linux\""}
pywin32-ctypes = {version = "<0.1.0 || >0.1.0,<0.1.1 || >0.1.1", markers = "sys_platform == \"win32\""}
SecretStorage = {version = ">=3.2", markers = "sys_platform == \"linux\""}
[package.extras]
testing = ["pytest-mypy (>=0.9.1)", "pytest-black (>=0.3.7)", "pytest-enabler (>=1.3)", "pytest-cov", "flake8 (<5)", "pytest-flake8", "pytest-checkdocs (>=2.4)", "pytest (>=6)"]
docs = ["jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "jaraco.packaging (>=9)", "sphinx"]
[[package]]
name = "kiwisolver"
version = "1.4.4"
description = "A fast implementation of the Cassowary constraint solver"
category = "main"
optional = true
python-versions = ">=3.7"
[[package]]
name = "libclang"
version = "14.0.6"
description = "Clang Python Bindings, mirrored from the official LLVM repo: https://github.com/llvm/llvm-project/tree/main/clang/bindings/python, to make the installation process easier."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "lightgbm"
version = "3.3.2"
description = "LightGBM Python Package"
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
numpy = "*"
scikit-learn = "!=0.22.0"
scipy = "*"
[package.extras]
dask = ["dask[array] (>=2.0.0)", "dask[dataframe] (>=2.0.0)", "dask[distributed] (>=2.0.0)", "pandas"]
[[package]]
name = "llvmlite"
version = "0.36.0"
description = "lightweight wrapper around basic LLVM functionality"
category = "main"
optional = false
python-versions = ">=3.6,<3.10"
[[package]]
name = "markdown"
version = "3.4.1"
description = "Python implementation of Markdown."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
importlib-metadata = {version = ">=4.4", markers = "python_version < \"3.10\""}
[package.extras]
testing = ["coverage", "pyyaml"]
[[package]]
name = "markupsafe"
version = "2.1.1"
description = "Safely add untrusted strings to HTML/XML markup."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "matplotlib"
version = "3.5.3"
description = "Python plotting package"
category = "main"
optional = true
python-versions = ">=3.7"
[package.dependencies]
cycler = ">=0.10"
fonttools = ">=4.22.0"
kiwisolver = ">=1.0.1"
numpy = ">=1.17"
packaging = ">=20.0"
pillow = ">=6.2.0"
pyparsing = ">=2.2.1"
python-dateutil = ">=2.7"
setuptools_scm = ">=4,<7"
[[package]]
name = "matplotlib-inline"
version = "0.1.6"
description = "Inline Matplotlib backend for Jupyter"
category = "dev"
optional = false
python-versions = ">=3.5"
[package.dependencies]
traitlets = "*"
[[package]]
name = "mccabe"
version = "0.6.1"
description = "McCabe checker, plugin for flake8"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "mistune"
version = "2.0.4"
description = "A sane Markdown parser with useful plugins and renderers"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "mpmath"
version = "1.2.1"
description = "Python library for arbitrary-precision floating-point arithmetic"
category = "main"
optional = false
python-versions = "*"
[package.extras]
tests = ["pytest (>=4.6)"]
develop = ["wheel", "codecov", "pytest-cov", "pycodestyle", "pytest (>=4.6)"]
[[package]]
name = "mypy"
version = "0.971"
description = "Optional static typing for Python"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
mypy-extensions = ">=0.4.3"
tomli = {version = ">=1.1.0", markers = "python_version < \"3.11\""}
typing-extensions = ">=3.10"
[package.extras]
dmypy = ["psutil (>=4.0)"]
python2 = ["typed-ast (>=1.4.0,<2)"]
reports = ["lxml"]
[[package]]
name = "mypy-extensions"
version = "0.4.3"
description = "Experimental type system extensions for programs checked with the mypy typechecker."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "nbclient"
version = "0.6.6"
description = "A client library for executing notebooks. Formerly nbconvert's ExecutePreprocessor."
category = "dev"
optional = false
python-versions = ">=3.7.0"
[package.dependencies]
jupyter-client = ">=6.1.5"
nbformat = ">=5.0"
nest-asyncio = "*"
traitlets = ">=5.2.2"
[package.extras]
sphinx = ["autodoc-traits", "mock", "moto", "myst-parser", "Sphinx (>=1.7)", "sphinx-book-theme"]
test = ["black", "check-manifest", "flake8", "ipykernel", "ipython (<8.0.0)", "ipywidgets (<8.0.0)", "mypy", "pip (>=18.1)", "pre-commit", "pytest (>=4.1)", "pytest-asyncio", "pytest-cov (>=2.6.1)", "setuptools (>=60.0)", "testpath", "twine (>=1.11.0)", "xmltodict"]
[[package]]
name = "nbconvert"
version = "7.0.0rc3"
description = "Converting Jupyter Notebooks"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
beautifulsoup4 = "*"
bleach = "*"
defusedxml = "*"
importlib-metadata = {version = ">=3.6", markers = "python_version < \"3.10\""}
jinja2 = ">=3.0"
jupyter-core = ">=4.7"
jupyterlab-pygments = "*"
markupsafe = ">=2.0"
mistune = ">=2.0.2,<3"
nbclient = ">=0.5.0"
nbformat = ">=5.1"
packaging = "*"
pandocfilters = ">=1.4.1"
pygments = ">=2.4.1"
tinycss2 = "*"
traitlets = ">=5.0"
[package.extras]
all = ["ipykernel", "ipython", "ipywidgets (>=7)", "nbsphinx (>=0.2.12)", "pre-commit", "pyppeteer (>=1,<1.1)", "pytest", "pytest-cov", "pytest-dependency", "sphinx-rtd-theme", "sphinx (>=1.5.1)", "tornado (>=6.1)"]
docs = ["ipython", "nbsphinx (>=0.2.12)", "sphinx-rtd-theme", "sphinx (>=1.5.1)"]
serve = ["tornado (>=6.1)"]
test = ["ipykernel", "ipywidgets (>=7)", "pre-commit", "pyppeteer (>=1,<1.1)", "pytest", "pytest-cov", "pytest-dependency"]
webpdf = ["pyppeteer (>=1,<1.1)"]
[[package]]
name = "nbformat"
version = "5.4.0"
description = "The Jupyter Notebook format"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
fastjsonschema = "*"
jsonschema = ">=2.6"
jupyter-core = "*"
traitlets = ">=5.1"
[package.extras]
test = ["check-manifest", "testpath", "pytest", "pre-commit"]
[[package]]
name = "nbsphinx"
version = "0.8.9"
description = "Jupyter Notebook Tools for Sphinx"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
docutils = "*"
jinja2 = "*"
nbconvert = "!=5.4"
nbformat = "*"
sphinx = ">=1.8"
traitlets = ">=5"
[[package]]
name = "nest-asyncio"
version = "1.5.5"
description = "Patch asyncio to allow nested event loops"
category = "dev"
optional = false
python-versions = ">=3.5"
[[package]]
name = "networkx"
version = "2.8.5"
description = "Python package for creating and manipulating graphs and networks"
category = "main"
optional = false
python-versions = ">=3.8"
[package.extras]
default = ["numpy (>=1.19)", "scipy (>=1.8)", "matplotlib (>=3.4)", "pandas (>=1.3)"]
developer = ["pre-commit (>=2.19)", "mypy (>=0.960)"]
doc = ["sphinx (>=5)", "pydata-sphinx-theme (>=0.9)", "sphinx-gallery (>=0.10)", "numpydoc (>=1.4)", "pillow (>=9.1)", "nb2plots (>=0.6)", "texext (>=0.6.6)"]
extra = ["lxml (>=4.6)", "pygraphviz (>=1.9)", "pydot (>=1.4.2)", "sympy (>=1.10)"]
test = ["pytest (>=7.1)", "pytest-cov (>=3.0)", "codecov (>=2.1)"]
[[package]]
name = "notebook"
version = "6.4.12"
description = "A web-based notebook environment for interactive computing"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
argon2-cffi = "*"
ipykernel = "*"
ipython-genutils = "*"
jinja2 = "*"
jupyter-client = ">=5.3.4"
jupyter-core = ">=4.6.1"
nbconvert = ">=5"
nbformat = "*"
nest-asyncio = ">=1.5"
prometheus-client = "*"
pyzmq = ">=17"
Send2Trash = ">=1.8.0"
terminado = ">=0.8.3"
tornado = ">=6.1"
traitlets = ">=4.2.1"
[package.extras]
docs = ["sphinx", "nbsphinx", "sphinxcontrib-github-alt", "sphinx-rtd-theme", "myst-parser"]
json-logging = ["json-logging"]
test = ["pytest", "coverage", "requests", "testpath", "nbval", "selenium", "pytest-cov", "requests-unixsocket"]
[[package]]
name = "numba"
version = "0.53.1"
description = "compiling Python code using LLVM"
category = "main"
optional = false
python-versions = ">=3.6,<3.10"
[package.dependencies]
llvmlite = ">=0.36.0rc1,<0.37"
numpy = ">=1.15"
[[package]]
name = "numpy"
version = "1.23.2"
description = "NumPy is the fundamental package for array computing with Python."
category = "main"
optional = false
python-versions = ">=3.8"
[[package]]
name = "oauthlib"
version = "3.2.0"
description = "A generic, spec-compliant, thorough implementation of the OAuth request-signing logic"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
rsa = ["cryptography (>=3.0.0)"]
signals = ["blinker (>=1.4.0)"]
signedtoken = ["cryptography (>=3.0.0)", "pyjwt (>=2.0.0,<3)"]
[[package]]
name = "opt-einsum"
version = "3.3.0"
description = "Optimizing numpys einsum function"
category = "main"
optional = false
python-versions = ">=3.5"
[package.dependencies]
numpy = ">=1.7"
[package.extras]
docs = ["sphinx (==1.2.3)", "sphinxcontrib-napoleon", "sphinx-rtd-theme", "numpydoc"]
tests = ["pytest", "pytest-cov", "pytest-pep8"]
[[package]]
name = "packaging"
version = "21.3"
description = "Core utilities for Python packages"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
pyparsing = ">=2.0.2,<3.0.5 || >3.0.5"
[[package]]
name = "pandas"
version = "1.4.3"
description = "Powerful data structures for data analysis, time series, and statistics"
category = "main"
optional = false
python-versions = ">=3.8"
[package.dependencies]
numpy = [
{version = ">=1.18.5", markers = "platform_machine != \"aarch64\" and platform_machine != \"arm64\" and python_version < \"3.10\""},
{version = ">=1.19.2", markers = "platform_machine == \"aarch64\" and python_version < \"3.10\""},
{version = ">=1.20.0", markers = "platform_machine == \"arm64\" and python_version < \"3.10\""},
]
python-dateutil = ">=2.8.1"
pytz = ">=2020.1"
[package.extras]
test = ["hypothesis (>=5.5.3)", "pytest (>=6.0)", "pytest-xdist (>=1.31)"]
[[package]]
name = "pandoc"
version = "2.2"
description = "Pandoc Documents for Python"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
plumbum = "*"
ply = "*"
[[package]]
name = "pandocfilters"
version = "1.5.0"
description = "Utilities for writing pandoc filters in python"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "parso"
version = "0.8.3"
description = "A Python Parser"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
qa = ["flake8 (==3.8.3)", "mypy (==0.782)"]
testing = ["docopt", "pytest (<6.0.0)"]
[[package]]
name = "pastel"
version = "0.2.1"
description = "Bring colors to your terminal."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pathspec"
version = "0.9.0"
description = "Utility library for gitignore style pattern matching of file paths."
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7"
[[package]]
name = "patsy"
version = "0.5.2"
description = "A Python package for describing statistical models and for building design matrices."
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
numpy = ">=1.4"
six = "*"
[package.extras]
test = ["scipy", "pytest-cov", "pytest"]
[[package]]
name = "pexpect"
version = "4.8.0"
description = "Pexpect allows easy control of interactive console applications."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
ptyprocess = ">=0.5"
[[package]]
name = "pickleshare"
version = "0.7.5"
description = "Tiny 'shelve'-like database with concurrency support"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pillow"
version = "9.2.0"
description = "Python Imaging Library (Fork)"
category = "main"
optional = true
python-versions = ">=3.7"
[package.extras]
docs = ["furo", "olefile", "sphinx (>=2.4)", "sphinx-copybutton", "sphinx-issues (>=3.0.1)", "sphinx-removed-in", "sphinxext-opengraph"]
tests = ["check-manifest", "coverage", "defusedxml", "markdown2", "olefile", "packaging", "pyroma", "pytest", "pytest-cov", "pytest-timeout"]
[[package]]
name = "pkginfo"
version = "1.8.3"
description = "Query metadatdata from sdists / bdists / installed packages."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*"
[package.extras]
testing = ["nose", "coverage"]
[[package]]
name = "pkgutil-resolve-name"
version = "1.3.10"
description = "Resolve a name to an object."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "platformdirs"
version = "2.5.2"
description = "A small Python module for determining appropriate platform-specific dirs, e.g. a \"user data dir\"."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
docs = ["furo (>=2021.7.5b38)", "proselint (>=0.10.2)", "sphinx-autodoc-typehints (>=1.12)", "sphinx (>=4)"]
test = ["appdirs (==1.4.4)", "pytest-cov (>=2.7)", "pytest-mock (>=3.6)", "pytest (>=6)"]
[[package]]
name = "pluggy"
version = "1.0.0"
description = "plugin and hook calling mechanisms for python"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
testing = ["pytest-benchmark", "pytest"]
dev = ["tox", "pre-commit"]
[[package]]
name = "plumbum"
version = "1.7.2"
description = "Plumbum: shell combinators library"
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7"
[package.dependencies]
pywin32 = {version = "*", markers = "platform_system == \"Windows\" and platform_python_implementation != \"PyPy\""}
[package.extras]
dev = ["paramiko", "psutil", "pytest", "pytest-cov", "pytest-mock", "pytest-timeout"]
docs = ["Sphinx (>=3.0.0)", "sphinx-rtd-theme (>=0.5.0)"]
ssh = ["paramiko"]
[[package]]
name = "ply"
version = "3.11"
description = "Python Lex & Yacc"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "poethepoet"
version = "0.16.0"
description = "A task runner that works well with poetry."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
pastel = ">=0.2.1,<0.3.0"
tomli = ">=1.2.2"
[package.extras]
poetry_plugin = ["poetry (>=1.0,<2.0)"]
[[package]]
name = "progressbar2"
version = "4.0.0"
description = "A Python Progressbar library to provide visual (yet text based) progress to long running operations."
category = "main"
optional = true
python-versions = ">=3.7.0"
[package.dependencies]
python-utils = ">=3.0.0"
[package.extras]
docs = ["sphinx (>=1.8.5)"]
tests = ["flake8 (>=3.7.7)", "pytest (>=4.6.9)", "pytest-cov (>=2.6.1)", "pytest-mypy", "freezegun (>=0.3.11)", "sphinx (>=1.8.5)"]
[[package]]
name = "prometheus-client"
version = "0.14.1"
description = "Python client for the Prometheus monitoring system."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
twisted = ["twisted"]
[[package]]
name = "prompt-toolkit"
version = "3.0.30"
description = "Library for building powerful interactive command lines in Python"
category = "dev"
optional = false
python-versions = ">=3.6.2"
[package.dependencies]
wcwidth = "*"
[[package]]
name = "protobuf"
version = "3.19.4"
description = "Protocol Buffers"
category = "dev"
optional = false
python-versions = ">=3.5"
[[package]]
name = "psutil"
version = "5.9.1"
description = "Cross-platform lib for process and system monitoring in Python."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.extras]
test = ["ipaddress", "mock", "enum34", "pywin32", "wmi"]
[[package]]
name = "ptyprocess"
version = "0.7.0"
description = "Run a subprocess in a pseudo terminal"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pure-eval"
version = "0.2.2"
description = "Safely evaluate AST nodes without side effects"
category = "dev"
optional = false
python-versions = "*"
[package.extras]
tests = ["pytest"]
[[package]]
name = "py"
version = "1.11.0"
description = "library with cross-python path, ini-parsing, io, code, log facilities"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "pyasn1"
version = "0.4.8"
description = "ASN.1 types and codecs"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pyasn1-modules"
version = "0.2.8"
description = "A collection of ASN.1-based protocols modules."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
pyasn1 = ">=0.4.6,<0.5.0"
[[package]]
name = "pycodestyle"
version = "2.8.0"
description = "Python style guide checker"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "pycparser"
version = "2.21"
description = "C parser in Python"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pydata-sphinx-theme"
version = "0.9.0"
description = "Bootstrap-based Sphinx theme from the PyData community"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
beautifulsoup4 = "*"
docutils = "!=0.17.0"
packaging = "*"
sphinx = ">=4.0.2"
[package.extras]
dev = ["pydata-sphinx-theme", "nox", "pre-commit", "pyyaml"]
coverage = ["pydata-sphinx-theme", "codecov", "pytest-cov"]
test = ["pydata-sphinx-theme", "pytest"]
doc = ["sphinx-design", "xarray", "numpy", "plotly", "jupyter-sphinx", "sphinx-sitemap", "sphinxext-rediraffe", "pytest-regressions", "pytest", "pandas", "myst-parser", "numpydoc"]
[[package]]
name = "pydot"
version = "1.4.2"
description = "Python interface to Graphviz's Dot"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.dependencies]
pyparsing = ">=2.1.4"
[[package]]
name = "pydotplus"
version = "2.0.2"
description = "Python interface to Graphviz's Dot language"
category = "main"
optional = true
python-versions = "*"
[package.dependencies]
pyparsing = ">=2.0.1"
[[package]]
name = "pyflakes"
version = "2.4.0"
description = "passive checker of Python programs"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pygam"
version = "0.8.0"
description = "GAM toolkit"
category = "main"
optional = true
python-versions = "*"
[package.dependencies]
future = "*"
numpy = "*"
progressbar2 = "*"
scipy = "*"
[[package]]
name = "pygments"
version = "2.13.0"
description = "Pygments is a syntax highlighting package written in Python."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
plugins = ["importlib-metadata"]
[[package]]
name = "pygraphviz"
version = "1.10"
description = "Python interface to Graphviz"
category = "main"
optional = false
python-versions = ">=3.8"
[[package]]
name = "pyparsing"
version = "3.0.9"
description = "pyparsing module - Classes and methods to define and execute parsing grammars"
category = "main"
optional = false
python-versions = ">=3.6.8"
[package.extras]
diagrams = ["railroad-diagrams", "jinja2"]
[[package]]
name = "pyro-api"
version = "0.1.2"
description = "Generic API for dispatch to Pyro backends."
category = "main"
optional = true
python-versions = "*"
[package.extras]
test = ["pytest (>=5.0)", "flake8"]
dev = ["ipython", "sphinx-rtd-theme", "sphinx (>=2.0)"]
[[package]]
name = "pyro-ppl"
version = "1.8.1"
description = "A Python library for probabilistic modeling and inference"
category = "main"
optional = true
python-versions = ">=3.7"
[package.dependencies]
numpy = ">=1.7"
opt-einsum = ">=2.3.2"
pyro-api = ">=0.1.1"
torch = ">=1.11.0"
tqdm = ">=4.36"
[package.extras]
dev = ["jupyter (>=1.0.0)", "graphviz (>=0.8)", "matplotlib (>=1.3)", "torchvision (>=0.12.0)", "visdom (>=0.1.4)", "pandas", "pillow (==8.2.0)", "scikit-learn", "seaborn", "wget", "lap", "black (>=21.4b0)", "flake8", "isort (>=5.0)", "mypy (>=0.812)", "nbformat", "nbsphinx (>=0.3.2)", "nbstripout", "nbval", "ninja", "pypandoc", "pytest (>=5.0)", "pytest-xdist", "scipy (>=1.1)", "sphinx", "sphinx-rtd-theme", "yapf"]
extras = ["jupyter (>=1.0.0)", "graphviz (>=0.8)", "matplotlib (>=1.3)", "torchvision (>=0.12.0)", "visdom (>=0.1.4)", "pandas", "pillow (==8.2.0)", "scikit-learn", "seaborn", "wget", "lap"]
funsor = ["funsor[torch] (==0.4.3)"]
horovod = ["horovod[pytorch] (>=0.19)"]
profile = ["prettytable", "pytest-benchmark", "snakeviz"]
test = ["jupyter (>=1.0.0)", "graphviz (>=0.8)", "matplotlib (>=1.3)", "torchvision (>=0.12.0)", "visdom (>=0.1.4)", "pandas", "pillow (==8.2.0)", "scikit-learn", "seaborn", "wget", "lap", "black (>=21.4b0)", "flake8", "nbval", "pytest (>=5.0)", "pytest-cov", "scipy (>=1.1)"]
[[package]]
name = "pyrsistent"
version = "0.18.1"
description = "Persistent/Functional/Immutable data structures"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "pytest"
version = "7.1.2"
description = "pytest: simple powerful testing with Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
atomicwrites = {version = ">=1.0", markers = "sys_platform == \"win32\""}
attrs = ">=19.2.0"
colorama = {version = "*", markers = "sys_platform == \"win32\""}
iniconfig = "*"
packaging = "*"
pluggy = ">=0.12,<2.0"
py = ">=1.8.2"
tomli = ">=1.0.0"
[package.extras]
testing = ["argcomplete", "hypothesis (>=3.56)", "mock", "nose", "pygments (>=2.7.2)", "requests", "xmlschema"]
[[package]]
name = "pytest-cov"
version = "3.0.0"
description = "Pytest plugin for measuring coverage."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
coverage = {version = ">=5.2.1", extras = ["toml"]}
pytest = ">=4.6"
[package.extras]
testing = ["virtualenv", "pytest-xdist", "six", "process-tests", "hunter", "fields"]
[[package]]
name = "pytest-split"
version = "0.8.0"
description = "Pytest plugin which splits the test suite to equally sized sub suites based on test execution time."
category = "dev"
optional = false
python-versions = ">=3.7.1,<4.0"
[package.dependencies]
pytest = ">=5,<8"
[[package]]
name = "python-dateutil"
version = "2.8.2"
description = "Extensions to the standard Python datetime module"
category = "main"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7"
[package.dependencies]
six = ">=1.5"
[[package]]
name = "python-utils"
version = "3.3.3"
description = "Python Utils is a module with some convenient utilities not included with the standard Python install"
category = "main"
optional = true
python-versions = ">3.6.0"
[package.extras]
docs = ["mock", "sphinx", "python-utils"]
loguru = ["loguru"]
tests = ["flake8", "pytest", "pytest-cov", "pytest-mypy", "pytest-asyncio", "sphinx", "types-setuptools", "loguru"]
[[package]]
name = "pytz"
version = "2022.2.1"
description = "World timezone definitions, modern and historical"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "pytz-deprecation-shim"
version = "0.1.0.post0"
description = "Shims to make deprecation of pytz easier"
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.7"
[package.dependencies]
"backports.zoneinfo" = {version = "*", markers = "python_version >= \"3.6\" and python_version < \"3.9\""}
tzdata = {version = "*", markers = "python_version >= \"3.6\""}
[[package]]
name = "pywin32"
version = "304"
description = "Python for Window Extensions"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pywin32-ctypes"
version = "0.2.0"
description = ""
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pywinpty"
version = "2.0.7"
description = "Pseudo terminal support for Windows from Python."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "pyzmq"
version = "23.2.1"
description = "Python bindings for 0MQ"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cffi = {version = "*", markers = "implementation_name == \"pypy\""}
py = {version = "*", markers = "implementation_name == \"pypy\""}
[[package]]
name = "qtconsole"
version = "5.3.1"
description = "Jupyter Qt console"
category = "dev"
optional = false
python-versions = ">= 3.7"
[package.dependencies]
ipykernel = ">=4.1"
ipython-genutils = "*"
jupyter-client = ">=4.1"
jupyter-core = "*"
pygments = "*"
pyzmq = ">=17.1"
qtpy = ">=2.0.1"
traitlets = "<5.2.1 || >5.2.1,<5.2.2 || >5.2.2"
[package.extras]
doc = ["Sphinx (>=1.3)"]
test = ["flaky", "pytest", "pytest-qt"]
[[package]]
name = "qtpy"
version = "2.2.0"
description = "Provides an abstraction layer on top of the various Qt bindings (PyQt5/6 and PySide2/6)."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
packaging = "*"
[package.extras]
test = ["pytest-qt", "pytest-cov (>=3.0.0)", "pytest (>=6,!=7.0.0,!=7.0.1)"]
[[package]]
name = "readme-renderer"
version = "36.0"
description = "readme_renderer is a library for rendering \"readme\" descriptions for Warehouse"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
bleach = ">=2.1.0"
docutils = ">=0.13.1"
Pygments = ">=2.5.1"
[package.extras]
md = ["cmarkgfm (>=0.8.0)"]
[[package]]
name = "requests"
version = "2.28.1"
description = "Python HTTP for Humans."
category = "dev"
optional = false
python-versions = ">=3.7, <4"
[package.dependencies]
certifi = ">=2017.4.17"
charset-normalizer = ">=2,<3"
idna = ">=2.5,<4"
urllib3 = ">=1.21.1,<1.27"
[package.extras]
socks = ["PySocks (>=1.5.6,!=1.5.7)"]
use_chardet_on_py3 = ["chardet (>=3.0.2,<6)"]
[[package]]
name = "requests-oauthlib"
version = "1.3.1"
description = "OAuthlib authentication support for Requests."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.dependencies]
oauthlib = ">=3.0.0"
requests = ">=2.0.0"
[package.extras]
rsa = ["oauthlib[signedtoken] (>=3.0.0)"]
[[package]]
name = "requests-toolbelt"
version = "0.9.1"
description = "A utility belt for advanced users of python-requests"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
requests = ">=2.0.1,<3.0.0"
[[package]]
name = "rfc3986"
version = "2.0.0"
description = "Validating URI References per RFC 3986"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
idna2008 = ["idna"]
[[package]]
name = "rich"
version = "12.5.1"
description = "Render rich text, tables, progress bars, syntax highlighting, markdown and more to the terminal"
category = "dev"
optional = false
python-versions = ">=3.6.3,<4.0.0"
[package.dependencies]
commonmark = ">=0.9.0,<0.10.0"
pygments = ">=2.6.0,<3.0.0"
typing-extensions = {version = ">=4.0.0,<5.0", markers = "python_version < \"3.9\""}
[package.extras]
jupyter = ["ipywidgets (>=7.5.1,<8.0.0)"]
[[package]]
name = "rpy2"
version = "3.5.3"
description = "Python interface to the R language (embedded R)"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
cffi = ">=1.10.0"
jinja2 = "*"
packaging = {version = "*", markers = "platform_system == \"Windows\""}
pytz = "*"
tzlocal = "*"
[package.extras]
all = ["numpy", "pytest", "setuptools", "pandas"]
numpy = ["pandas"]
pandas = ["numpy", "pandas"]
setup = ["setuptools"]
test = ["pytest"]
[[package]]
name = "rsa"
version = "4.9"
description = "Pure-Python RSA implementation"
category = "dev"
optional = false
python-versions = ">=3.6,<4"
[package.dependencies]
pyasn1 = ">=0.1.3"
[[package]]
name = "scikit-learn"
version = "1.0.2"
description = "A set of python modules for machine learning and data mining"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
joblib = ">=0.11"
numpy = ">=1.14.6"
scipy = ">=1.1.0"
threadpoolctl = ">=2.0.0"
[package.extras]
tests = ["pyamg (>=4.0.0)", "mypy (>=0.770)", "black (>=21.6b0)", "flake8 (>=3.8.2)", "pytest-cov (>=2.9.0)", "pytest (>=5.0.1)", "pandas (>=0.25.0)", "scikit-image (>=0.14.5)", "matplotlib (>=2.2.3)"]
examples = ["seaborn (>=0.9.0)", "pandas (>=0.25.0)", "scikit-image (>=0.14.5)", "matplotlib (>=2.2.3)"]
docs = ["sphinxext-opengraph (>=0.4.2)", "sphinx-prompt (>=1.3.0)", "Pillow (>=7.1.2)", "numpydoc (>=1.0.0)", "sphinx-gallery (>=0.7.0)", "sphinx (>=4.0.1)", "memory-profiler (>=0.57.0)", "seaborn (>=0.9.0)", "pandas (>=0.25.0)", "scikit-image (>=0.14.5)", "matplotlib (>=2.2.3)"]
benchmark = ["memory-profiler (>=0.57.0)", "pandas (>=0.25.0)", "matplotlib (>=2.2.3)"]
[[package]]
name = "scipy"
version = "1.9.0"
description = "SciPy: Scientific Library for Python"
category = "main"
optional = false
python-versions = ">=3.8,<3.12"
[package.dependencies]
numpy = ">=1.18.5,<1.25.0"
[[package]]
name = "seaborn"
version = "0.11.2"
description = "seaborn: statistical data visualization"
category = "main"
optional = true
python-versions = ">=3.6"
[package.dependencies]
matplotlib = ">=2.2"
numpy = ">=1.15"
pandas = ">=0.23"
scipy = ">=1.0"
[[package]]
name = "secretstorage"
version = "3.3.3"
description = "Python bindings to FreeDesktop.org Secret Service API"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cryptography = ">=2.0"
jeepney = ">=0.6"
[[package]]
name = "send2trash"
version = "1.8.0"
description = "Send file to trash natively under Mac OS X, Windows and Linux."
category = "dev"
optional = false
python-versions = "*"
[package.extras]
win32 = ["pywin32"]
objc = ["pyobjc-framework-cocoa"]
nativelib = ["pywin32", "pyobjc-framework-cocoa"]
[[package]]
name = "setuptools-scm"
version = "6.4.2"
description = "the blessed package to manage your versions by scm tags"
category = "main"
optional = true
python-versions = ">=3.6"
[package.dependencies]
packaging = ">=20.0"
tomli = ">=1.0.0"
[package.extras]
toml = ["setuptools (>=42)"]
test = ["virtualenv (>20)", "pytest (>=6.2)"]
[[package]]
name = "shap"
version = "0.39.0"
description = "A unified approach to explain the output of any machine learning model."
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
cloudpickle = "*"
numba = "*"
numpy = "*"
pandas = "*"
scikit-learn = "*"
scipy = "*"
slicer = "0.0.7"
tqdm = ">4.25.0"
[package.extras]
all = ["pyod", "pytest", "sphinx-rtd-theme", "sentencepiece", "transformers", "lightgbm", "torch", "numpydoc", "nbsphinx", "opencv-python", "ipython", "pytest-mpl", "matplotlib", "xgboost", "catboost", "sphinx", "pytest-cov", "pyspark", "lime"]
docs = ["matplotlib", "ipython", "numpydoc", "sphinx-rtd-theme", "sphinx", "nbsphinx"]
others = ["lime"]
plots = ["matplotlib", "ipython"]
test = ["pytest", "pytest-mpl", "pytest-cov", "xgboost", "lightgbm", "catboost", "pyspark", "pyod", "transformers", "torch", "sentencepiece", "opencv-python"]
[[package]]
name = "six"
version = "1.16.0"
description = "Python 2 and 3 compatibility utilities"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "slicer"
version = "0.0.7"
description = "A small package for big slicing."
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "snowballstemmer"
version = "2.2.0"
description = "This package provides 29 stemmers for 28 languages generated from Snowball algorithms."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "soupsieve"
version = "2.3.2.post1"
description = "A modern CSS selector implementation for Beautiful Soup."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "sparse"
version = "0.13.0"
description = "Sparse n-dimensional arrays"
category = "main"
optional = false
python-versions = ">=3.6, <4"
[package.dependencies]
numba = ">=0.49"
numpy = ">=1.17"
scipy = ">=0.19"
[package.extras]
all = ["dask", "pytest (>=3.5)", "pytest-black", "pytest-cov", "tox", "sphinx", "sphinx-rtd-theme"]
docs = ["sphinx", "sphinx-rtd-theme"]
tests = ["dask", "pytest (>=3.5)", "pytest-black", "pytest-cov"]
tox = ["dask", "pytest (>=3.5)", "pytest-black", "pytest-cov", "tox"]
[[package]]
name = "sphinx"
version = "5.1.1"
description = "Python documentation generator"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
alabaster = ">=0.7,<0.8"
babel = ">=1.3"
colorama = {version = ">=0.3.5", markers = "sys_platform == \"win32\""}
docutils = ">=0.14,<0.20"
imagesize = "*"
importlib-metadata = {version = ">=4.4", markers = "python_version < \"3.10\""}
Jinja2 = ">=2.3"
packaging = "*"
Pygments = ">=2.0"
requests = ">=2.5.0"
snowballstemmer = ">=1.1"
sphinxcontrib-applehelp = "*"
sphinxcontrib-devhelp = "*"
sphinxcontrib-htmlhelp = ">=2.0.0"
sphinxcontrib-jsmath = "*"
sphinxcontrib-qthelp = "*"
sphinxcontrib-serializinghtml = ">=1.1.5"
[package.extras]
docs = ["sphinxcontrib-websupport"]
lint = ["flake8 (>=3.5.0)", "flake8-comprehensions", "flake8-bugbear", "isort", "mypy (>=0.971)", "sphinx-lint", "docutils-stubs", "types-typed-ast", "types-requests"]
test = ["pytest (>=4.6)", "html5lib", "cython", "typed-ast"]
[[package]]
name = "sphinx-multiversion"
version = "0.2.4"
description = ""
category = "dev"
optional = false
python-versions = "*"
develop = false
[package.dependencies]
sphinx = ">=2.1"
[package.source]
type = "git"
url = "https://github.com/petergtz/sphinx-multiversion.git"
reference = "override-version-with-refname"
resolved_reference = "cd3a67e7998c5205e2456e7cb1026bd3b9f36253"
[[package]]
name = "sphinx-rtd-theme"
version = "1.0.0"
description = "Read the Docs theme for Sphinx"
category = "dev"
optional = false
python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*"
[package.dependencies]
docutils = "<0.18"
sphinx = ">=1.6"
[package.extras]
dev = ["bump2version", "sphinxcontrib-httpdomain", "transifex-client"]
[[package]]
name = "sphinxcontrib-applehelp"
version = "1.0.2"
description = "sphinxcontrib-applehelp is a sphinx extension which outputs Apple help books"
category = "dev"
optional = false
python-versions = ">=3.5"
[package.extras]
test = ["pytest"]
lint = ["docutils-stubs", "mypy", "flake8"]
[[package]]
name = "sphinxcontrib-devhelp"
version = "1.0.2"
description = "sphinxcontrib-devhelp is a sphinx extension which outputs Devhelp document."
category = "dev"
optional = false
python-versions = ">=3.5"
[package.extras]
test = ["pytest"]
lint = ["docutils-stubs", "mypy", "flake8"]
[[package]]
name = "sphinxcontrib-googleanalytics"
version = "0.2"
description = ""
category = "dev"
optional = false
python-versions = "*"
develop = false
[package.dependencies]
Sphinx = ">=0.6"
[package.source]
type = "git"
url = "https://github.com/petergtz/googleanalytics.git"
reference = "master"
resolved_reference = "f32bcb8c0fb74bf284652b9a5c154337e20058b9"
[[package]]
name = "sphinxcontrib-htmlhelp"
version = "2.0.0"
description = "sphinxcontrib-htmlhelp is a sphinx extension which renders HTML help files"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
test = ["html5lib", "pytest"]
lint = ["docutils-stubs", "mypy", "flake8"]
[[package]]
name = "sphinxcontrib-jsmath"
version = "1.0.1"
description = "A sphinx extension which renders display math in HTML via JavaScript"
category = "dev"
optional = false
python-versions = ">=3.5"
[package.extras]
test = ["mypy", "flake8", "pytest"]
[[package]]
name = "sphinxcontrib-qthelp"
version = "1.0.3"
description = "sphinxcontrib-qthelp is a sphinx extension which outputs QtHelp document."
category = "dev"
optional = false
python-versions = ">=3.5"
[package.extras]
test = ["pytest"]
lint = ["docutils-stubs", "mypy", "flake8"]
[[package]]
name = "sphinxcontrib-serializinghtml"
version = "1.1.5"
description = "sphinxcontrib-serializinghtml is a sphinx extension which outputs \"serialized\" HTML files (json and pickle)."
category = "dev"
optional = false
python-versions = ">=3.5"
[package.extras]
test = ["pytest"]
lint = ["docutils-stubs", "mypy", "flake8"]
[[package]]
name = "stack-data"
version = "0.4.0"
description = "Extract data from python stack frames and tracebacks for informative displays"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
asttokens = "*"
executing = "*"
pure-eval = "*"
[package.extras]
tests = ["cython", "littleutils", "pygments", "typeguard", "pytest"]
[[package]]
name = "statsmodels"
version = "0.13.2"
description = "Statistical computations and models for Python"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
numpy = ">=1.17"
packaging = ">=21.3"
pandas = ">=0.25"
patsy = ">=0.5.2"
scipy = ">=1.3"
[package.extras]
build = ["cython (>=0.29.26)"]
develop = ["cython (>=0.29.26)"]
docs = ["sphinx", "nbconvert", "jupyter-client", "ipykernel", "matplotlib", "nbformat", "numpydoc", "pandas-datareader"]
[[package]]
name = "sympy"
version = "1.10.1"
description = "Computer algebra system (CAS) in Python"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
mpmath = ">=0.19"
[[package]]
name = "tensorboard"
version = "2.9.1"
description = "TensorBoard lets you watch Tensors Flow"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
absl-py = ">=0.4"
google-auth = ">=1.6.3,<3"
google-auth-oauthlib = ">=0.4.1,<0.5"
grpcio = ">=1.24.3"
markdown = ">=2.6.8"
numpy = ">=1.12.0"
protobuf = ">=3.9.2,<3.20"
requests = ">=2.21.0,<3"
tensorboard-data-server = ">=0.6.0,<0.7.0"
tensorboard-plugin-wit = ">=1.6.0"
werkzeug = ">=1.0.1"
[[package]]
name = "tensorboard-data-server"
version = "0.6.1"
description = "Fast data loading for TensorBoard"
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "tensorboard-plugin-wit"
version = "1.8.1"
description = "What-If Tool TensorBoard plugin."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "tensorflow"
version = "2.9.1"
description = "TensorFlow is an open source machine learning framework for everyone."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
absl-py = ">=1.0.0"
astunparse = ">=1.6.0"
flatbuffers = ">=1.12,<2"
gast = ">=0.2.1,<=0.4.0"
google-pasta = ">=0.1.1"
grpcio = ">=1.24.3,<2.0"
h5py = ">=2.9.0"
keras = ">=2.9.0rc0,<2.10.0"
keras-preprocessing = ">=1.1.1"
libclang = ">=13.0.0"
numpy = ">=1.20"
opt-einsum = ">=2.3.2"
packaging = "*"
protobuf = ">=3.9.2,<3.20"
six = ">=1.12.0"
tensorboard = ">=2.9,<2.10"
tensorflow-estimator = ">=2.9.0rc0,<2.10.0"
tensorflow-io-gcs-filesystem = ">=0.23.1"
termcolor = ">=1.1.0"
typing-extensions = ">=3.6.6"
wrapt = ">=1.11.0"
[[package]]
name = "tensorflow-estimator"
version = "2.9.0"
description = "TensorFlow Estimator."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "tensorflow-io-gcs-filesystem"
version = "0.26.0"
description = "TensorFlow IO"
category = "dev"
optional = false
python-versions = ">=3.7, <3.11"
[package.extras]
tensorflow = ["tensorflow (>=2.9.0,<2.10.0)"]
tensorflow-aarch64 = ["tensorflow-aarch64 (>=2.9.0,<2.10.0)"]
tensorflow-cpu = ["tensorflow-cpu (>=2.9.0,<2.10.0)"]
tensorflow-gpu = ["tensorflow-gpu (>=2.9.0,<2.10.0)"]
tensorflow-rocm = ["tensorflow-rocm (>=2.9.0,<2.10.0)"]
[[package]]
name = "termcolor"
version = "1.1.0"
description = "ANSII Color formatting for output in terminal."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "terminado"
version = "0.15.0"
description = "Tornado websocket backend for the Xterm.js Javascript terminal emulator library."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
ptyprocess = {version = "*", markers = "os_name != \"nt\""}
pywinpty = {version = ">=1.1.0", markers = "os_name == \"nt\""}
tornado = ">=6.1.0"
[package.extras]
test = ["pre-commit", "pytest-timeout", "pytest (>=6.0)"]
[[package]]
name = "threadpoolctl"
version = "3.1.0"
description = "threadpoolctl"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "tinycss2"
version = "1.1.1"
description = "A tiny CSS parser"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
webencodings = ">=0.4"
[package.extras]
test = ["coverage", "pytest-isort", "pytest-flake8", "pytest-cov", "pytest"]
doc = ["sphinx-rtd-theme", "sphinx"]
[[package]]
name = "tokenize-rt"
version = "4.2.1"
description = "A wrapper around the stdlib `tokenize` which roundtrips."
category = "dev"
optional = false
python-versions = ">=3.6.1"
[[package]]
name = "tomli"
version = "2.0.1"
description = "A lil' TOML parser"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "torch"
version = "1.12.1"
description = "Tensors and Dynamic neural networks in Python with strong GPU acceleration"
category = "main"
optional = true
python-versions = ">=3.7.0"
[package.dependencies]
typing-extensions = "*"
[[package]]
name = "tornado"
version = "6.2"
description = "Tornado is a Python web framework and asynchronous networking library, originally developed at FriendFeed."
category = "dev"
optional = false
python-versions = ">= 3.7"
[[package]]
name = "tqdm"
version = "4.64.0"
description = "Fast, Extensible Progress Meter"
category = "main"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,>=2.7"
[package.dependencies]
colorama = {version = "*", markers = "platform_system == \"Windows\""}
[package.extras]
dev = ["py-make (>=0.1.0)", "twine", "wheel"]
notebook = ["ipywidgets (>=6)"]
slack = ["slack-sdk"]
telegram = ["requests"]
[[package]]
name = "traitlets"
version = "5.3.0"
description = ""
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
test = ["pre-commit", "pytest"]
[[package]]
name = "twine"
version = "4.0.1"
description = "Collection of utilities for publishing packages on PyPI"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
importlib-metadata = ">=3.6"
keyring = ">=15.1"
pkginfo = ">=1.8.1"
readme-renderer = ">=35.0"
requests = ">=2.20"
requests-toolbelt = ">=0.8.0,<0.9.0 || >0.9.0"
rfc3986 = ">=1.4.0"
rich = ">=12.0.0"
urllib3 = ">=1.26.0"
[[package]]
name = "typing-extensions"
version = "4.3.0"
description = "Backported and Experimental Type Hints for Python 3.7+"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "tzdata"
version = "2022.2"
description = "Provider of IANA time zone data"
category = "dev"
optional = false
python-versions = ">=2"
[[package]]
name = "tzlocal"
version = "4.2"
description = "tzinfo object for the local timezone"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
"backports.zoneinfo" = {version = "*", markers = "python_version < \"3.9\""}
pytz-deprecation-shim = "*"
tzdata = {version = "*", markers = "platform_system == \"Windows\""}
[package.extras]
devenv = ["black", "pyroma", "pytest-cov", "zest.releaser"]
test = ["pytest-mock (>=3.3)", "pytest (>=4.3)"]
[[package]]
name = "urllib3"
version = "1.26.11"
description = "HTTP library with thread-safe connection pooling, file post, and more."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*, <4"
[package.extras]
brotli = ["brotlicffi (>=0.8.0)", "brotli (>=1.0.9)", "brotlipy (>=0.6.0)"]
secure = ["pyOpenSSL (>=0.14)", "cryptography (>=1.3.4)", "idna (>=2.0.0)", "certifi", "ipaddress"]
socks = ["PySocks (>=1.5.6,!=1.5.7,<2.0)"]
[[package]]
name = "wcwidth"
version = "0.2.5"
description = "Measures the displayed width of unicode strings in a terminal"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "webencodings"
version = "0.5.1"
description = "Character encoding aliases for legacy web content"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "werkzeug"
version = "2.2.2"
description = "The comprehensive WSGI web application library."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
MarkupSafe = ">=2.1.1"
[package.extras]
watchdog = ["watchdog"]
[[package]]
name = "widgetsnbextension"
version = "4.0.2"
description = "Jupyter interactive widgets for Jupyter Notebook"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "wrapt"
version = "1.14.1"
description = "Module for decorators, wrappers and monkey patching."
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7"
[[package]]
name = "xgboost"
version = "1.6.1"
description = "XGBoost Python Package"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
numpy = "*"
scipy = "*"
[package.extras]
dask = ["dask", "pandas", "distributed"]
datatable = ["datatable"]
pandas = ["pandas"]
plotting = ["graphviz", "matplotlib"]
scikit-learn = ["scikit-learn"]
[[package]]
name = "zipp"
version = "3.8.1"
description = "Backport of pathlib-compatible object wrapper for zip files"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
docs = ["sphinx", "jaraco.packaging (>=9)", "rst.linker (>=1.9)", "jaraco.tidelift (>=1.4)"]
testing = ["pytest (>=6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest-cov", "pytest-enabler (>=1.3)", "jaraco.itertools", "func-timeout", "pytest-black (>=0.3.7)", "pytest-mypy (>=0.9.1)"]
[extras]
causalml = ["causalml", "llvmlite"]
plotting = ["matplotlib"]
[metadata]
lock-version = "1.1"
python-versions = ">=3.8,<3.10"
content-hash = "9cc06ba8e5b31119a86589b42975b066d7131842bebe43be8057fbfd0f1bd6da"
[metadata.files]
absl-py = [
{file = "absl-py-1.2.0.tar.gz", hash = "sha256:f568809938c49abbda89826223c992b630afd23c638160ad7840cfe347710d97"},
{file = "absl_py-1.2.0-py3-none-any.whl", hash = "sha256:5d15f85b8cc859c6245bc9886ba664460ed96a6fee895416caa37d669ee74a9a"},
]
alabaster = [
{file = "alabaster-0.7.12-py2.py3-none-any.whl", hash = "sha256:446438bdcca0e05bd45ea2de1668c1d9b032e1a9154c2c259092d77031ddd359"},
{file = "alabaster-0.7.12.tar.gz", hash = "sha256:a661d72d58e6ea8a57f7a86e37d86716863ee5e92788398526d58b26a4e4dc02"},
]
appnope = [
{file = "appnope-0.1.3-py2.py3-none-any.whl", hash = "sha256:265a455292d0bd8a72453494fa24df5a11eb18373a60c7c0430889f22548605e"},
{file = "appnope-0.1.3.tar.gz", hash = "sha256:02bd91c4de869fbb1e1c50aafc4098827a7a54ab2f39d9dcba6c9547ed920e24"},
]
argon2-cffi = [
{file = "argon2-cffi-21.3.0.tar.gz", hash = "sha256:d384164d944190a7dd7ef22c6aa3ff197da12962bd04b17f64d4e93d934dba5b"},
{file = "argon2_cffi-21.3.0-py3-none-any.whl", hash = "sha256:8c976986f2c5c0e5000919e6de187906cfd81fb1c72bf9d88c01177e77da7f80"},
]
argon2-cffi-bindings = [
{file = "argon2-cffi-bindings-21.2.0.tar.gz", hash = "sha256:bb89ceffa6c791807d1305ceb77dbfacc5aa499891d2c55661c6459651fc39e3"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-macosx_10_9_x86_64.whl", hash = "sha256:ccb949252cb2ab3a08c02024acb77cfb179492d5701c7cbdbfd776124d4d2367"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9524464572e12979364b7d600abf96181d3541da11e23ddf565a32e70bd4dc0d"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b746dba803a79238e925d9046a63aa26bf86ab2a2fe74ce6b009a1c3f5c8f2ae"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:58ed19212051f49a523abb1dbe954337dc82d947fb6e5a0da60f7c8471a8476c"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:bd46088725ef7f58b5a1ef7ca06647ebaf0eb4baff7d1d0d177c6cc8744abd86"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_i686.whl", hash = "sha256:8cd69c07dd875537a824deec19f978e0f2078fdda07fd5c42ac29668dda5f40f"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:f1152ac548bd5b8bcecfb0b0371f082037e47128653df2e8ba6e914d384f3c3e"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-win32.whl", hash = "sha256:603ca0aba86b1349b147cab91ae970c63118a0f30444d4bc80355937c950c082"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-win_amd64.whl", hash = "sha256:b2ef1c30440dbbcba7a5dc3e319408b59676e2e039e2ae11a8775ecf482b192f"},
{file = "argon2_cffi_bindings-21.2.0-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:e415e3f62c8d124ee16018e491a009937f8cf7ebf5eb430ffc5de21b900dad93"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:3e385d1c39c520c08b53d63300c3ecc28622f076f4c2b0e6d7e796e9f6502194"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2c3e3cc67fdb7d82c4718f19b4e7a87123caf8a93fde7e23cf66ac0337d3cb3f"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6a22ad9800121b71099d0fb0a65323810a15f2e292f2ba450810a7316e128ee5"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f9f8b450ed0547e3d473fdc8612083fd08dd2120d6ac8f73828df9b7d45bb351"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:93f9bf70084f97245ba10ee36575f0c3f1e7d7724d67d8e5b08e61787c320ed7"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:3b9ef65804859d335dc6b31582cad2c5166f0c3e7975f324d9ffaa34ee7e6583"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d4966ef5848d820776f5f562a7d45fdd70c2f330c961d0d745b784034bd9f48d"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:20ef543a89dee4db46a1a6e206cd015360e5a75822f76df533845c3cbaf72670"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ed2937d286e2ad0cc79a7087d3c272832865f779430e0cc2b4f3718d3159b0cb"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:5e00316dabdaea0b2dd82d141cc66889ced0cdcbfa599e8b471cf22c620c329a"},
]
asttokens = [
{file = "asttokens-2.0.8-py2.py3-none-any.whl", hash = "sha256:e3305297c744ae53ffa032c45dc347286165e4ffce6875dc662b205db0623d86"},
{file = "asttokens-2.0.8.tar.gz", hash = "sha256:c61e16246ecfb2cde2958406b4c8ebc043c9e6d73aaa83c941673b35e5d3a76b"},
]
astunparse = [
{file = "astunparse-1.6.3-py2.py3-none-any.whl", hash = "sha256:c2652417f2c8b5bb325c885ae329bdf3f86424075c4fd1a128674bc6fba4b8e8"},
{file = "astunparse-1.6.3.tar.gz", hash = "sha256:5ad93a8456f0d084c3456d059fd9a92cce667963232cbf763eac3bc5b7940872"},
]
atomicwrites = [
{file = "atomicwrites-1.4.1.tar.gz", hash = "sha256:81b2c9071a49367a7f770170e5eec8cb66567cfbbc8c73d20ce5ca4a8d71cf11"},
]
attrs = [
{file = "attrs-22.1.0-py2.py3-none-any.whl", hash = "sha256:86efa402f67bf2df34f51a335487cf46b1ec130d02b8d39fd248abfd30da551c"},
{file = "attrs-22.1.0.tar.gz", hash = "sha256:29adc2665447e5191d0e7c568fde78b21f9672d344281d0c6e1ab085429b22b6"},
]
babel = [
{file = "Babel-2.10.3-py3-none-any.whl", hash = "sha256:ff56f4892c1c4bf0d814575ea23471c230d544203c7748e8c68f0089478d48eb"},
{file = "Babel-2.10.3.tar.gz", hash = "sha256:7614553711ee97490f732126dc077f8d0ae084ebc6a96e23db1482afabdb2c51"},
]
backcall = [
{file = "backcall-0.2.0-py2.py3-none-any.whl", hash = "sha256:fbbce6a29f263178a1f7915c1940bde0ec2b2a967566fe1c65c1dfb7422bd255"},
{file = "backcall-0.2.0.tar.gz", hash = "sha256:5cbdbf27be5e7cfadb448baf0aa95508f91f2bbc6c6437cd9cd06e2a4c215e1e"},
]
"backports.zoneinfo" = [
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-macosx_10_14_x86_64.whl", hash = "sha256:da6013fd84a690242c310d77ddb8441a559e9cb3d3d59ebac9aca1a57b2e18bc"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:89a48c0d158a3cc3f654da4c2de1ceba85263fafb861b98b59040a5086259722"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:1c5742112073a563c81f786e77514969acb58649bcdf6cdf0b4ed31a348d4546"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-win32.whl", hash = "sha256:e8236383a20872c0cdf5a62b554b27538db7fa1bbec52429d8d106effbaeca08"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-win_amd64.whl", hash = "sha256:8439c030a11780786a2002261569bdf362264f605dfa4d65090b64b05c9f79a7"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:f04e857b59d9d1ccc39ce2da1021d196e47234873820cbeaad210724b1ee28ac"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:17746bd546106fa389c51dbea67c8b7c8f0d14b5526a579ca6ccf5ed72c526cf"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:5c144945a7752ca544b4b78c8c41544cdfaf9786f25fe5ffb10e838e19a27570"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-win32.whl", hash = "sha256:e55b384612d93be96506932a786bbcde5a2db7a9e6a4bb4bffe8b733f5b9036b"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-win_amd64.whl", hash = "sha256:a76b38c52400b762e48131494ba26be363491ac4f9a04c1b7e92483d169f6582"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:8961c0f32cd0336fb8e8ead11a1f8cd99ec07145ec2931122faaac1c8f7fd987"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-manylinux1_i686.whl", hash = "sha256:e81b76cace8eda1fca50e345242ba977f9be6ae3945af8d46326d776b4cf78d1"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:7b0a64cda4145548fed9efc10322770f929b944ce5cee6c0dfe0c87bf4c0c8c9"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-win32.whl", hash = "sha256:1b13e654a55cd45672cb54ed12148cd33628f672548f373963b0bff67b217328"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-win_amd64.whl", hash = "sha256:4a0f800587060bf8880f954dbef70de6c11bbe59c673c3d818921f042f9954a6"},
{file = "backports.zoneinfo-0.2.1.tar.gz", hash = "sha256:fadbfe37f74051d024037f223b8e001611eac868b5c5b06144ef4d8b799862f2"},
]
beautifulsoup4 = [
{file = "beautifulsoup4-4.11.1-py3-none-any.whl", hash = "sha256:58d5c3d29f5a36ffeb94f02f0d786cd53014cf9b3b3951d42e0080d8a9498d30"},
{file = "beautifulsoup4-4.11.1.tar.gz", hash = "sha256:ad9aa55b65ef2808eb405f46cf74df7fcb7044d5cbc26487f96eb2ef2e436693"},
]
black = [
{file = "black-22.6.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:f586c26118bc6e714ec58c09df0157fe2d9ee195c764f630eb0d8e7ccce72e69"},
{file = "black-22.6.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:b270a168d69edb8b7ed32c193ef10fd27844e5c60852039599f9184460ce0807"},
{file = "black-22.6.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:6797f58943fceb1c461fb572edbe828d811e719c24e03375fd25170ada53825e"},
{file = "black-22.6.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c85928b9d5f83b23cee7d0efcb310172412fbf7cb9d9ce963bd67fd141781def"},
{file = "black-22.6.0-cp310-cp310-win_amd64.whl", hash = "sha256:f6fe02afde060bbeef044af7996f335fbe90b039ccf3f5eb8f16df8b20f77666"},
{file = "black-22.6.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:cfaf3895a9634e882bf9d2363fed5af8888802d670f58b279b0bece00e9a872d"},
{file = "black-22.6.0-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:94783f636bca89f11eb5d50437e8e17fbc6a929a628d82304c80fa9cd945f256"},
{file = "black-22.6.0-cp36-cp36m-win_amd64.whl", hash = "sha256:2ea29072e954a4d55a2ff58971b83365eba5d3d357352a07a7a4df0d95f51c78"},
{file = "black-22.6.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:e439798f819d49ba1c0bd9664427a05aab79bfba777a6db94fd4e56fae0cb849"},
{file = "black-22.6.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:187d96c5e713f441a5829e77120c269b6514418f4513a390b0499b0987f2ff1c"},
{file = "black-22.6.0-cp37-cp37m-win_amd64.whl", hash = "sha256:074458dc2f6e0d3dab7928d4417bb6957bb834434516f21514138437accdbe90"},
{file = "black-22.6.0-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:a218d7e5856f91d20f04e931b6f16d15356db1c846ee55f01bac297a705ca24f"},
{file = "black-22.6.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:568ac3c465b1c8b34b61cd7a4e349e93f91abf0f9371eda1cf87194663ab684e"},
{file = "black-22.6.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:6c1734ab264b8f7929cef8ae5f900b85d579e6cbfde09d7387da8f04771b51c6"},
{file = "black-22.6.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c9a3ac16efe9ec7d7381ddebcc022119794872abce99475345c5a61aa18c45ad"},
{file = "black-22.6.0-cp38-cp38-win_amd64.whl", hash = "sha256:b9fd45787ba8aa3f5e0a0a98920c1012c884622c6c920dbe98dbd05bc7c70fbf"},
{file = "black-22.6.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:7ba9be198ecca5031cd78745780d65a3f75a34b2ff9be5837045dce55db83d1c"},
{file = "black-22.6.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:a3db5b6409b96d9bd543323b23ef32a1a2b06416d525d27e0f67e74f1446c8f2"},
{file = "black-22.6.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:560558527e52ce8afba936fcce93a7411ab40c7d5fe8c2463e279e843c0328ee"},
{file = "black-22.6.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b154e6bbde1e79ea3260c4b40c0b7b3109ffcdf7bc4ebf8859169a6af72cd70b"},
{file = "black-22.6.0-cp39-cp39-win_amd64.whl", hash = "sha256:4af5bc0e1f96be5ae9bd7aaec219c901a94d6caa2484c21983d043371c733fc4"},
{file = "black-22.6.0-py3-none-any.whl", hash = "sha256:ac609cf8ef5e7115ddd07d85d988d074ed00e10fbc3445aee393e70164a2219c"},
{file = "black-22.6.0.tar.gz", hash = "sha256:6c6d39e28aed379aec40da1c65434c77d75e65bb59a1e1c283de545fb4e7c6c9"},
]
bleach = [
{file = "bleach-5.0.1-py3-none-any.whl", hash = "sha256:085f7f33c15bd408dd9b17a4ad77c577db66d76203e5984b1bd59baeee948b2a"},
{file = "bleach-5.0.1.tar.gz", hash = "sha256:0d03255c47eb9bd2f26aa9bb7f2107732e7e8fe195ca2f64709fcf3b0a4a085c"},
]
cachetools = [
{file = "cachetools-5.2.0-py3-none-any.whl", hash = "sha256:f9f17d2aec496a9aa6b76f53e3b614c965223c061982d434d160f930c698a9db"},
{file = "cachetools-5.2.0.tar.gz", hash = "sha256:6a94c6402995a99c3970cc7e4884bb60b4a8639938157eeed436098bf9831757"},
]
causalml = [
{file = "causalml-0.12.3.tar.gz", hash = "sha256:4d85afa0590e4c60818311c2be9a5be64b8c6964b134c77654b738df551454fa"},
]
certifi = [
{file = "certifi-2022.6.15-py3-none-any.whl", hash = "sha256:fe86415d55e84719d75f8b69414f6438ac3547d2078ab91b67e779ef69378412"},
{file = "certifi-2022.6.15.tar.gz", hash = "sha256:84c85a9078b11105f04f3036a9482ae10e4621616db313fe045dd24743a0820d"},
]
cffi = [
{file = "cffi-1.15.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:a66d3508133af6e8548451b25058d5812812ec3798c886bf38ed24a98216fab2"},
{file = "cffi-1.15.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:470c103ae716238bbe698d67ad020e1db9d9dba34fa5a899b5e21577e6d52ed2"},
{file = "cffi-1.15.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:9ad5db27f9cabae298d151c85cf2bad1d359a1b9c686a275df03385758e2f914"},
{file = "cffi-1.15.1-cp27-cp27m-win32.whl", hash = "sha256:b3bbeb01c2b273cca1e1e0c5df57f12dce9a4dd331b4fa1635b8bec26350bde3"},
{file = "cffi-1.15.1-cp27-cp27m-win_amd64.whl", hash = "sha256:e00b098126fd45523dd056d2efba6c5a63b71ffe9f2bbe1a4fe1716e1d0c331e"},
{file = "cffi-1.15.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:d61f4695e6c866a23a21acab0509af1cdfd2c013cf256bbf5b6b5e2695827162"},
{file = "cffi-1.15.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:ed9cb427ba5504c1dc15ede7d516b84757c3e3d7868ccc85121d9310d27eed0b"},
{file = "cffi-1.15.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:39d39875251ca8f612b6f33e6b1195af86d1b3e60086068be9cc053aa4376e21"},
{file = "cffi-1.15.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:285d29981935eb726a4399badae8f0ffdff4f5050eaa6d0cfc3f64b857b77185"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3eb6971dcff08619f8d91607cfc726518b6fa2a9eba42856be181c6d0d9515fd"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:21157295583fe8943475029ed5abdcf71eb3911894724e360acff1d61c1d54bc"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5635bd9cb9731e6d4a1132a498dd34f764034a8ce60cef4f5319c0541159392f"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2012c72d854c2d03e45d06ae57f40d78e5770d252f195b93f581acf3ba44496e"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dd86c085fae2efd48ac91dd7ccffcfc0571387fe1193d33b6394db7ef31fe2a4"},
{file = "cffi-1.15.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:fa6693661a4c91757f4412306191b6dc88c1703f780c8234035eac011922bc01"},
{file = "cffi-1.15.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:59c0b02d0a6c384d453fece7566d1c7e6b7bae4fc5874ef2ef46d56776d61c9e"},
{file = "cffi-1.15.1-cp310-cp310-win32.whl", hash = "sha256:cba9d6b9a7d64d4bd46167096fc9d2f835e25d7e4c121fb2ddfc6528fb0413b2"},
{file = "cffi-1.15.1-cp310-cp310-win_amd64.whl", hash = "sha256:ce4bcc037df4fc5e3d184794f27bdaab018943698f4ca31630bc7f84a7b69c6d"},
{file = "cffi-1.15.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3d08afd128ddaa624a48cf2b859afef385b720bb4b43df214f85616922e6a5ac"},
{file = "cffi-1.15.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:3799aecf2e17cf585d977b780ce79ff0dc9b78d799fc694221ce814c2c19db83"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a591fe9e525846e4d154205572a029f653ada1a78b93697f3b5a8f1f2bc055b9"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3548db281cd7d2561c9ad9984681c95f7b0e38881201e157833a2342c30d5e8c"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:91fc98adde3d7881af9b59ed0294046f3806221863722ba7d8d120c575314325"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:94411f22c3985acaec6f83c6df553f2dbe17b698cc7f8ae751ff2237d96b9e3c"},
{file = "cffi-1.15.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:03425bdae262c76aad70202debd780501fabeaca237cdfddc008987c0e0f59ef"},
{file = "cffi-1.15.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:cc4d65aeeaa04136a12677d3dd0b1c0c94dc43abac5860ab33cceb42b801c1e8"},
{file = "cffi-1.15.1-cp311-cp311-win32.whl", hash = "sha256:a0f100c8912c114ff53e1202d0078b425bee3649ae34d7b070e9697f93c5d52d"},
{file = "cffi-1.15.1-cp311-cp311-win_amd64.whl", hash = "sha256:04ed324bda3cda42b9b695d51bb7d54b680b9719cfab04227cdd1e04e5de3104"},
{file = "cffi-1.15.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:50a74364d85fd319352182ef59c5c790484a336f6db772c1a9231f1c3ed0cbd7"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e263d77ee3dd201c3a142934a086a4450861778baaeeb45db4591ef65550b0a6"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:cec7d9412a9102bdc577382c3929b337320c4c4c4849f2c5cdd14d7368c5562d"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4289fc34b2f5316fbb762d75362931e351941fa95fa18789191b33fc4cf9504a"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:173379135477dc8cac4bc58f45db08ab45d228b3363adb7af79436135d028405"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:6975a3fac6bc83c4a65c9f9fcab9e47019a11d3d2cf7f3c0d03431bf145a941e"},
{file = "cffi-1.15.1-cp36-cp36m-win32.whl", hash = "sha256:2470043b93ff09bf8fb1d46d1cb756ce6132c54826661a32d4e4d132e1977adf"},
{file = "cffi-1.15.1-cp36-cp36m-win_amd64.whl", hash = "sha256:30d78fbc8ebf9c92c9b7823ee18eb92f2e6ef79b45ac84db507f52fbe3ec4497"},
{file = "cffi-1.15.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:198caafb44239b60e252492445da556afafc7d1e3ab7a1fb3f0584ef6d742375"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5ef34d190326c3b1f822a5b7a45f6c4535e2f47ed06fec77d3d799c450b2651e"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8102eaf27e1e448db915d08afa8b41d6c7ca7a04b7d73af6514df10a3e74bd82"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5df2768244d19ab7f60546d0c7c63ce1581f7af8b5de3eb3004b9b6fc8a9f84b"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a8c4917bd7ad33e8eb21e9a5bbba979b49d9a97acb3a803092cbc1133e20343c"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0e2642fe3142e4cc4af0799748233ad6da94c62a8bec3a6648bf8ee68b1c7426"},
{file = "cffi-1.15.1-cp37-cp37m-win32.whl", hash = "sha256:e229a521186c75c8ad9490854fd8bbdd9a0c9aa3a524326b55be83b54d4e0ad9"},
{file = "cffi-1.15.1-cp37-cp37m-win_amd64.whl", hash = "sha256:a0b71b1b8fbf2b96e41c4d990244165e2c9be83d54962a9a1d118fd8657d2045"},
{file = "cffi-1.15.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:320dab6e7cb2eacdf0e658569d2575c4dad258c0fcc794f46215e1e39f90f2c3"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1e74c6b51a9ed6589199c787bf5f9875612ca4a8a0785fb2d4a84429badaf22a"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a5c84c68147988265e60416b57fc83425a78058853509c1b0629c180094904a5"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3b926aa83d1edb5aa5b427b4053dc420ec295a08e40911296b9eb1b6170f6cca"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:87c450779d0914f2861b8526e035c5e6da0a3199d8f1add1a665e1cbc6fc6d02"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4f2c9f67e9821cad2e5f480bc8d83b8742896f1242dba247911072d4fa94c192"},
{file = "cffi-1.15.1-cp38-cp38-win32.whl", hash = "sha256:8b7ee99e510d7b66cdb6c593f21c043c248537a32e0bedf02e01e9553a172314"},
{file = "cffi-1.15.1-cp38-cp38-win_amd64.whl", hash = "sha256:00a9ed42e88df81ffae7a8ab6d9356b371399b91dbdf0c3cb1e84c03a13aceb5"},
{file = "cffi-1.15.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:54a2db7b78338edd780e7ef7f9f6c442500fb0d41a5a4ea24fff1c929d5af585"},
{file = "cffi-1.15.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:fcd131dd944808b5bdb38e6f5b53013c5aa4f334c5cad0c72742f6eba4b73db0"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7473e861101c9e72452f9bf8acb984947aa1661a7704553a9f6e4baa5ba64415"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6c9a799e985904922a4d207a94eae35c78ebae90e128f0c4e521ce339396be9d"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3bcde07039e586f91b45c88f8583ea7cf7a0770df3a1649627bf598332cb6984"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:33ab79603146aace82c2427da5ca6e58f2b3f2fb5da893ceac0c42218a40be35"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d598b938678ebf3c67377cdd45e09d431369c3b1a5b331058c338e201f12b27"},
{file = "cffi-1.15.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:db0fbb9c62743ce59a9ff687eb5f4afbe77e5e8403d6697f7446e5f609976f76"},
{file = "cffi-1.15.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:98d85c6a2bef81588d9227dde12db8a7f47f639f4a17c9ae08e773aa9c697bf3"},
{file = "cffi-1.15.1-cp39-cp39-win32.whl", hash = "sha256:40f4774f5a9d4f5e344f31a32b5096977b5d48560c5592e2f3d2c4374bd543ee"},
{file = "cffi-1.15.1-cp39-cp39-win_amd64.whl", hash = "sha256:70df4e3b545a17496c9b3f41f5115e69a4f2e77e94e1d2a8e1070bc0c38c8a3c"},
{file = "cffi-1.15.1.tar.gz", hash = "sha256:d400bfb9a37b1351253cb402671cea7e89bdecc294e8016a707f6d1d8ac934f9"},
]
charset-normalizer = [
{file = "charset-normalizer-2.1.0.tar.gz", hash = "sha256:575e708016ff3a5e3681541cb9d79312c416835686d054a23accb873b254f413"},
{file = "charset_normalizer-2.1.0-py3-none-any.whl", hash = "sha256:5189b6f22b01957427f35b6a08d9a0bc45b46d3788ef5a92e978433c7a35f8a5"},
]
click = [
{file = "click-8.1.3-py3-none-any.whl", hash = "sha256:bb4d8133cb15a609f44e8213d9b391b0809795062913b383c62be0ee95b1db48"},
{file = "click-8.1.3.tar.gz", hash = "sha256:7682dc8afb30297001674575ea00d1814d808d6a36af415a82bd481d37ba7b8e"},
]
cloudpickle = [
{file = "cloudpickle-2.1.0-py3-none-any.whl", hash = "sha256:b5c434f75c34624eedad3a14f2be5ac3b5384774d5b0e3caf905c21479e6c4b1"},
{file = "cloudpickle-2.1.0.tar.gz", hash = "sha256:bb233e876a58491d9590a676f93c7a5473a08f747d5ab9df7f9ce564b3e7938e"},
]
colorama = [
{file = "colorama-0.4.5-py2.py3-none-any.whl", hash = "sha256:854bf444933e37f5824ae7bfc1e98d5bce2ebe4160d46b5edf346a89358e99da"},
{file = "colorama-0.4.5.tar.gz", hash = "sha256:e6c6b4334fc50988a639d9b98aa429a0b57da6e17b9a44f0451f930b6967b7a4"},
]
commonmark = [
{file = "commonmark-0.9.1-py2.py3-none-any.whl", hash = "sha256:da2f38c92590f83de410ba1a3cbceafbc74fee9def35f9251ba9a971d6d66fd9"},
{file = "commonmark-0.9.1.tar.gz", hash = "sha256:452f9dc859be7f06631ddcb328b6919c67984aca654e5fefb3914d54691aed60"},
]
coverage = [
{file = "coverage-6.4.4-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e7b4da9bafad21ea45a714d3ea6f3e1679099e420c8741c74905b92ee9bfa7cc"},
{file = "coverage-6.4.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:fde17bc42e0716c94bf19d92e4c9f5a00c5feb401f5bc01101fdf2a8b7cacf60"},
{file = "coverage-6.4.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cdbb0d89923c80dbd435b9cf8bba0ff55585a3cdb28cbec65f376c041472c60d"},
{file = "coverage-6.4.4-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:67f9346aeebea54e845d29b487eb38ec95f2ecf3558a3cffb26ee3f0dcc3e760"},
{file = "coverage-6.4.4-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:42c499c14efd858b98c4e03595bf914089b98400d30789511577aa44607a1b74"},
{file = "coverage-6.4.4-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:c35cca192ba700979d20ac43024a82b9b32a60da2f983bec6c0f5b84aead635c"},
{file = "coverage-6.4.4-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:9cc4f107009bca5a81caef2fca843dbec4215c05e917a59dec0c8db5cff1d2aa"},
{file = "coverage-6.4.4-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:5f444627b3664b80d078c05fe6a850dd711beeb90d26731f11d492dcbadb6973"},
{file = "coverage-6.4.4-cp310-cp310-win32.whl", hash = "sha256:66e6df3ac4659a435677d8cd40e8eb1ac7219345d27c41145991ee9bf4b806a0"},
{file = "coverage-6.4.4-cp310-cp310-win_amd64.whl", hash = "sha256:35ef1f8d8a7a275aa7410d2f2c60fa6443f4a64fae9be671ec0696a68525b875"},
{file = "coverage-6.4.4-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:c1328d0c2f194ffda30a45f11058c02410e679456276bfa0bbe0b0ee87225fac"},
{file = "coverage-6.4.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:61b993f3998ee384935ee423c3d40894e93277f12482f6e777642a0141f55782"},
{file = "coverage-6.4.4-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d5dd4b8e9cd0deb60e6fcc7b0647cbc1da6c33b9e786f9c79721fd303994832f"},
{file = "coverage-6.4.4-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7026f5afe0d1a933685d8f2169d7c2d2e624f6255fb584ca99ccca8c0e966fd7"},
{file = "coverage-6.4.4-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:9c7b9b498eb0c0d48b4c2abc0e10c2d78912203f972e0e63e3c9dc21f15abdaa"},
{file = "coverage-6.4.4-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:ee2b2fb6eb4ace35805f434e0f6409444e1466a47f620d1d5763a22600f0f892"},
{file = "coverage-6.4.4-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:ab066f5ab67059d1f1000b5e1aa8bbd75b6ed1fc0014559aea41a9eb66fc2ce0"},
{file = "coverage-6.4.4-cp311-cp311-win32.whl", hash = "sha256:9d6e1f3185cbfd3d91ac77ea065d85d5215d3dfa45b191d14ddfcd952fa53796"},
{file = "coverage-6.4.4-cp311-cp311-win_amd64.whl", hash = "sha256:e3d3c4cc38b2882f9a15bafd30aec079582b819bec1b8afdbde8f7797008108a"},
{file = "coverage-6.4.4-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:a095aa0a996ea08b10580908e88fbaf81ecf798e923bbe64fb98d1807db3d68a"},
{file = "coverage-6.4.4-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ef6f44409ab02e202b31a05dd6666797f9de2aa2b4b3534e9d450e42dea5e817"},
{file = "coverage-6.4.4-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4b7101938584d67e6f45f0015b60e24a95bf8dea19836b1709a80342e01b472f"},
{file = "coverage-6.4.4-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:14a32ec68d721c3d714d9b105c7acf8e0f8a4f4734c811eda75ff3718570b5e3"},
{file = "coverage-6.4.4-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:6a864733b22d3081749450466ac80698fe39c91cb6849b2ef8752fd7482011f3"},
{file = "coverage-6.4.4-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:08002f9251f51afdcc5e3adf5d5d66bb490ae893d9e21359b085f0e03390a820"},
{file = "coverage-6.4.4-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:a3b2752de32c455f2521a51bd3ffb53c5b3ae92736afde67ce83477f5c1dd928"},
{file = "coverage-6.4.4-cp37-cp37m-win32.whl", hash = "sha256:f855b39e4f75abd0dfbcf74a82e84ae3fc260d523fcb3532786bcbbcb158322c"},
{file = "coverage-6.4.4-cp37-cp37m-win_amd64.whl", hash = "sha256:ee6ae6bbcac0786807295e9687169fba80cb0617852b2fa118a99667e8e6815d"},
{file = "coverage-6.4.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:564cd0f5b5470094df06fab676c6d77547abfdcb09b6c29c8a97c41ad03b103c"},
{file = "coverage-6.4.4-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:cbbb0e4cd8ddcd5ef47641cfac97d8473ab6b132dd9a46bacb18872828031685"},
{file = "coverage-6.4.4-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6113e4df2fa73b80f77663445be6d567913fb3b82a86ceb64e44ae0e4b695de1"},
{file = "coverage-6.4.4-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8d032bfc562a52318ae05047a6eb801ff31ccee172dc0d2504614e911d8fa83e"},
{file = "coverage-6.4.4-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e431e305a1f3126477abe9a184624a85308da8edf8486a863601d58419d26ffa"},
{file = "coverage-6.4.4-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:cf2afe83a53f77aec067033199797832617890e15bed42f4a1a93ea24794ae3e"},
{file = "coverage-6.4.4-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:783bc7c4ee524039ca13b6d9b4186a67f8e63d91342c713e88c1865a38d0892a"},
{file = "coverage-6.4.4-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:ff934ced84054b9018665ca3967fc48e1ac99e811f6cc99ea65978e1d384454b"},
{file = "coverage-6.4.4-cp38-cp38-win32.whl", hash = "sha256:e1fabd473566fce2cf18ea41171d92814e4ef1495e04471786cbc943b89a3781"},
{file = "coverage-6.4.4-cp38-cp38-win_amd64.whl", hash = "sha256:4179502f210ebed3ccfe2f78bf8e2d59e50b297b598b100d6c6e3341053066a2"},
{file = "coverage-6.4.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:98c0b9e9b572893cdb0a00e66cf961a238f8d870d4e1dc8e679eb8bdc2eb1b86"},
{file = "coverage-6.4.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:fc600f6ec19b273da1d85817eda339fb46ce9eef3e89f220055d8696e0a06908"},
{file = "coverage-6.4.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7a98d6bf6d4ca5c07a600c7b4e0c5350cd483c85c736c522b786be90ea5bac4f"},
{file = "coverage-6.4.4-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:01778769097dbd705a24e221f42be885c544bb91251747a8a3efdec6eb4788f2"},
{file = "coverage-6.4.4-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dfa0b97eb904255e2ab24166071b27408f1f69c8fbda58e9c0972804851e0558"},
{file = "coverage-6.4.4-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:fcbe3d9a53e013f8ab88734d7e517eb2cd06b7e689bedf22c0eb68db5e4a0a19"},
{file = "coverage-6.4.4-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:15e38d853ee224e92ccc9a851457fb1e1f12d7a5df5ae44544ce7863691c7a0d"},
{file = "coverage-6.4.4-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:6913dddee2deff8ab2512639c5168c3e80b3ebb0f818fed22048ee46f735351a"},
{file = "coverage-6.4.4-cp39-cp39-win32.whl", hash = "sha256:354df19fefd03b9a13132fa6643527ef7905712109d9c1c1903f2133d3a4e145"},
{file = "coverage-6.4.4-cp39-cp39-win_amd64.whl", hash = "sha256:1238b08f3576201ebf41f7c20bf59baa0d05da941b123c6656e42cdb668e9827"},
{file = "coverage-6.4.4-pp36.pp37.pp38-none-any.whl", hash = "sha256:f67cf9f406cf0d2f08a3515ce2db5b82625a7257f88aad87904674def6ddaec1"},
{file = "coverage-6.4.4.tar.gz", hash = "sha256:e16c45b726acb780e1e6f88b286d3c10b3914ab03438f32117c4aa52d7f30d58"},
]
cryptography = [
{file = "cryptography-37.0.4-cp36-abi3-macosx_10_10_universal2.whl", hash = "sha256:549153378611c0cca1042f20fd9c5030d37a72f634c9326e225c9f666d472884"},
{file = "cryptography-37.0.4-cp36-abi3-macosx_10_10_x86_64.whl", hash = "sha256:a958c52505c8adf0d3822703078580d2c0456dd1d27fabfb6f76fe63d2971cd6"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:f721d1885ecae9078c3f6bbe8a88bc0786b6e749bf32ccec1ef2b18929a05046"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:3d41b965b3380f10e4611dbae366f6dc3cefc7c9ac4e8842a806b9672ae9add5"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:80f49023dd13ba35f7c34072fa17f604d2f19bf0989f292cedf7ab5770b87a0b"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f2dcb0b3b63afb6df7fd94ec6fbddac81b5492513f7b0436210d390c14d46ee8"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_24_x86_64.whl", hash = "sha256:b7f8dd0d4c1f21759695c05a5ec8536c12f31611541f8904083f3dc582604280"},
{file = "cryptography-37.0.4-cp36-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:30788e070800fec9bbcf9faa71ea6d8068f5136f60029759fd8c3efec3c9dcb3"},
{file = "cryptography-37.0.4-cp36-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:190f82f3e87033821828f60787cfa42bff98404483577b591429ed99bed39d59"},
{file = "cryptography-37.0.4-cp36-abi3-win32.whl", hash = "sha256:b62439d7cd1222f3da897e9a9fe53bbf5c104fff4d60893ad1355d4c14a24157"},
{file = "cryptography-37.0.4-cp36-abi3-win_amd64.whl", hash = "sha256:f7a6de3e98771e183645181b3627e2563dcde3ce94a9e42a3f427d2255190327"},
{file = "cryptography-37.0.4-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6bc95ed67b6741b2607298f9ea4932ff157e570ef456ef7ff0ef4884a134cc4b"},
{file = "cryptography-37.0.4-pp37-pypy37_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:f8c0a6e9e1dd3eb0414ba320f85da6b0dcbd543126e30fcc546e7372a7fbf3b9"},
{file = "cryptography-37.0.4-pp38-pypy38_pp73-macosx_10_10_x86_64.whl", hash = "sha256:e007f052ed10cc316df59bc90fbb7ff7950d7e2919c9757fd42a2b8ecf8a5f67"},
{file = "cryptography-37.0.4-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7bc997818309f56c0038a33b8da5c0bfbb3f1f067f315f9abd6fc07ad359398d"},
{file = "cryptography-37.0.4-pp38-pypy38_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:d204833f3c8a33bbe11eda63a54b1aad7aa7456ed769a982f21ec599ba5fa282"},
{file = "cryptography-37.0.4-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:75976c217f10d48a8b5a8de3d70c454c249e4b91851f6838a4e48b8f41eb71aa"},
{file = "cryptography-37.0.4-pp39-pypy39_pp73-macosx_10_10_x86_64.whl", hash = "sha256:7099a8d55cd49b737ffc99c17de504f2257e3787e02abe6d1a6d136574873441"},
{file = "cryptography-37.0.4-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2be53f9f5505673eeda5f2736bea736c40f051a739bfae2f92d18aed1eb54596"},
{file = "cryptography-37.0.4-pp39-pypy39_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:91ce48d35f4e3d3f1d83e29ef4a9267246e6a3be51864a5b7d2247d5086fa99a"},
{file = "cryptography-37.0.4-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:4c590ec31550a724ef893c50f9a97a0c14e9c851c85621c5650d699a7b88f7ab"},
{file = "cryptography-37.0.4.tar.gz", hash = "sha256:63f9c17c0e2474ccbebc9302ce2f07b55b3b3fcb211ded18a42d5764f5c10a82"},
]
cycler = [
{file = "cycler-0.11.0-py3-none-any.whl", hash = "sha256:3a27e95f763a428a739d2add979fa7494c912a32c17c4c38c4d5f082cad165a3"},
{file = "cycler-0.11.0.tar.gz", hash = "sha256:9c87405839a19696e837b3b818fed3f5f69f16f1eec1a1ad77e043dcea9c772f"},
]
cython = [
{file = "Cython-0.29.32-cp27-cp27m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:39afb4679b8c6bf7ccb15b24025568f4f9b4d7f9bf3cbd981021f542acecd75b"},
{file = "Cython-0.29.32-cp27-cp27m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:dbee03b8d42dca924e6aa057b836a064c769ddfd2a4c2919e65da2c8a362d528"},
{file = "Cython-0.29.32-cp27-cp27mu-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:5ba622326f2862f9c1f99ca8d47ade49871241920a352c917e16861e25b0e5c3"},
{file = "Cython-0.29.32-cp27-cp27mu-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:e6ffa08aa1c111a1ebcbd1cf4afaaec120bc0bbdec3f2545f8bb7d3e8e77a1cd"},
{file = "Cython-0.29.32-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:97335b2cd4acebf30d14e2855d882de83ad838491a09be2011745579ac975833"},
{file = "Cython-0.29.32-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:06be83490c906b6429b4389e13487a26254ccaad2eef6f3d4ee21d8d3a4aaa2b"},
{file = "Cython-0.29.32-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:eefd2b9a5f38ded8d859fe96cc28d7d06e098dc3f677e7adbafda4dcdd4a461c"},
{file = "Cython-0.29.32-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:5514f3b4122cb22317122a48e175a7194e18e1803ca555c4c959d7dfe68eaf98"},
{file = "Cython-0.29.32-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:656dc5ff1d269de4d11ee8542f2ffd15ab466c447c1f10e5b8aba6f561967276"},
{file = "Cython-0.29.32-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:cdf10af3e2e3279dc09fdc5f95deaa624850a53913f30350ceee824dc14fc1a6"},
{file = "Cython-0.29.32-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:3875c2b2ea752816a4d7ae59d45bb546e7c4c79093c83e3ba7f4d9051dd02928"},
{file = "Cython-0.29.32-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:79e3bab19cf1b021b613567c22eb18b76c0c547b9bc3903881a07bfd9e7e64cf"},
{file = "Cython-0.29.32-cp35-cp35m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b0595aee62809ba353cebc5c7978e0e443760c3e882e2c7672c73ffe46383673"},
{file = "Cython-0.29.32-cp35-cp35m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:0ea8267fc373a2c5064ad77d8ff7bf0ea8b88f7407098ff51829381f8ec1d5d9"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:c8e8025f496b5acb6ba95da2fb3e9dacffc97d9a92711aacfdd42f9c5927e094"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:afbce249133a830f121b917f8c9404a44f2950e0e4f5d1e68f043da4c2e9f457"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:513e9707407608ac0d306c8b09d55a28be23ea4152cbd356ceaec0f32ef08d65"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e83228e0994497900af954adcac27f64c9a57cd70a9ec768ab0cb2c01fd15cf1"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:ea1dcc07bfb37367b639415333cfbfe4a93c3be340edf1db10964bc27d42ed64"},
{file = "Cython-0.29.32-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:8669cadeb26d9a58a5e6b8ce34d2c8986cc3b5c0bfa77eda6ceb471596cb2ec3"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:ed087eeb88a8cf96c60fb76c5c3b5fb87188adee5e179f89ec9ad9a43c0c54b3"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:3f85eb2343d20d91a4ea9cf14e5748092b376a64b7e07fc224e85b2753e9070b"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:63b79d9e1f7c4d1f498ab1322156a0d7dc1b6004bf981a8abda3f66800e140cd"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e1958e0227a4a6a2c06fd6e35b7469de50adf174102454db397cec6e1403cce3"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:856d2fec682b3f31583719cb6925c6cdbb9aa30f03122bcc45c65c8b6f515754"},
{file = "Cython-0.29.32-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:479690d2892ca56d34812fe6ab8f58e4b2e0129140f3d94518f15993c40553da"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:67fdd2f652f8d4840042e2d2d91e15636ba2bcdcd92e7e5ffbc68e6ef633a754"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:4a4b03ab483271f69221c3210f7cde0dcc456749ecf8243b95bc7a701e5677e0"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:40eff7aa26e91cf108fd740ffd4daf49f39b2fdffadabc7292b4b7dc5df879f0"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:0bbc27abdf6aebfa1bce34cd92bd403070356f28b0ecb3198ff8a182791d58b9"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:cddc47ec746a08603037731f5d10aebf770ced08666100bd2cdcaf06a85d4d1b"},
{file = "Cython-0.29.32-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:eca3065a1279456e81c615211d025ea11bfe4e19f0c5650b859868ca04b3fcbd"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:d968ffc403d92addf20b68924d95428d523436adfd25cf505d427ed7ba3bee8b"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:f3fd44cc362eee8ae569025f070d56208908916794b6ab21e139cea56470a2b3"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:b6da3063c5c476f5311fd76854abae6c315f1513ef7d7904deed2e774623bbb9"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:061e25151c38f2361bc790d3bcf7f9d9828a0b6a4d5afa56fbed3bd33fb2373a"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:f9944013588a3543fca795fffb0a070a31a243aa4f2d212f118aa95e69485831"},
{file = "Cython-0.29.32-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:07d173d3289415bb496e72cb0ddd609961be08fe2968c39094d5712ffb78672b"},
{file = "Cython-0.29.32-py2.py3-none-any.whl", hash = "sha256:eeb475eb6f0ccf6c039035eb4f0f928eb53ead88777e0a760eccb140ad90930b"},
{file = "Cython-0.29.32.tar.gz", hash = "sha256:8733cf4758b79304f2a4e39ebfac5e92341bce47bcceb26c1254398b2f8c1af7"},
]
debugpy = [
{file = "debugpy-1.6.3-cp310-cp310-macosx_10_15_x86_64.whl", hash = "sha256:c4b2bd5c245eeb49824bf7e539f95fb17f9a756186e51c3e513e32999d8846f3"},
{file = "debugpy-1.6.3-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:b8deaeb779699350deeed835322730a3efec170b88927debc9ba07a1a38e2585"},
{file = "debugpy-1.6.3-cp310-cp310-win32.whl", hash = "sha256:fc233a0160f3b117b20216f1169e7211b83235e3cd6749bcdd8dbb72177030c7"},
{file = "debugpy-1.6.3-cp310-cp310-win_amd64.whl", hash = "sha256:dda8652520eae3945833e061cbe2993ad94a0b545aebd62e4e6b80ee616c76b2"},
{file = "debugpy-1.6.3-cp37-cp37m-macosx_10_15_x86_64.whl", hash = "sha256:d5c814596a170a0a58fa6fad74947e30bfd7e192a5d2d7bd6a12156c2899e13a"},
{file = "debugpy-1.6.3-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:c4cd6f37e3c168080d61d698390dfe2cd9e74ebf80b448069822a15dadcda57d"},
{file = "debugpy-1.6.3-cp37-cp37m-win32.whl", hash = "sha256:3c9f985944a30cfc9ae4306ac6a27b9c31dba72ca943214dad4a0ab3840f6161"},
{file = "debugpy-1.6.3-cp37-cp37m-win_amd64.whl", hash = "sha256:5ad571a36cec137ae6ed951d0ff75b5e092e9af6683da084753231150cbc5b25"},
{file = "debugpy-1.6.3-cp38-cp38-macosx_10_15_x86_64.whl", hash = "sha256:adcfea5ea06d55d505375995e150c06445e2b20cd12885bcae566148c076636b"},
{file = "debugpy-1.6.3-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:daadab4403427abd090eccb38d8901afd8b393e01fd243048fab3f1d7132abb4"},
{file = "debugpy-1.6.3-cp38-cp38-win32.whl", hash = "sha256:6efc30325b68e451118b795eff6fe8488253ca3958251d5158106d9c87581bc6"},
{file = "debugpy-1.6.3-cp38-cp38-win_amd64.whl", hash = "sha256:86d784b72c5411c833af1cd45b83d80c252b77c3bfdb43db17c441d772f4c734"},
{file = "debugpy-1.6.3-cp39-cp39-macosx_10_15_x86_64.whl", hash = "sha256:4e255982552b0edfe3a6264438dbd62d404baa6556a81a88f9420d3ed79b06ae"},
{file = "debugpy-1.6.3-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:cca23cb6161ac89698d629d892520327dd1be9321c0960e610bbcb807232b45d"},
{file = "debugpy-1.6.3-cp39-cp39-win32.whl", hash = "sha256:7c302095a81be0d5c19f6529b600bac971440db3e226dce85347cc27e6a61908"},
{file = "debugpy-1.6.3-cp39-cp39-win_amd64.whl", hash = "sha256:34d2cdd3a7c87302ba5322b86e79c32c2115be396f3f09ca13306d8a04fe0f16"},
{file = "debugpy-1.6.3-py2.py3-none-any.whl", hash = "sha256:84c39940a0cac410bf6aa4db00ba174f973eef521fbe9dd058e26bcabad89c4f"},
{file = "debugpy-1.6.3.zip", hash = "sha256:e8922090514a890eec99cfb991bab872dd2e353ebb793164d5f01c362b9a40bf"},
]
decorator = [
{file = "decorator-5.1.1-py3-none-any.whl", hash = "sha256:b8c3f85900b9dc423225913c5aace94729fe1fa9763b38939a95226f02d37186"},
{file = "decorator-5.1.1.tar.gz", hash = "sha256:637996211036b6385ef91435e4fae22989472f9d571faba8927ba8253acbc330"},
]
defusedxml = [
{file = "defusedxml-0.7.1-py2.py3-none-any.whl", hash = "sha256:a352e7e428770286cc899e2542b6cdaedb2b4953ff269a210103ec58f6198a61"},
{file = "defusedxml-0.7.1.tar.gz", hash = "sha256:1bb3032db185915b62d7c6209c5a8792be6a32ab2fedacc84e01b52c51aa3e69"},
]
dill = [
{file = "dill-0.3.5.1-py2.py3-none-any.whl", hash = "sha256:33501d03270bbe410c72639b350e941882a8b0fd55357580fbc873fba0c59302"},
{file = "dill-0.3.5.1.tar.gz", hash = "sha256:d75e41f3eff1eee599d738e76ba8f4ad98ea229db8b085318aa2b3333a208c86"},
]
docutils = [
{file = "docutils-0.17.1-py2.py3-none-any.whl", hash = "sha256:cf316c8370a737a022b72b56874f6602acf974a37a9fba42ec2876387549fc61"},
{file = "docutils-0.17.1.tar.gz", hash = "sha256:686577d2e4c32380bb50cbb22f575ed742d58168cee37e99117a854bcd88f125"},
]
econml = [
{file = "econml-0.12.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:c8c6d2c34f98eb506d4ac48f705f6d959ccdfdeada9a0cc555afcba3233a4d0a"},
{file = "econml-0.12.0-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:c9493076c801110ebd9e15a7d162e768d33445367272a8b4f88c70898efa7a8d"},
{file = "econml-0.12.0-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:5c5c278771bf405ddf0790fc72149af9d05aec095e00c5ad47e08ab9e8da47d9"},
{file = "econml-0.12.0-cp36-cp36m-win32.whl", hash = "sha256:0477610ae12923a98bc23bdff4f9f5be12c692704d5ec9ebea8f5f4c3010b9ff"},
{file = "econml-0.12.0-cp36-cp36m-win_amd64.whl", hash = "sha256:2cea261af4315d3ec4cb6a91ab6b4aa664b3d3c6f8d235c8778351fb259d69c2"},
{file = "econml-0.12.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:3962e0f497c58724269ebf74a295dd8e2f2494e6cff00f3fc582b13b73d9dda8"},
{file = "econml-0.12.0-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:70ce227be0b91fd463d7a48ab4b58aa562793efa779f74254a38a03c42bf6d9e"},
{file = "econml-0.12.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:80f78f25b259991872e66665ca64cf108fc405b6d6cf20ebbff30fae392b3258"},
{file = "econml-0.12.0-cp37-cp37m-win32.whl", hash = "sha256:1e780e2e4dbabcff65c2ed488b8751203a6f7bfe0c9c8fd12a0a930b273dbafe"},
{file = "econml-0.12.0-cp37-cp37m-win_amd64.whl", hash = "sha256:7248c62daa5b8662e1368fe3926a9a7e5ab88c1ee29ae492b7362a9124151071"},
{file = "econml-0.12.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:aacdb2c96ea2377f7c4f14fbf3fb6ded5edfccf17decc706b5147564129938b4"},
{file = "econml-0.12.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:f87a21636872e32e627093711c65d1333be590f0aef0fa2477717442ada57dfb"},
{file = "econml-0.12.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:1763951ca85b3f34072d0f44e31cebb18e91a281fc94212db9b08124f0efe206"},
{file = "econml-0.12.0-cp38-cp38-win32.whl", hash = "sha256:2e4339e0dcbbc280ac95c4ac446026853f77cd22de9a4c7421b933e717ce8626"},
{file = "econml-0.12.0-cp38-cp38-win_amd64.whl", hash = "sha256:473377685874bf9725a4c6fdd571a858d942abd8a2f4664050da426322d322c7"},
{file = "econml-0.12.0.tar.gz", hash = "sha256:4a8196d6c785e9e0e42b2fa35dd058eb37166c75dcdb90ada953feea6fbb854b"},
]
entrypoints = [
{file = "entrypoints-0.4-py3-none-any.whl", hash = "sha256:f174b5ff827504fd3cd97cc3f8649f3693f51538c7e4bdf3ef002c8429d42f9f"},
{file = "entrypoints-0.4.tar.gz", hash = "sha256:b706eddaa9218a19ebcd67b56818f05bb27589b1ca9e8d797b74affad4ccacd4"},
]
executing = [
{file = "executing-0.10.0-py2.py3-none-any.whl", hash = "sha256:9c745f80cda11eb22b62cbecf21156491a794eb56ab06f9d286a44e62822b24e"},
{file = "executing-0.10.0.tar.gz", hash = "sha256:d1cd87c2e371e9966261410c5b3769d6df2f9e4a79a83eebd2662dd3388f9833"},
]
fastjsonschema = [
{file = "fastjsonschema-2.16.1-py3-none-any.whl", hash = "sha256:2f7158c4de792555753d6c2277d6a2af2d406dfd97aeca21d17173561ede4fe6"},
{file = "fastjsonschema-2.16.1.tar.gz", hash = "sha256:d6fa3ffbe719768d70e298b9fb847484e2bdfdb7241ed052b8d57a9294a8c334"},
]
flake8 = [
{file = "flake8-4.0.1-py2.py3-none-any.whl", hash = "sha256:479b1304f72536a55948cb40a32dce8bb0ffe3501e26eaf292c7e60eb5e0428d"},
{file = "flake8-4.0.1.tar.gz", hash = "sha256:806e034dda44114815e23c16ef92f95c91e4c71100ff52813adf7132a6ad870d"},
]
flaky = [
{file = "flaky-3.7.0-py2.py3-none-any.whl", hash = "sha256:d6eda73cab5ae7364504b7c44670f70abed9e75f77dd116352f662817592ec9c"},
{file = "flaky-3.7.0.tar.gz", hash = "sha256:3ad100780721a1911f57a165809b7ea265a7863305acb66708220820caf8aa0d"},
]
flatbuffers = [
{file = "flatbuffers-1.12-py2.py3-none-any.whl", hash = "sha256:9e9ef47fa92625c4721036e7c4124182668dc6021d9e7c73704edd395648deb9"},
{file = "flatbuffers-1.12.tar.gz", hash = "sha256:63bb9a722d5e373701913e226135b28a6f6ac200d5cc7b4d919fa38d73b44610"},
]
fonttools = [
{file = "fonttools-4.36.0-py3-none-any.whl", hash = "sha256:cb91ef8d5a435d90aeb3ab814b2548c6b515df5bc13b4c5adaa23778f2f79823"},
{file = "fonttools-4.36.0.zip", hash = "sha256:e637d2fe06bddabbfc488e02ef32d04d561e3c71e9ba11abc7782ea753ceb218"},
]
future = [
{file = "future-0.18.2.tar.gz", hash = "sha256:b1bead90b70cf6ec3f0710ae53a525360fa360d306a86583adc6bf83a4db537d"},
]
gast = [
{file = "gast-0.4.0-py3-none-any.whl", hash = "sha256:b7adcdd5adbebf1adf17378da5ba3f543684dbec47b1cda1f3997e573cd542c4"},
{file = "gast-0.4.0.tar.gz", hash = "sha256:40feb7b8b8434785585ab224d1568b857edb18297e5a3047f1ba012bc83b42c1"},
]
google-auth = [
{file = "google-auth-2.10.0.tar.gz", hash = "sha256:7904dbd44b745c7323fef29565adee2fe7ff48473e2d94443aced40b0404a395"},
{file = "google_auth-2.10.0-py2.py3-none-any.whl", hash = "sha256:1deba4a54f95ef67b4139eaf5c20eaa7047215eec9f6a2344599b8596db8863b"},
]
google-auth-oauthlib = [
{file = "google-auth-oauthlib-0.4.6.tar.gz", hash = "sha256:a90a072f6993f2c327067bf65270046384cda5a8ecb20b94ea9a687f1f233a7a"},
{file = "google_auth_oauthlib-0.4.6-py2.py3-none-any.whl", hash = "sha256:3f2a6e802eebbb6fb736a370fbf3b055edcb6b52878bf2f26330b5e041316c73"},
]
google-pasta = [
{file = "google-pasta-0.2.0.tar.gz", hash = "sha256:c9f2c8dfc8f96d0d5808299920721be30c9eec37f2389f28904f454565c8a16e"},
{file = "google_pasta-0.2.0-py2-none-any.whl", hash = "sha256:4612951da876b1a10fe3960d7226f0c7682cf901e16ac06e473b267a5afa8954"},
{file = "google_pasta-0.2.0-py3-none-any.whl", hash = "sha256:b32482794a366b5366a32c92a9a9201b107821889935a02b3e51f6b432ea84ed"},
]
graphviz = [
{file = "graphviz-0.20.1-py3-none-any.whl", hash = "sha256:587c58a223b51611c0cf461132da386edd896a029524ca61a1462b880bf97977"},
{file = "graphviz-0.20.1.zip", hash = "sha256:8c58f14adaa3b947daf26c19bc1e98c4e0702cdc31cf99153e6f06904d492bf8"},
]
grpcio = [
{file = "grpcio-1.48.0-cp310-cp310-linux_armv7l.whl", hash = "sha256:4a049a032144641ed5d073535c0dc69eb6029187cc729a66946c86dcc8eec3a1"},
{file = "grpcio-1.48.0-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:f8bc76f5cd95f5476e5285fe5d3704a9332586a569fbbccef551b0b6f7a270f9"},
{file = "grpcio-1.48.0-cp310-cp310-manylinux_2_17_aarch64.whl", hash = "sha256:448d397fe88e9fef8170f019b86abdc4d554ae311aaf4dbff1532fde227d3308"},
{file = "grpcio-1.48.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8f9b6b6f7c83869d2316c5d13f953381881a16741275a34ec5ed5762f11b206e"},
{file = "grpcio-1.48.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5bd8541c4b6b43c9024496d30b4a12346325d3a17a1f3c80ad8924caed1e35c3"},
{file = "grpcio-1.48.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:877d33aeba05ae0b9e81761a694914ed33613f655c35f6bbcf4ebbcb984e0167"},
{file = "grpcio-1.48.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:cd01a8201fd8ab2ce496f7e65975da1f1e629eac8eea84ead0fd77e32e4350cd"},
{file = "grpcio-1.48.0-cp310-cp310-win32.whl", hash = "sha256:0388da923dff58ba7f711233e41c2b749b5817b8e0f137a107672d9c15a1009c"},
{file = "grpcio-1.48.0-cp310-cp310-win_amd64.whl", hash = "sha256:8dcffdb8921fd88857ae350fd579277a5f9315351e89ed9094ef28927a46d40d"},
{file = "grpcio-1.48.0-cp36-cp36m-linux_armv7l.whl", hash = "sha256:2138c50331232f56178c2b36dcfa6ad67aad705fe410955f3b2a53d722191b89"},
{file = "grpcio-1.48.0-cp36-cp36m-macosx_10_10_x86_64.whl", hash = "sha256:af2d80f142da2a6af45204a5ca2374e2747af07a99de54a1164111e169a761ff"},
{file = "grpcio-1.48.0-cp36-cp36m-manylinux_2_17_aarch64.whl", hash = "sha256:59284bd4cdf47c147c26d91aca693765318d524328f6ece2a1a0b85a12a362af"},
{file = "grpcio-1.48.0-cp36-cp36m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cc3ebfe356c0c6750379cd194bf2b7e5d1d2f29db1832358f05a73e9290db98c"},
{file = "grpcio-1.48.0-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dc2619a31339e1c53731f54761f1a2cb865d3421f690e00ef3e92f90d2a0c5ae"},
{file = "grpcio-1.48.0-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:7df637405de328a54c1c8c08a3206f974c7a577730f90644af4c3400b7bfde2d"},
{file = "grpcio-1.48.0-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:9e73b95969a579798bfbeb85d376695cce5172357fb52e450467ceb8e7365152"},
{file = "grpcio-1.48.0-cp36-cp36m-win32.whl", hash = "sha256:059e9d58b5aba7fb9eabe3a4d2ac49e1dcbc2b54b0f166f6475e40b7f4435343"},
{file = "grpcio-1.48.0-cp36-cp36m-win_amd64.whl", hash = "sha256:7cebcf645170f0c82ef71769544f9ac4515993a4d367f5900aba2eb4ecd2a32f"},
{file = "grpcio-1.48.0-cp37-cp37m-linux_armv7l.whl", hash = "sha256:8af3a8845df35b838104d6fb1ae7f4969d248cf037fa2794916d31e917346f72"},
{file = "grpcio-1.48.0-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:a1ef40975ec9ced6c17ce7fbec9825823da782fa606f0b92392646ff3886f198"},
{file = "grpcio-1.48.0-cp37-cp37m-manylinux_2_17_aarch64.whl", hash = "sha256:7cccbf6db31f2a78e1909047ff69620f94a4e6e53251858e9502fbbff5714b48"},
{file = "grpcio-1.48.0-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1f3f142579f58def64c0850f0bb0eb1b425ae885f5669dda5b73ade64ad2b753"},
{file = "grpcio-1.48.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:656c6f6f7b815bca3054780b8cdfa1e4e37cd36c887a48558d00c2cf85f31697"},
{file = "grpcio-1.48.0-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:cba4538e8a2ef123ea570e7b1d62162e158963c2471e35d79eb9690c971a10c0"},
{file = "grpcio-1.48.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:9daa67820fafceec6194ed1686c1783816e62d6756ff301ba93e682948836846"},
{file = "grpcio-1.48.0-cp37-cp37m-win32.whl", hash = "sha256:7ec264a7fb413e0c804a7a48a6f7d7212742955a60724c44d793da35a8f30873"},
{file = "grpcio-1.48.0-cp37-cp37m-win_amd64.whl", hash = "sha256:a2b1b33b92359388b8164807313dcbb3317101b038a5d54342982560329d958f"},
{file = "grpcio-1.48.0-cp38-cp38-linux_armv7l.whl", hash = "sha256:7b820696a5ce7b98f459f234698cb323f89b355373789188efa126d7f47a2a92"},
{file = "grpcio-1.48.0-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:e4dfae66ebc165c46c5b7048eb554472ee72fbaab2c2c2da7f9b1621c81e077c"},
{file = "grpcio-1.48.0-cp38-cp38-manylinux_2_17_aarch64.whl", hash = "sha256:f7115038edce33b494e0138b0bd31a2eb6595d45e2eed23be46bc32886feb741"},
{file = "grpcio-1.48.0-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b4e996282238943ca114628255be61980e38b25f73a08ae2ffd02b63eaf70d3a"},
{file = "grpcio-1.48.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:13dad31f5155fa555d393511cc8108c41b1b5b54dc4c24c27d4694ddd7a78fad"},
{file = "grpcio-1.48.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:c84b9d90b2641963de98b35bb7a2a51f78119fe5bd00ef27246ba9f4f0835e36"},
{file = "grpcio-1.48.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:41b65166779d7dafac4c98380ac19f690f1c5fe18083a71d370df87b24dd30ff"},
{file = "grpcio-1.48.0-cp38-cp38-win32.whl", hash = "sha256:b890e5f5fbc21cb994894f73ecb2faaa66697d8debcb228a5adb0622b9bec3b2"},
{file = "grpcio-1.48.0-cp38-cp38-win_amd64.whl", hash = "sha256:5fe3af539d2f50891ed93aed3064ffbcc38bf848aa3f7ed1fbedcce139c57302"},
{file = "grpcio-1.48.0-cp39-cp39-linux_armv7l.whl", hash = "sha256:a4ed57f4e3d91259551e6765782b22d9e8b8178fec43ebf8e1b2c392c4ced37b"},
{file = "grpcio-1.48.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:60843d8184e171886dd7a93d6672e2ef0b08dfd4f88da7421c10b46b6e031ac4"},
{file = "grpcio-1.48.0-cp39-cp39-manylinux_2_17_aarch64.whl", hash = "sha256:0ecba22f25ccde2442be7e7dd7fa746905d628f03312b4a0c9961f0d99771f53"},
{file = "grpcio-1.48.0-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:34f5917f0c49a04633dc12d483c8aee6f6d9f69133b700214d3703f72a72f501"},
{file = "grpcio-1.48.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f4c4ad8ad7e2cf3a272cbc96734d56635e6543939022f17e0c4487f7d2a45bf9"},
{file = "grpcio-1.48.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:111fb2f5f4a069f331ae23106145fd16dd4e1112ca223858a922068614dac6d2"},
{file = "grpcio-1.48.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:beb0573daa49889efcfea0a6e995b4f39d481aa1b94e1257617406ef417b56a6"},
{file = "grpcio-1.48.0-cp39-cp39-win32.whl", hash = "sha256:ce70254a082cb767217b2fdee374cc79199d338d46140753438cd6d67c609b2f"},
{file = "grpcio-1.48.0-cp39-cp39-win_amd64.whl", hash = "sha256:ae3fd135666448058fe277d93c10e0f18345fbcbb015c4642de2fa3db6f0c205"},
{file = "grpcio-1.48.0.tar.gz", hash = "sha256:eaf4bb73819863440727195411ab3b5c304f6663625e66f348e91ebe0a039306"},
]
h5py = [
{file = "h5py-3.7.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d77af42cb751ad6cc44f11bae73075a07429a5cf2094dfde2b1e716e059b3911"},
{file = "h5py-3.7.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:63beb8b7b47d0896c50de6efb9a1eaa81dbe211f3767e7dd7db159cea51ba37a"},
{file = "h5py-3.7.0-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:04e2e1e2fc51b8873e972a08d2f89625ef999b1f2d276199011af57bb9fc7851"},
{file = "h5py-3.7.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f73307c876af49aa869ec5df1818e9bb0bdcfcf8a5ba773cc45a4fba5a286a5c"},
{file = "h5py-3.7.0-cp310-cp310-win_amd64.whl", hash = "sha256:f514b24cacdd983e61f8d371edac8c1b780c279d0acb8485639e97339c866073"},
{file = "h5py-3.7.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:43fed4d13743cf02798a9a03a360a88e589d81285e72b83f47d37bb64ed44881"},
{file = "h5py-3.7.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:c038399ce09a58ff8d89ec3e62f00aa7cb82d14f34e24735b920e2a811a3a426"},
{file = "h5py-3.7.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:03d64fb86bb86b978928bad923b64419a23e836499ec6363e305ad28afd9d287"},
{file = "h5py-3.7.0-cp37-cp37m-win_amd64.whl", hash = "sha256:e5b7820b75f9519499d76cc708e27242ccfdd9dfb511d6deb98701961d0445aa"},
{file = "h5py-3.7.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:a9351d729ea754db36d175098361b920573fdad334125f86ac1dd3a083355e20"},
{file = "h5py-3.7.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:6776d896fb90c5938de8acb925e057e2f9f28755f67ec3edcbc8344832616c38"},
{file = "h5py-3.7.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:0a047fddbe6951bce40e9cde63373c838a978c5e05a011a682db9ba6334b8e85"},
{file = "h5py-3.7.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0798a9c0ff45f17d0192e4d7114d734cac9f8b2b2c76dd1d923c4d0923f27bb6"},
{file = "h5py-3.7.0-cp38-cp38-win_amd64.whl", hash = "sha256:0d8de8cb619fc597da7cf8cdcbf3b7ff8c5f6db836568afc7dc16d21f59b2b49"},
{file = "h5py-3.7.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f084bbe816907dfe59006756f8f2d16d352faff2d107f4ffeb1d8de126fc5dc7"},
{file = "h5py-3.7.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:1fcb11a2dc8eb7ddcae08afd8fae02ba10467753a857fa07a404d700a93f3d53"},
{file = "h5py-3.7.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:ed43e2cc4f511756fd664fb45d6b66c3cbed4e3bd0f70e29c37809b2ae013c44"},
{file = "h5py-3.7.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9e7535df5ee3dc3e5d1f408fdfc0b33b46bc9b34db82743c82cd674d8239b9ad"},
{file = "h5py-3.7.0-cp39-cp39-win_amd64.whl", hash = "sha256:9e2ad2aa000f5b1e73b5dfe22f358ca46bf1a2b6ca394d9659874d7fc251731a"},
{file = "h5py-3.7.0.tar.gz", hash = "sha256:3fcf37884383c5da64846ab510190720027dca0768def34dd8dcb659dbe5cbf3"},
]
idna = [
{file = "idna-3.3-py3-none-any.whl", hash = "sha256:84d9dd047ffa80596e0f246e2eab0b391788b0503584e8945f2368256d2735ff"},
{file = "idna-3.3.tar.gz", hash = "sha256:9d643ff0a55b762d5cdb124b8eaa99c66322e2157b69160bc32796e824360e6d"},
]
imagesize = [
{file = "imagesize-1.4.1-py2.py3-none-any.whl", hash = "sha256:0d8d18d08f840c19d0ee7ca1fd82490fdc3729b7ac93f49870406ddde8ef8d8b"},
{file = "imagesize-1.4.1.tar.gz", hash = "sha256:69150444affb9cb0d5cc5a92b3676f0b2fb7cd9ae39e947a5e11a36b4497cd4a"},
]
importlib-metadata = [
{file = "importlib_metadata-4.12.0-py3-none-any.whl", hash = "sha256:7401a975809ea1fdc658c3aa4f78cc2195a0e019c5cbc4c06122884e9ae80c23"},
{file = "importlib_metadata-4.12.0.tar.gz", hash = "sha256:637245b8bab2b6502fcbc752cc4b7a6f6243bb02b31c5c26156ad103d3d45670"},
]
importlib-resources = [
{file = "importlib_resources-5.9.0-py3-none-any.whl", hash = "sha256:f78a8df21a79bcc30cfd400bdc38f314333de7c0fb619763f6b9dabab8268bb7"},
{file = "importlib_resources-5.9.0.tar.gz", hash = "sha256:5481e97fb45af8dcf2f798952625591c58fe599d0735d86b10f54de086a61681"},
]
iniconfig = [
{file = "iniconfig-1.1.1-py2.py3-none-any.whl", hash = "sha256:011e24c64b7f47f6ebd835bb12a743f2fbe9a26d4cecaa7f53bc4f35ee9da8b3"},
{file = "iniconfig-1.1.1.tar.gz", hash = "sha256:bc3af051d7d14b2ee5ef9969666def0cd1a000e121eaea580d4a313df4b37f32"},
]
ipykernel = [
{file = "ipykernel-6.15.1-py3-none-any.whl", hash = "sha256:d8969c5b23b0e453a23166da5a669c954db399789293fcb03fec5cb25367e43c"},
{file = "ipykernel-6.15.1.tar.gz", hash = "sha256:37acc3254caa8a0dafcddddc8dc863a60ad1b46487b68aee361d9a15bda98112"},
]
ipython = [
{file = "ipython-8.4.0-py3-none-any.whl", hash = "sha256:7ca74052a38fa25fe9bedf52da0be7d3fdd2fb027c3b778ea78dfe8c212937d1"},
{file = "ipython-8.4.0.tar.gz", hash = "sha256:f2db3a10254241d9b447232cec8b424847f338d9d36f9a577a6192c332a46abd"},
]
ipython-genutils = [
{file = "ipython_genutils-0.2.0-py2.py3-none-any.whl", hash = "sha256:72dd37233799e619666c9f639a9da83c34013a73e8bbc79a7a6348d93c61fab8"},
{file = "ipython_genutils-0.2.0.tar.gz", hash = "sha256:eb2e116e75ecef9d4d228fdc66af54269afa26ab4463042e33785b887c628ba8"},
]
ipywidgets = [
{file = "ipywidgets-8.0.1-py3-none-any.whl", hash = "sha256:fc0744df3a964ecfd68a6d2debe547fe89db252b8d7bb3db5740aba72edb0e6c"},
{file = "ipywidgets-8.0.1.tar.gz", hash = "sha256:1a296094203309e834f2781a275214d255ac5d266bbfa602f9f6915e1806614c"},
]
isort = [
{file = "isort-5.10.1-py3-none-any.whl", hash = "sha256:6f62d78e2f89b4500b080fe3a81690850cd254227f27f75c3a0c491a1f351ba7"},
{file = "isort-5.10.1.tar.gz", hash = "sha256:e8443a5e7a020e9d7f97f1d7d9cd17c88bcb3bc7e218bf9cf5095fe550be2951"},
]
jedi = [
{file = "jedi-0.18.1-py2.py3-none-any.whl", hash = "sha256:637c9635fcf47945ceb91cd7f320234a7be540ded6f3e99a50cb6febdfd1ba8d"},
{file = "jedi-0.18.1.tar.gz", hash = "sha256:74137626a64a99c8eb6ae5832d99b3bdd7d29a3850fe2aa80a4126b2a7d949ab"},
]
jeepney = [
{file = "jeepney-0.8.0-py3-none-any.whl", hash = "sha256:c0a454ad016ca575060802ee4d590dd912e35c122fa04e70306de3d076cce755"},
{file = "jeepney-0.8.0.tar.gz", hash = "sha256:5efe48d255973902f6badc3ce55e2aa6c5c3b3bc642059ef3a91247bcfcc5806"},
]
jinja2 = [
{file = "Jinja2-3.1.2-py3-none-any.whl", hash = "sha256:6088930bfe239f0e6710546ab9c19c9ef35e29792895fed6e6e31a023a182a61"},
{file = "Jinja2-3.1.2.tar.gz", hash = "sha256:31351a702a408a9e7595a8fc6150fc3f43bb6bf7e319770cbc0db9df9437e852"},
]
joblib = [
{file = "joblib-1.1.0-py2.py3-none-any.whl", hash = "sha256:f21f109b3c7ff9d95f8387f752d0d9c34a02aa2f7060c2135f465da0e5160ff6"},
{file = "joblib-1.1.0.tar.gz", hash = "sha256:4158fcecd13733f8be669be0683b96ebdbbd38d23559f54dca7205aea1bf1e35"},
]
jsonschema = [
{file = "jsonschema-4.13.0-py3-none-any.whl", hash = "sha256:870a61bb45050b81103faf6a4be00a0a906e06636ffcf0b84f5a2e51faf901ff"},
{file = "jsonschema-4.13.0.tar.gz", hash = "sha256:3776512df4f53f74e6e28fe35717b5b223c1756875486984a31bc9165e7fc920"},
]
jupyter = [
{file = "jupyter-1.0.0-py2.py3-none-any.whl", hash = "sha256:5b290f93b98ffbc21c0c7e749f054b3267782166d72fa5e3ed1ed4eaf34a2b78"},
{file = "jupyter-1.0.0.tar.gz", hash = "sha256:d9dc4b3318f310e34c82951ea5d6683f67bed7def4b259fafbfe4f1beb1d8e5f"},
{file = "jupyter-1.0.0.zip", hash = "sha256:3e1f86076bbb7c8c207829390305a2b1fe836d471ed54be66a3b8c41e7f46cc7"},
]
jupyter-client = [
{file = "jupyter_client-7.3.4-py3-none-any.whl", hash = "sha256:17d74b0d0a7b24f1c8c527b24fcf4607c56bee542ffe8e3418e50b21e514b621"},
{file = "jupyter_client-7.3.4.tar.gz", hash = "sha256:aa9a6c32054b290374f95f73bb0cae91455c58dfb84f65c8591912b8f65e6d56"},
]
jupyter-console = [
{file = "jupyter_console-6.4.4-py3-none-any.whl", hash = "sha256:756df7f4f60c986e7bc0172e4493d3830a7e6e75c08750bbe59c0a5403ad6dee"},
{file = "jupyter_console-6.4.4.tar.gz", hash = "sha256:172f5335e31d600df61613a97b7f0352f2c8250bbd1092ef2d658f77249f89fb"},
]
jupyter-core = [
{file = "jupyter_core-4.11.1-py3-none-any.whl", hash = "sha256:715e22bb6cc7db3718fddfac1f69f1c7e899ca00e42bdfd4bf3705452b9fd84a"},
{file = "jupyter_core-4.11.1.tar.gz", hash = "sha256:2e5f244d44894c4154d06aeae3419dd7f1b0ef4494dc5584929b398c61cfd314"},
]
jupyterlab-pygments = [
{file = "jupyterlab_pygments-0.2.2-py2.py3-none-any.whl", hash = "sha256:2405800db07c9f770863bcf8049a529c3dd4d3e28536638bd7c1c01d2748309f"},
{file = "jupyterlab_pygments-0.2.2.tar.gz", hash = "sha256:7405d7fde60819d905a9fa8ce89e4cd830e318cdad22a0030f7a901da705585d"},
]
jupyterlab-widgets = [
{file = "jupyterlab_widgets-3.0.2-py3-none-any.whl", hash = "sha256:98303a281f4004670cdcea2ef4aecba19c580adc297664c593f967025625c8c5"},
{file = "jupyterlab_widgets-3.0.2.tar.gz", hash = "sha256:47ab54cd165aa0cb3bcef1232d77471580cd2c36bbe2153fc5ba31e26ad87320"},
]
keras = [
{file = "keras-2.9.0-py2.py3-none-any.whl", hash = "sha256:55911256f89cfc9343c9fbe4b61ec45a2d33d89729cbe1ab9dcacf8b07b8b6ab"},
]
keras-preprocessing = [
{file = "Keras_Preprocessing-1.1.2-py2.py3-none-any.whl", hash = "sha256:7b82029b130ff61cc99b55f3bd27427df4838576838c5b2f65940e4fcec99a7b"},
{file = "Keras_Preprocessing-1.1.2.tar.gz", hash = "sha256:add82567c50c8bc648c14195bf544a5ce7c1f76761536956c3d2978970179ef3"},
]
keyring = [
{file = "keyring-23.8.2-py3-none-any.whl", hash = "sha256:10d2a8639663fe2090705a00b8c47c687cacdf97598ea9c11456679fa974473a"},
{file = "keyring-23.8.2.tar.gz", hash = "sha256:0d9973f8891850f1ade5f26aafd06bb16865fbbae3fc56b0defb6a14a2624003"},
]
kiwisolver = [
{file = "kiwisolver-1.4.4-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:2f5e60fabb7343a836360c4f0919b8cd0d6dbf08ad2ca6b9cf90bf0c76a3c4f6"},
{file = "kiwisolver-1.4.4-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:10ee06759482c78bdb864f4109886dff7b8a56529bc1609d4f1112b93fe6423c"},
{file = "kiwisolver-1.4.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c79ebe8f3676a4c6630fd3f777f3cfecf9289666c84e775a67d1d358578dc2e3"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:abbe9fa13da955feb8202e215c4018f4bb57469b1b78c7a4c5c7b93001699938"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:7577c1987baa3adc4b3c62c33bd1118c3ef5c8ddef36f0f2c950ae0b199e100d"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f8ad8285b01b0d4695102546b342b493b3ccc6781fc28c8c6a1bb63e95d22f09"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:8ed58b8acf29798b036d347791141767ccf65eee7f26bde03a71c944449e53de"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a68b62a02953b9841730db7797422f983935aeefceb1679f0fc85cbfbd311c32"},
{file = "kiwisolver-1.4.4-cp310-cp310-win32.whl", hash = "sha256:e92a513161077b53447160b9bd8f522edfbed4bd9759e4c18ab05d7ef7e49408"},
{file = "kiwisolver-1.4.4-cp310-cp310-win_amd64.whl", hash = "sha256:3fe20f63c9ecee44560d0e7f116b3a747a5d7203376abeea292ab3152334d004"},
{file = "kiwisolver-1.4.4-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:62ac9cc684da4cf1778d07a89bf5f81b35834cb96ca523d3a7fb32509380cbf6"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:41dae968a94b1ef1897cb322b39360a0812661dba7c682aa45098eb8e193dbdf"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:02f79693ec433cb4b5f51694e8477ae83b3205768a6fb48ffba60549080e295b"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d0611a0a2a518464c05ddd5a3a1a0e856ccc10e67079bb17f265ad19ab3c7597"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:db5283d90da4174865d520e7366801a93777201e91e79bacbac6e6927cbceede"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:1041feb4cda8708ce73bb4dcb9ce1ccf49d553bf87c3954bdfa46f0c3f77252c"},
{file = "kiwisolver-1.4.4-cp37-cp37m-win32.whl", hash = "sha256:a553dadda40fef6bfa1456dc4be49b113aa92c2a9a9e8711e955618cd69622e3"},
{file = "kiwisolver-1.4.4-cp37-cp37m-win_amd64.whl", hash = "sha256:03baab2d6b4a54ddbb43bba1a3a2d1627e82d205c5cf8f4c924dc49284b87166"},
{file = "kiwisolver-1.4.4-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:841293b17ad704d70c578f1f0013c890e219952169ce8a24ebc063eecf775454"},
{file = "kiwisolver-1.4.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:f4f270de01dd3e129a72efad823da90cc4d6aafb64c410c9033aba70db9f1ff0"},
{file = "kiwisolver-1.4.4-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:f9f39e2f049db33a908319cf46624a569b36983c7c78318e9726a4cb8923b26c"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c97528e64cb9ebeff9701e7938653a9951922f2a38bd847787d4a8e498cc83ae"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1d1573129aa0fd901076e2bfb4275a35f5b7aa60fbfb984499d661ec950320b0"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ad881edc7ccb9d65b0224f4e4d05a1e85cf62d73aab798943df6d48ab0cd79a1"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b428ef021242344340460fa4c9185d0b1f66fbdbfecc6c63eff4b7c29fad429d"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:2e407cb4bd5a13984a6c2c0fe1845e4e41e96f183e5e5cd4d77a857d9693494c"},
{file = "kiwisolver-1.4.4-cp38-cp38-win32.whl", hash = "sha256:75facbe9606748f43428fc91a43edb46c7ff68889b91fa31f53b58894503a191"},
{file = "kiwisolver-1.4.4-cp38-cp38-win_amd64.whl", hash = "sha256:5bce61af018b0cb2055e0e72e7d65290d822d3feee430b7b8203d8a855e78766"},
{file = "kiwisolver-1.4.4-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:8c808594c88a025d4e322d5bb549282c93c8e1ba71b790f539567932722d7bd8"},
{file = "kiwisolver-1.4.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f0a71d85ecdd570ded8ac3d1c0f480842f49a40beb423bb8014539a9f32a5897"},
{file = "kiwisolver-1.4.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:b533558eae785e33e8c148a8d9921692a9fe5aa516efbdff8606e7d87b9d5824"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:efda5fc8cc1c61e4f639b8067d118e742b812c930f708e6667a5ce0d13499e29"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:7c43e1e1206cd421cd92e6b3280d4385d41d7166b3ed577ac20444b6995a445f"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bc8d3bd6c72b2dd9decf16ce70e20abcb3274ba01b4e1c96031e0c4067d1e7cd"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4ea39b0ccc4f5d803e3337dd46bcce60b702be4d86fd0b3d7531ef10fd99a1ac"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:968f44fdbf6dd757d12920d63b566eeb4d5b395fd2d00d29d7ef00a00582aac9"},
{file = "kiwisolver-1.4.4-cp39-cp39-win32.whl", hash = "sha256:da7e547706e69e45d95e116e6939488d62174e033b763ab1496b4c29b76fabea"},
{file = "kiwisolver-1.4.4-cp39-cp39-win_amd64.whl", hash = "sha256:ba59c92039ec0a66103b1d5fe588fa546373587a7d68f5c96f743c3396afc04b"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:91672bacaa030f92fc2f43b620d7b337fd9a5af28b0d6ed3f77afc43c4a64b5a"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:787518a6789009c159453da4d6b683f468ef7a65bbde796bcea803ccf191058d"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:da152d8cdcab0e56e4f45eb08b9aea6455845ec83172092f09b0e077ece2cf7a"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:ecb1fa0db7bf4cff9dac752abb19505a233c7f16684c5826d1f11ebd9472b871"},
{file = "kiwisolver-1.4.4.tar.gz", hash = "sha256:d41997519fcba4a1e46eb4a2fe31bc12f0ff957b2b81bac28db24744f333e955"},
]
libclang = [
{file = "libclang-14.0.6-py2.py3-none-macosx_10_9_x86_64.whl", hash = "sha256:8791cf3c3b087c373a6d61e9199da7a541da922c9ddcfed1122090586b996d6e"},
{file = "libclang-14.0.6-py2.py3-none-macosx_11_0_arm64.whl", hash = "sha256:7b06fc76bd1e67c8b04b5719bf2ac5d6a323b289b245dfa9e468561d99538188"},
{file = "libclang-14.0.6-py2.py3-none-manylinux1_x86_64.whl", hash = "sha256:e429853939423f276a25140b0b702442d7da9a09e001c05e48df888336947614"},
{file = "libclang-14.0.6-py2.py3-none-manylinux2010_x86_64.whl", hash = "sha256:206d2789e4450a37d054e63b70451a6fc1873466397443fa13de2b3d4adb2796"},
{file = "libclang-14.0.6-py2.py3-none-manylinux2014_aarch64.whl", hash = "sha256:e2add1703129b2abe066fb1890afa880870a89fd6ab4ec5d2a7a8dc8d271677e"},
{file = "libclang-14.0.6-py2.py3-none-manylinux2014_armv7l.whl", hash = "sha256:5dd3c6fca1b007d308a4114afa8e4e9d32f32b2572520701d45fcc626ac5cd6c"},
{file = "libclang-14.0.6-py2.py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:cfb0e892ebb5dff6bd498ab5778adb8581f26a00fd8347b3c76c989fe2fd04f7"},
{file = "libclang-14.0.6-py2.py3-none-win_amd64.whl", hash = "sha256:ea03c12675151837660cdd5dce65bd89320896ac3421efef43a36678f113ce95"},
{file = "libclang-14.0.6-py2.py3-none-win_arm64.whl", hash = "sha256:2e4303e04517fcd11173cb2e51a7070eed71e16ef45d4e26a82c5e881cac3d27"},
{file = "libclang-14.0.6.tar.gz", hash = "sha256:9052a8284d8846984f6fa826b1d7460a66d3b23a486d782633b42b6e3b418789"},
]
lightgbm = [
{file = "lightgbm-3.3.2-py3-none-macosx_10_14_x86_64.macosx_10_15_x86_64.macosx_11_0_x86_64.whl", hash = "sha256:2e94bd1b3ab29d173102c9c1d80db2e27ad7e43b8ff5a74c5cb7984b37d19f45"},
{file = "lightgbm-3.3.2-py3-none-manylinux1_x86_64.whl", hash = "sha256:f4cba3b4f29336ad7e801cb32d9b948ea4cc5300dda650b78bcdfe36b3e2c4b2"},
{file = "lightgbm-3.3.2-py3-none-manylinux2014_aarch64.whl", hash = "sha256:8e788c56853316fc5d35db726d81bd002c721038c856853952287f68082e0158"},
{file = "lightgbm-3.3.2-py3-none-win_amd64.whl", hash = "sha256:e4f1529cad416066964f9af0efad208787861e9f2181b7f9ee7fc9bacc082d4f"},
{file = "lightgbm-3.3.2.tar.gz", hash = "sha256:5d25d16e77c844c297ece2044df57651139bc3c8ad8c4108916374267ac68b64"},
]
llvmlite = [
{file = "llvmlite-0.36.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:cc0f9b9644b4ab0e4a5edb17f1531d791630c88858220d3cc688d6edf10da100"},
{file = "llvmlite-0.36.0-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:f7918dbac02b1ebbfd7302ad8e8307d7877ab57d782d5f04b70ff9696b53c21b"},
{file = "llvmlite-0.36.0-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:7768658646c418b9b3beccb7044277a608bc8c62b82a85e73c7e5c065e4157c2"},
{file = "llvmlite-0.36.0-cp36-cp36m-win32.whl", hash = "sha256:05f807209a360d39526d98141b6f281b9c7c771c77a4d1fc22002440642c8de2"},
{file = "llvmlite-0.36.0-cp36-cp36m-win_amd64.whl", hash = "sha256:d1fdd63c371626c25ad834e1c6297eb76cf2f093a40dbb401a87b6476ab4e34e"},
{file = "llvmlite-0.36.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:7c4e7066447305d5095d0b0a9cae7b835d2f0fde143456b3124110eab0856426"},
{file = "llvmlite-0.36.0-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:9dad7e4bb042492914292aea3f4172eca84db731f9478250240955aedba95e08"},
{file = "llvmlite-0.36.0-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:1ce5bc0a638d874a08d4222be0a7e48e5df305d094c2ff8dec525ef32b581551"},
{file = "llvmlite-0.36.0-cp37-cp37m-win32.whl", hash = "sha256:dbedff0f6d417b374253a6bab39aa4b5364f1caab30c06ba8726904776fcf1cb"},
{file = "llvmlite-0.36.0-cp37-cp37m-win_amd64.whl", hash = "sha256:3b17fc4b0dd17bd29d7297d054e2915fad535889907c3f65232ee21f483447c5"},
{file = "llvmlite-0.36.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:b3a77e46e6053e2a86e607e87b97651dda81e619febb914824a927bff4e88737"},
{file = "llvmlite-0.36.0-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:048a7c117641c9be87b90005684e64a6f33ea0897ebab1df8a01214a10d6e79a"},
{file = "llvmlite-0.36.0-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:7db4b0eef93125af1c4092c64a3c73c7dc904101117ef53f8d78a1a499b8d5f4"},
{file = "llvmlite-0.36.0-cp38-cp38-win32.whl", hash = "sha256:50b1828bde514b31431b2bba1aa20b387f5625b81ad6e12fede430a04645e47a"},
{file = "llvmlite-0.36.0-cp38-cp38-win_amd64.whl", hash = "sha256:f608bae781b2d343e15e080c546468c5a6f35f57f0446923ea198dd21f23757e"},
{file = "llvmlite-0.36.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6a3abc8a8889aeb06bf9c4a7e5df5bc7bb1aa0aedd91a599813809abeec80b5a"},
{file = "llvmlite-0.36.0-cp39-cp39-manylinux2010_i686.whl", hash = "sha256:705f0323d931684428bb3451549603299bb5e17dd60fb979d67c3807de0debc1"},
{file = "llvmlite-0.36.0-cp39-cp39-manylinux2010_x86_64.whl", hash = "sha256:5a6548b4899facb182145147185e9166c69826fb424895f227e6b7cf924a8da1"},
{file = "llvmlite-0.36.0-cp39-cp39-win32.whl", hash = "sha256:ff52fb9c2be66b95b0e67d56fce11038397e5be1ea410ee53f5f1175fdbb107a"},
{file = "llvmlite-0.36.0-cp39-cp39-win_amd64.whl", hash = "sha256:1dee416ea49fd338c74ec15c0c013e5273b0961528169af06ff90772614f7f6c"},
{file = "llvmlite-0.36.0.tar.gz", hash = "sha256:765128fdf5f149ed0b889ffbe2b05eb1717f8e20a5c87fa2b4018fbcce0fcfc9"},
]
markdown = [
{file = "Markdown-3.4.1-py3-none-any.whl", hash = "sha256:08fb8465cffd03d10b9dd34a5c3fea908e20391a2a90b88d66362cb05beed186"},
{file = "Markdown-3.4.1.tar.gz", hash = "sha256:3b809086bb6efad416156e00a0da66fe47618a5d6918dd688f53f40c8e4cfeff"},
]
markupsafe = [
{file = "MarkupSafe-2.1.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:86b1f75c4e7c2ac2ccdaec2b9022845dbb81880ca318bb7a0a01fbf7813e3812"},
{file = "MarkupSafe-2.1.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:f121a1420d4e173a5d96e47e9a0c0dcff965afdf1626d28de1460815f7c4ee7a"},
{file = "MarkupSafe-2.1.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a49907dd8420c5685cfa064a1335b6754b74541bbb3706c259c02ed65b644b3e"},
{file = "MarkupSafe-2.1.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:10c1bfff05d95783da83491be968e8fe789263689c02724e0c691933c52994f5"},
{file = "MarkupSafe-2.1.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b7bd98b796e2b6553da7225aeb61f447f80a1ca64f41d83612e6139ca5213aa4"},
{file = "MarkupSafe-2.1.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:b09bf97215625a311f669476f44b8b318b075847b49316d3e28c08e41a7a573f"},
{file = "MarkupSafe-2.1.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:694deca8d702d5db21ec83983ce0bb4b26a578e71fbdbd4fdcd387daa90e4d5e"},
{file = "MarkupSafe-2.1.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:efc1913fd2ca4f334418481c7e595c00aad186563bbc1ec76067848c7ca0a933"},
{file = "MarkupSafe-2.1.1-cp310-cp310-win32.whl", hash = "sha256:4a33dea2b688b3190ee12bd7cfa29d39c9ed176bda40bfa11099a3ce5d3a7ac6"},
{file = "MarkupSafe-2.1.1-cp310-cp310-win_amd64.whl", hash = "sha256:dda30ba7e87fbbb7eab1ec9f58678558fd9a6b8b853530e176eabd064da81417"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:671cd1187ed5e62818414afe79ed29da836dde67166a9fac6d435873c44fdd02"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3799351e2336dc91ea70b034983ee71cf2f9533cdff7c14c90ea126bfd95d65a"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e72591e9ecd94d7feb70c1cbd7be7b3ebea3f548870aa91e2732960fa4d57a37"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6fbf47b5d3728c6aea2abb0589b5d30459e369baa772e0f37a0320185e87c980"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:d5ee4f386140395a2c818d149221149c54849dfcfcb9f1debfe07a8b8bd63f9a"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:bcb3ed405ed3222f9904899563d6fc492ff75cce56cba05e32eff40e6acbeaa3"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:e1c0b87e09fa55a220f058d1d49d3fb8df88fbfab58558f1198e08c1e1de842a"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-win32.whl", hash = "sha256:8dc1c72a69aa7e082593c4a203dcf94ddb74bb5c8a731e4e1eb68d031e8498ff"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-win_amd64.whl", hash = "sha256:97a68e6ada378df82bc9f16b800ab77cbf4b2fada0081794318520138c088e4a"},
{file = "MarkupSafe-2.1.1-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:e8c843bbcda3a2f1e3c2ab25913c80a3c5376cd00c6e8c4a86a89a28c8dc5452"},
{file = "MarkupSafe-2.1.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:0212a68688482dc52b2d45013df70d169f542b7394fc744c02a57374a4207003"},
{file = "MarkupSafe-2.1.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8e576a51ad59e4bfaac456023a78f6b5e6e7651dcd383bcc3e18d06f9b55d6d1"},
{file = "MarkupSafe-2.1.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4b9fe39a2ccc108a4accc2676e77da025ce383c108593d65cc909add5c3bd601"},
{file = "MarkupSafe-2.1.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:96e37a3dc86e80bf81758c152fe66dbf60ed5eca3d26305edf01892257049925"},
{file = "MarkupSafe-2.1.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:6d0072fea50feec76a4c418096652f2c3238eaa014b2f94aeb1d56a66b41403f"},
{file = "MarkupSafe-2.1.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:089cf3dbf0cd6c100f02945abeb18484bd1ee57a079aefd52cffd17fba910b88"},
{file = "MarkupSafe-2.1.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:6a074d34ee7a5ce3effbc526b7083ec9731bb3cbf921bbe1d3005d4d2bdb3a63"},
{file = "MarkupSafe-2.1.1-cp38-cp38-win32.whl", hash = "sha256:421be9fbf0ffe9ffd7a378aafebbf6f4602d564d34be190fc19a193232fd12b1"},
{file = "MarkupSafe-2.1.1-cp38-cp38-win_amd64.whl", hash = "sha256:fc7b548b17d238737688817ab67deebb30e8073c95749d55538ed473130ec0c7"},
{file = "MarkupSafe-2.1.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:e04e26803c9c3851c931eac40c695602c6295b8d432cbe78609649ad9bd2da8a"},
{file = "MarkupSafe-2.1.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b87db4360013327109564f0e591bd2a3b318547bcef31b468a92ee504d07ae4f"},
{file = "MarkupSafe-2.1.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:99a2a507ed3ac881b975a2976d59f38c19386d128e7a9a18b7df6fff1fd4c1d6"},
{file = "MarkupSafe-2.1.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:56442863ed2b06d19c37f94d999035e15ee982988920e12a5b4ba29b62ad1f77"},
{file = "MarkupSafe-2.1.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3ce11ee3f23f79dbd06fb3d63e2f6af7b12db1d46932fe7bd8afa259a5996603"},
{file = "MarkupSafe-2.1.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:33b74d289bd2f5e527beadcaa3f401e0df0a89927c1559c8566c066fa4248ab7"},
{file = "MarkupSafe-2.1.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:43093fb83d8343aac0b1baa75516da6092f58f41200907ef92448ecab8825135"},
{file = "MarkupSafe-2.1.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:8e3dcf21f367459434c18e71b2a9532d96547aef8a871872a5bd69a715c15f96"},
{file = "MarkupSafe-2.1.1-cp39-cp39-win32.whl", hash = "sha256:d4306c36ca495956b6d568d276ac11fdd9c30a36f1b6eb928070dc5360b22e1c"},
{file = "MarkupSafe-2.1.1-cp39-cp39-win_amd64.whl", hash = "sha256:46d00d6cfecdde84d40e572d63735ef81423ad31184100411e6e3388d405e247"},
{file = "MarkupSafe-2.1.1.tar.gz", hash = "sha256:7f91197cc9e48f989d12e4e6fbc46495c446636dfc81b9ccf50bb0ec74b91d4b"},
]
matplotlib = [
{file = "matplotlib-3.5.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:a206a1b762b39398efea838f528b3a6d60cdb26fe9d58b48265787e29cd1d693"},
{file = "matplotlib-3.5.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:cd45a6f3e93a780185f70f05cf2a383daed13c3489233faad83e81720f7ede24"},
{file = "matplotlib-3.5.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:d62880e1f60e5a30a2a8484432bcb3a5056969dc97258d7326ad465feb7ae069"},
{file = "matplotlib-3.5.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9ab29589cef03bc88acfa3a1490359000c18186fc30374d8aa77d33cc4a51a4a"},
{file = "matplotlib-3.5.3-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2886cc009f40e2984c083687251821f305d811d38e3df8ded414265e4583f0c5"},
{file = "matplotlib-3.5.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c995f7d9568f18b5db131ab124c64e51b6820a92d10246d4f2b3f3a66698a15b"},
{file = "matplotlib-3.5.3-cp310-cp310-win32.whl", hash = "sha256:6bb93a0492d68461bd458eba878f52fdc8ac7bdb6c4acdfe43dba684787838c2"},
{file = "matplotlib-3.5.3-cp310-cp310-win_amd64.whl", hash = "sha256:2e6d184ebe291b9e8f7e78bbab7987d269c38ea3e062eace1fe7d898042ef804"},
{file = "matplotlib-3.5.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:6ea6aef5c4338e58d8d376068e28f80a24f54e69f09479d1c90b7172bad9f25b"},
{file = "matplotlib-3.5.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:839d47b8ead7ad9669aaacdbc03f29656dc21f0d41a6fea2d473d856c39c8b1c"},
{file = "matplotlib-3.5.3-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:3b4fa56159dc3c7f9250df88f653f085068bcd32dcd38e479bba58909254af7f"},
{file = "matplotlib-3.5.3-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:94ff86af56a3869a4ae26a9637a849effd7643858a1a04dd5ee50e9ab75069a7"},
{file = "matplotlib-3.5.3-cp37-cp37m-win32.whl", hash = "sha256:35a8ad4dddebd51f94c5d24bec689ec0ec66173bf614374a1244c6241c1595e0"},
{file = "matplotlib-3.5.3-cp37-cp37m-win_amd64.whl", hash = "sha256:43e9d3fa077bf0cc95ded13d331d2156f9973dce17c6f0c8b49ccd57af94dbd9"},
{file = "matplotlib-3.5.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:22227c976ad4dc8c5a5057540421f0d8708c6560744ad2ad638d48e2984e1dbc"},
{file = "matplotlib-3.5.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:bf618a825deb6205f015df6dfe6167a5d9b351203b03fab82043ae1d30f16511"},
{file = "matplotlib-3.5.3-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:9befa5954cdbc085e37d974ff6053da269474177921dd61facdad8023c4aeb51"},
{file = "matplotlib-3.5.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f3840c280ebc87a48488a46f760ea1c0c0c83fcf7abbe2e6baf99d033fd35fd8"},
{file = "matplotlib-3.5.3-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:dacddf5bfcec60e3f26ec5c0ae3d0274853a258b6c3fc5ef2f06a8eb23e042be"},
{file = "matplotlib-3.5.3-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:b428076a55fb1c084c76cb93e68006f27d247169f056412607c5c88828d08f88"},
{file = "matplotlib-3.5.3-cp38-cp38-win32.whl", hash = "sha256:874df7505ba820e0400e7091199decf3ff1fde0583652120c50cd60d5820ca9a"},
{file = "matplotlib-3.5.3-cp38-cp38-win_amd64.whl", hash = "sha256:b28de401d928890187c589036857a270a032961411934bdac4cf12dde3d43094"},
{file = "matplotlib-3.5.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:3211ba82b9f1518d346f6309df137b50c3dc4421b4ed4815d1d7eadc617f45a1"},
{file = "matplotlib-3.5.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6fe807e8a22620b4cd95cfbc795ba310dc80151d43b037257250faf0bfcd82bc"},
{file = "matplotlib-3.5.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:5c096363b206a3caf43773abebdbb5a23ea13faef71d701b21a9c27fdcef72f4"},
{file = "matplotlib-3.5.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0bcdfcb0f976e1bac6721d7d457c17be23cf7501f977b6a38f9d38a3762841f7"},
{file = "matplotlib-3.5.3-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:1e64ac9be9da6bfff0a732e62116484b93b02a0b4d4b19934fb4f8e7ad26ad6a"},
{file = "matplotlib-3.5.3-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:73dd93dc35c85dece610cca8358003bf0760d7986f70b223e2306b4ea6d1406b"},
{file = "matplotlib-3.5.3-cp39-cp39-win32.whl", hash = "sha256:879c7e5fce4939c6aa04581dfe08d57eb6102a71f2e202e3314d5fbc072fd5a0"},
{file = "matplotlib-3.5.3-cp39-cp39-win_amd64.whl", hash = "sha256:ab8d26f07fe64f6f6736d635cce7bfd7f625320490ed5bfc347f2cdb4fae0e56"},
{file = "matplotlib-3.5.3-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:99482b83ebf4eb6d5fc6813d7aacdefdd480f0d9c0b52dcf9f1cc3b2c4b3361a"},
{file = "matplotlib-3.5.3-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:f814504e459c68118bf2246a530ed953ebd18213dc20e3da524174d84ed010b2"},
{file = "matplotlib-3.5.3-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:57f1b4e69f438a99bb64d7f2c340db1b096b41ebaa515cf61ea72624279220ce"},
{file = "matplotlib-3.5.3-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:d2484b350bf3d32cae43f85dcfc89b3ed7bd2bcd781ef351f93eb6fb2cc483f9"},
{file = "matplotlib-3.5.3.tar.gz", hash = "sha256:339cac48b80ddbc8bfd05daae0a3a73414651a8596904c2a881cfd1edb65f26c"},
]
matplotlib-inline = [
{file = "matplotlib-inline-0.1.6.tar.gz", hash = "sha256:f887e5f10ba98e8d2b150ddcf4702c1e5f8b3a20005eb0f74bfdbd360ee6f304"},
{file = "matplotlib_inline-0.1.6-py3-none-any.whl", hash = "sha256:f1f41aab5328aa5aaea9b16d083b128102f8712542f819fe7e6a420ff581b311"},
]
mccabe = [
{file = "mccabe-0.6.1-py2.py3-none-any.whl", hash = "sha256:ab8a6258860da4b6677da4bd2fe5dc2c659cff31b3ee4f7f5d64e79735b80d42"},
{file = "mccabe-0.6.1.tar.gz", hash = "sha256:dd8d182285a0fe56bace7f45b5e7d1a6ebcbf524e8f3bd87eb0f125271b8831f"},
]
mistune = [
{file = "mistune-2.0.4-py2.py3-none-any.whl", hash = "sha256:182cc5ee6f8ed1b807de6b7bb50155df7b66495412836b9a74c8fbdfc75fe36d"},
{file = "mistune-2.0.4.tar.gz", hash = "sha256:9ee0a66053e2267aba772c71e06891fa8f1af6d4b01d5e84e267b4570d4d9808"},
]
mpmath = [
{file = "mpmath-1.2.1-py3-none-any.whl", hash = "sha256:604bc21bd22d2322a177c73bdb573994ef76e62edd595d17e00aff24b0667e5c"},
{file = "mpmath-1.2.1.tar.gz", hash = "sha256:79ffb45cf9f4b101a807595bcb3e72e0396202e0b1d25d689134b48c4216a81a"},
]
mypy = [
{file = "mypy-0.971-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:f2899a3cbd394da157194f913a931edfd4be5f274a88041c9dc2d9cdcb1c315c"},
{file = "mypy-0.971-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:98e02d56ebe93981c41211c05adb630d1d26c14195d04d95e49cd97dbc046dc5"},
{file = "mypy-0.971-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:19830b7dba7d5356d3e26e2427a2ec91c994cd92d983142cbd025ebe81d69cf3"},
{file = "mypy-0.971-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:02ef476f6dcb86e6f502ae39a16b93285fef97e7f1ff22932b657d1ef1f28655"},
{file = "mypy-0.971-cp310-cp310-win_amd64.whl", hash = "sha256:25c5750ba5609a0c7550b73a33deb314ecfb559c350bb050b655505e8aed4103"},
{file = "mypy-0.971-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:d3348e7eb2eea2472db611486846742d5d52d1290576de99d59edeb7cd4a42ca"},
{file = "mypy-0.971-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:3fa7a477b9900be9b7dd4bab30a12759e5abe9586574ceb944bc29cddf8f0417"},
{file = "mypy-0.971-cp36-cp36m-win_amd64.whl", hash = "sha256:2ad53cf9c3adc43cf3bea0a7d01a2f2e86db9fe7596dfecb4496a5dda63cbb09"},
{file = "mypy-0.971-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:855048b6feb6dfe09d3353466004490b1872887150c5bb5caad7838b57328cc8"},
{file = "mypy-0.971-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:23488a14a83bca6e54402c2e6435467a4138785df93ec85aeff64c6170077fb0"},
{file = "mypy-0.971-cp37-cp37m-win_amd64.whl", hash = "sha256:4b21e5b1a70dfb972490035128f305c39bc4bc253f34e96a4adf9127cf943eb2"},
{file = "mypy-0.971-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:9796a2ba7b4b538649caa5cecd398d873f4022ed2333ffde58eaf604c4d2cb27"},
{file = "mypy-0.971-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:5a361d92635ad4ada1b1b2d3630fc2f53f2127d51cf2def9db83cba32e47c856"},
{file = "mypy-0.971-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:b793b899f7cf563b1e7044a5c97361196b938e92f0a4343a5d27966a53d2ec71"},
{file = "mypy-0.971-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:d1ea5d12c8e2d266b5fb8c7a5d2e9c0219fedfeb493b7ed60cd350322384ac27"},
{file = "mypy-0.971-cp38-cp38-win_amd64.whl", hash = "sha256:23c7ff43fff4b0df93a186581885c8512bc50fc4d4910e0f838e35d6bb6b5e58"},
{file = "mypy-0.971-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:1f7656b69974a6933e987ee8ffb951d836272d6c0f81d727f1d0e2696074d9e6"},
{file = "mypy-0.971-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:d2022bfadb7a5c2ef410d6a7c9763188afdb7f3533f22a0a32be10d571ee4bbe"},
{file = "mypy-0.971-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:ef943c72a786b0f8d90fd76e9b39ce81fb7171172daf84bf43eaf937e9f220a9"},
{file = "mypy-0.971-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:d744f72eb39f69312bc6c2abf8ff6656973120e2eb3f3ec4f758ed47e414a4bf"},
{file = "mypy-0.971-cp39-cp39-win_amd64.whl", hash = "sha256:77a514ea15d3007d33a9e2157b0ba9c267496acf12a7f2b9b9f8446337aac5b0"},
{file = "mypy-0.971-py3-none-any.whl", hash = "sha256:0d054ef16b071149917085f51f89555a576e2618d5d9dd70bd6eea6410af3ac9"},
{file = "mypy-0.971.tar.gz", hash = "sha256:40b0f21484238269ae6a57200c807d80debc6459d444c0489a102d7c6a75fa56"},
]
mypy-extensions = [
{file = "mypy_extensions-0.4.3-py2.py3-none-any.whl", hash = "sha256:090fedd75945a69ae91ce1303b5824f428daf5a028d2f6ab8a299250a846f15d"},
{file = "mypy_extensions-0.4.3.tar.gz", hash = "sha256:2d82818f5bb3e369420cb3c4060a7970edba416647068eb4c5343488a6c604a8"},
]
nbclient = [
{file = "nbclient-0.6.6-py3-none-any.whl", hash = "sha256:09bae4ea2df79fa6bc50aeb8278d8b79d2036792824337fa6eee834afae17312"},
{file = "nbclient-0.6.6.tar.gz", hash = "sha256:0df76a7961d99a681b4796c74a1f2553b9f998851acc01896dce064ad19a9027"},
]
nbconvert = [
{file = "nbconvert-7.0.0rc3-py3-none-any.whl", hash = "sha256:6774a0bf293d76fa2e886255812d953b750059330c3d7305ad271c02590f1957"},
{file = "nbconvert-7.0.0rc3.tar.gz", hash = "sha256:efb9aae47dad2eae02dd9e7d2cc8add6b7e8f15c6548c0de3363f6d2f8a39146"},
]
nbformat = [
{file = "nbformat-5.4.0-py3-none-any.whl", hash = "sha256:0d6072aaec95dddc39735c144ee8bbc6589c383fb462e4058abc855348152dad"},
{file = "nbformat-5.4.0.tar.gz", hash = "sha256:44ba5ca6acb80c5d5a500f1e5b83ede8cbe364d5a495c4c8cf60aaf1ba656501"},
]
nbsphinx = [
{file = "nbsphinx-0.8.9-py3-none-any.whl", hash = "sha256:a7d743762249ee6bac3350a91eb3717a6e1c75f239f2c2a85491f9aca5a63be1"},
{file = "nbsphinx-0.8.9.tar.gz", hash = "sha256:4ade86b2a41f8f41efd3ea99dae84c3368fe8ba3f837d50c8815ce9424c5994f"},
]
nest-asyncio = [
{file = "nest_asyncio-1.5.5-py3-none-any.whl", hash = "sha256:b98e3ec1b246135e4642eceffa5a6c23a3ab12c82ff816a92c612d68205813b2"},
{file = "nest_asyncio-1.5.5.tar.gz", hash = "sha256:e442291cd942698be619823a17a86a5759eabe1f8613084790de189fe9e16d65"},
]
networkx = [
{file = "networkx-2.8.5-py3-none-any.whl", hash = "sha256:a762f4b385692d9c3a6f2912d058d76d29a827deaedf9e63ed14d397b8030687"},
{file = "networkx-2.8.5.tar.gz", hash = "sha256:15a7b81a360791c458c55a417418ea136c13378cfdc06a2dcdc12bd2f9cf09c1"},
]
notebook = [
{file = "notebook-6.4.12-py3-none-any.whl", hash = "sha256:8c07a3bb7640e371f8a609bdbb2366a1976c6a2589da8ef917f761a61e3ad8b1"},
{file = "notebook-6.4.12.tar.gz", hash = "sha256:6268c9ec9048cff7a45405c990c29ac9ca40b0bc3ec29263d218c5e01f2b4e86"},
]
numba = [
{file = "numba-0.53.1-cp36-cp36m-macosx_10_14_x86_64.whl", hash = "sha256:b23de6b6837c132087d06b8b92d343edb54b885873b824a037967fbd5272ebb7"},
{file = "numba-0.53.1-cp36-cp36m-manylinux2014_i686.whl", hash = "sha256:6545b9e9b0c112b81de7f88a3c787469a357eeff8211e90b8f45ee243d521cc2"},
{file = "numba-0.53.1-cp36-cp36m-manylinux2014_x86_64.whl", hash = "sha256:8fa5c963a43855050a868106a87cd614f3c3f459951c8fc468aec263ef80d063"},
{file = "numba-0.53.1-cp36-cp36m-win32.whl", hash = "sha256:aaa6ebf56afb0b6752607b9f3bf39e99b0efe3c1fa6849698373925ee6838fd7"},
{file = "numba-0.53.1-cp36-cp36m-win_amd64.whl", hash = "sha256:b08b3df38aab769df79ed948d70f0a54a3cdda49d58af65369235c204ec5d0f3"},
{file = "numba-0.53.1-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:bf5c463b62d013e3f709cc8277adf2f4f4d8cc6757293e29c6db121b77e6b760"},
{file = "numba-0.53.1-cp37-cp37m-manylinux2014_i686.whl", hash = "sha256:74df02e73155f669e60dcff07c4eef4a03dbf5b388594db74142ab40914fe4f5"},
{file = "numba-0.53.1-cp37-cp37m-manylinux2014_x86_64.whl", hash = "sha256:5165709bf62f28667e10b9afe6df0ce1037722adab92d620f59cb8bbb8104641"},
{file = "numba-0.53.1-cp37-cp37m-win32.whl", hash = "sha256:2e96958ed2ca7e6d967b2ce29c8da0ca47117e1de28e7c30b2c8c57386506fa5"},
{file = "numba-0.53.1-cp37-cp37m-win_amd64.whl", hash = "sha256:276f9d1674fe08d95872d81b97267c6b39dd830f05eb992608cbede50fcf48a9"},
{file = "numba-0.53.1-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:4c4c8d102512ae472af52c76ad9522da718c392cb59f4cd6785d711fa5051a2a"},
{file = "numba-0.53.1-cp38-cp38-manylinux2014_i686.whl", hash = "sha256:691adbeac17dbdf6ed7c759e9e33a522351f07d2065fe926b264b6b2c15fd89b"},
{file = "numba-0.53.1-cp38-cp38-manylinux2014_x86_64.whl", hash = "sha256:94aab3e0e9e8754116325ce026e1b29ae72443c706a3104cf7f3368dc3012912"},
{file = "numba-0.53.1-cp38-cp38-win32.whl", hash = "sha256:aabeec89bb3e3162136eea492cea7ee8882ddcda2201f05caecdece192c40896"},
{file = "numba-0.53.1-cp38-cp38-win_amd64.whl", hash = "sha256:1895ebd256819ff22256cd6fe24aa8f7470b18acc73e7917e8e93c9ac7f565dc"},
{file = "numba-0.53.1-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:224d197a46a9e602a16780d87636e199e2cdef528caef084a4d8fd8909c2455c"},
{file = "numba-0.53.1-cp39-cp39-manylinux2014_i686.whl", hash = "sha256:aba7acb247a09d7f12bd17a8e28bbb04e8adef9fc20ca29835d03b7894e1b49f"},
{file = "numba-0.53.1-cp39-cp39-manylinux2014_x86_64.whl", hash = "sha256:bd126f1f49da6fc4b3169cf1d96f1c3b3f84a7badd11fe22da344b923a00e744"},
{file = "numba-0.53.1-cp39-cp39-win32.whl", hash = "sha256:0ef9d1f347b251282ae46e5a5033600aa2d0dfa1ee8c16cb8137b8cd6f79e221"},
{file = "numba-0.53.1-cp39-cp39-win_amd64.whl", hash = "sha256:17146885cbe4e89c9d4abd4fcb8886dee06d4591943dc4343500c36ce2fcfa69"},
{file = "numba-0.53.1.tar.gz", hash = "sha256:9cd4e5216acdc66c4e9dab2dfd22ddb5bef151185c070d4a3cd8e78638aff5b0"},
]
numpy = [
{file = "numpy-1.23.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e603ca1fb47b913942f3e660a15e55a9ebca906857edfea476ae5f0fe9b457d5"},
{file = "numpy-1.23.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:633679a472934b1c20a12ed0c9a6c9eb167fbb4cb89031939bfd03dd9dbc62b8"},
{file = "numpy-1.23.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:17e5226674f6ea79e14e3b91bfbc153fdf3ac13f5cc54ee7bc8fdbe820a32da0"},
{file = "numpy-1.23.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bdc02c0235b261925102b1bd586579b7158e9d0d07ecb61148a1799214a4afd5"},
{file = "numpy-1.23.2-cp310-cp310-win32.whl", hash = "sha256:df28dda02c9328e122661f399f7655cdcbcf22ea42daa3650a26bce08a187450"},
{file = "numpy-1.23.2-cp310-cp310-win_amd64.whl", hash = "sha256:8ebf7e194b89bc66b78475bd3624d92980fca4e5bb86dda08d677d786fefc414"},
{file = "numpy-1.23.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:dc76bca1ca98f4b122114435f83f1fcf3c0fe48e4e6f660e07996abf2f53903c"},
{file = "numpy-1.23.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:ecfdd68d334a6b97472ed032b5b37a30d8217c097acfff15e8452c710e775524"},
{file = "numpy-1.23.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5593f67e66dea4e237f5af998d31a43e447786b2154ba1ad833676c788f37cde"},
{file = "numpy-1.23.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ac987b35df8c2a2eab495ee206658117e9ce867acf3ccb376a19e83070e69418"},
{file = "numpy-1.23.2-cp311-cp311-win32.whl", hash = "sha256:d98addfd3c8728ee8b2c49126f3c44c703e2b005d4a95998e2167af176a9e722"},
{file = "numpy-1.23.2-cp311-cp311-win_amd64.whl", hash = "sha256:8ecb818231afe5f0f568c81f12ce50f2b828ff2b27487520d85eb44c71313b9e"},
{file = "numpy-1.23.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:909c56c4d4341ec8315291a105169d8aae732cfb4c250fbc375a1efb7a844f8f"},
{file = "numpy-1.23.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:8247f01c4721479e482cc2f9f7d973f3f47810cbc8c65e38fd1bbd3141cc9842"},
{file = "numpy-1.23.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b8b97a8a87cadcd3f94659b4ef6ec056261fa1e1c3317f4193ac231d4df70215"},
{file = "numpy-1.23.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bd5b7ccae24e3d8501ee5563e82febc1771e73bd268eef82a1e8d2b4d556ae66"},
{file = "numpy-1.23.2-cp38-cp38-win32.whl", hash = "sha256:9b83d48e464f393d46e8dd8171687394d39bc5abfe2978896b77dc2604e8635d"},
{file = "numpy-1.23.2-cp38-cp38-win_amd64.whl", hash = "sha256:dec198619b7dbd6db58603cd256e092bcadef22a796f778bf87f8592b468441d"},
{file = "numpy-1.23.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:4f41f5bf20d9a521f8cab3a34557cd77b6f205ab2116651f12959714494268b0"},
{file = "numpy-1.23.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:806cc25d5c43e240db709875e947076b2826f47c2c340a5a2f36da5bb10c58d6"},
{file = "numpy-1.23.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8f9d84a24889ebb4c641a9b99e54adb8cab50972f0166a3abc14c3b93163f074"},
{file = "numpy-1.23.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c403c81bb8ffb1c993d0165a11493fd4bf1353d258f6997b3ee288b0a48fce77"},
{file = "numpy-1.23.2-cp39-cp39-win32.whl", hash = "sha256:cf8c6aed12a935abf2e290860af8e77b26a042eb7f2582ff83dc7ed5f963340c"},
{file = "numpy-1.23.2-cp39-cp39-win_amd64.whl", hash = "sha256:5e28cd64624dc2354a349152599e55308eb6ca95a13ce6a7d5679ebff2962913"},
{file = "numpy-1.23.2-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:806970e69106556d1dd200e26647e9bee5e2b3f1814f9da104a943e8d548ca38"},
{file = "numpy-1.23.2-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2bd879d3ca4b6f39b7770829f73278b7c5e248c91d538aab1e506c628353e47f"},
{file = "numpy-1.23.2-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:be6b350dfbc7f708d9d853663772a9310783ea58f6035eec649fb9c4371b5389"},
{file = "numpy-1.23.2.tar.gz", hash = "sha256:b78d00e48261fbbd04aa0d7427cf78d18401ee0abd89c7559bbf422e5b1c7d01"},
]
oauthlib = [
{file = "oauthlib-3.2.0-py3-none-any.whl", hash = "sha256:6db33440354787f9b7f3a6dbd4febf5d0f93758354060e802f6c06cb493022fe"},
{file = "oauthlib-3.2.0.tar.gz", hash = "sha256:23a8208d75b902797ea29fd31fa80a15ed9dc2c6c16fe73f5d346f83f6fa27a2"},
]
opt-einsum = [
{file = "opt_einsum-3.3.0-py3-none-any.whl", hash = "sha256:2455e59e3947d3c275477df7f5205b30635e266fe6dc300e3d9f9646bfcea147"},
{file = "opt_einsum-3.3.0.tar.gz", hash = "sha256:59f6475f77bbc37dcf7cd748519c0ec60722e91e63ca114e68821c0c54a46549"},
]
packaging = [
{file = "packaging-21.3-py3-none-any.whl", hash = "sha256:ef103e05f519cdc783ae24ea4e2e0f508a9c99b2d4969652eed6a2e1ea5bd522"},
{file = "packaging-21.3.tar.gz", hash = "sha256:dd47c42927d89ab911e606518907cc2d3a1f38bbd026385970643f9c5b8ecfeb"},
]
pandas = [
{file = "pandas-1.4.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:d51674ed8e2551ef7773820ef5dab9322be0828629f2cbf8d1fc31a0c4fed640"},
{file = "pandas-1.4.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:16ad23db55efcc93fa878f7837267973b61ea85d244fc5ff0ccbcfa5638706c5"},
{file = "pandas-1.4.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:958a0588149190c22cdebbc0797e01972950c927a11a900fe6c2296f207b1d6f"},
{file = "pandas-1.4.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e48fbb64165cda451c06a0f9e4c7a16b534fcabd32546d531b3c240ce2844112"},
{file = "pandas-1.4.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6f803320c9da732cc79210d7e8cc5c8019aad512589c910c66529eb1b1818230"},
{file = "pandas-1.4.3-cp310-cp310-win_amd64.whl", hash = "sha256:2893e923472a5e090c2d5e8db83e8f907364ec048572084c7d10ef93546be6d1"},
{file = "pandas-1.4.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:24ea75f47bbd5574675dae21d51779a4948715416413b30614c1e8b480909f81"},
{file = "pandas-1.4.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:d5ebc990bd34f4ac3c73a2724c2dcc9ee7bf1ce6cf08e87bb25c6ad33507e318"},
{file = "pandas-1.4.3-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:d6c0106415ff1a10c326c49bc5dd9ea8b9897a6ca0c8688eb9c30ddec49535ef"},
{file = "pandas-1.4.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:78b00429161ccb0da252229bcda8010b445c4bf924e721265bec5a6e96a92e92"},
{file = "pandas-1.4.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6dfbf16b1ea4f4d0ee11084d9c026340514d1d30270eaa82a9f1297b6c8ecbf0"},
{file = "pandas-1.4.3-cp38-cp38-win32.whl", hash = "sha256:48350592665ea3cbcd07efc8c12ff12d89be09cd47231c7925e3b8afada9d50d"},
{file = "pandas-1.4.3-cp38-cp38-win_amd64.whl", hash = "sha256:605d572126eb4ab2eadf5c59d5d69f0608df2bf7bcad5c5880a47a20a0699e3e"},
{file = "pandas-1.4.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:a3924692160e3d847e18702bb048dc38e0e13411d2b503fecb1adf0fcf950ba4"},
{file = "pandas-1.4.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:07238a58d7cbc8a004855ade7b75bbd22c0db4b0ffccc721556bab8a095515f6"},
{file = "pandas-1.4.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:755679c49460bd0d2f837ab99f0a26948e68fa0718b7e42afbabd074d945bf84"},
{file = "pandas-1.4.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:41fc406e374590a3d492325b889a2686b31e7a7780bec83db2512988550dadbf"},
{file = "pandas-1.4.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1d9382f72a4f0e93909feece6fef5500e838ce1c355a581b3d8f259839f2ea76"},
{file = "pandas-1.4.3-cp39-cp39-win32.whl", hash = "sha256:0daf876dba6c622154b2e6741f29e87161f844e64f84801554f879d27ba63c0d"},
{file = "pandas-1.4.3-cp39-cp39-win_amd64.whl", hash = "sha256:721a3dd2f06ef942f83a819c0f3f6a648b2830b191a72bbe9451bcd49c3bd42e"},
{file = "pandas-1.4.3.tar.gz", hash = "sha256:2ff7788468e75917574f080cd4681b27e1a7bf36461fe968b49a87b5a54d007c"},
]
pandoc = [
{file = "pandoc-2.2.tar.gz", hash = "sha256:d063ee252f2761010f16cf3a14912ad9245187c24c56f4b1ad9696e104fe6e1e"},
]
pandocfilters = [
{file = "pandocfilters-1.5.0-py2.py3-none-any.whl", hash = "sha256:33aae3f25fd1a026079f5d27bdd52496f0e0803b3469282162bafdcbdf6ef14f"},
{file = "pandocfilters-1.5.0.tar.gz", hash = "sha256:0b679503337d233b4339a817bfc8c50064e2eff681314376a47cb582305a7a38"},
]
parso = [
{file = "parso-0.8.3-py2.py3-none-any.whl", hash = "sha256:c001d4636cd3aecdaf33cbb40aebb59b094be2a74c556778ef5576c175e19e75"},
{file = "parso-0.8.3.tar.gz", hash = "sha256:8c07be290bb59f03588915921e29e8a50002acaf2cdc5fa0e0114f91709fafa0"},
]
pastel = [
{file = "pastel-0.2.1-py2.py3-none-any.whl", hash = "sha256:4349225fcdf6c2bb34d483e523475de5bb04a5c10ef711263452cb37d7dd4364"},
{file = "pastel-0.2.1.tar.gz", hash = "sha256:e6581ac04e973cac858828c6202c1e1e81fee1dc7de7683f3e1ffe0bfd8a573d"},
]
pathspec = [
{file = "pathspec-0.9.0-py2.py3-none-any.whl", hash = "sha256:7d15c4ddb0b5c802d161efc417ec1a2558ea2653c2e8ad9c19098201dc1c993a"},
{file = "pathspec-0.9.0.tar.gz", hash = "sha256:e564499435a2673d586f6b2130bb5b95f04a3ba06f81b8f895b651a3c76aabb1"},
]
patsy = [
{file = "patsy-0.5.2-py2.py3-none-any.whl", hash = "sha256:cc80955ae8c13a7e7c4051eda7b277c8f909f50bc7d73e124bc38e2ee3d95041"},
{file = "patsy-0.5.2.tar.gz", hash = "sha256:5053de7804676aba62783dbb0f23a2b3d74e35e5bfa238b88b7cbf148a38b69d"},
]
pexpect = [
{file = "pexpect-4.8.0-py2.py3-none-any.whl", hash = "sha256:0b48a55dcb3c05f3329815901ea4fc1537514d6ba867a152b581d69ae3710937"},
{file = "pexpect-4.8.0.tar.gz", hash = "sha256:fc65a43959d153d0114afe13997d439c22823a27cefceb5ff35c2178c6784c0c"},
]
pickleshare = [
{file = "pickleshare-0.7.5-py2.py3-none-any.whl", hash = "sha256:9649af414d74d4df115d5d718f82acb59c9d418196b7b4290ed47a12ce62df56"},
{file = "pickleshare-0.7.5.tar.gz", hash = "sha256:87683d47965c1da65cdacaf31c8441d12b8044cdec9aca500cd78fc2c683afca"},
]
pillow = [
{file = "Pillow-9.2.0-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:a9c9bc489f8ab30906d7a85afac4b4944a572a7432e00698a7239f44a44e6efb"},
{file = "Pillow-9.2.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:510cef4a3f401c246cfd8227b300828715dd055463cdca6176c2e4036df8bd4f"},
{file = "Pillow-9.2.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7888310f6214f19ab2b6df90f3f06afa3df7ef7355fc025e78a3044737fab1f5"},
{file = "Pillow-9.2.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:831e648102c82f152e14c1a0938689dbb22480c548c8d4b8b248b3e50967b88c"},
{file = "Pillow-9.2.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1cc1d2451e8a3b4bfdb9caf745b58e6c7a77d2e469159b0d527a4554d73694d1"},
{file = "Pillow-9.2.0-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:136659638f61a251e8ed3b331fc6ccd124590eeff539de57c5f80ef3a9594e58"},
{file = "Pillow-9.2.0-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:6e8c66f70fb539301e064f6478d7453e820d8a2c631da948a23384865cd95544"},
{file = "Pillow-9.2.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:37ff6b522a26d0538b753f0b4e8e164fdada12db6c6f00f62145d732d8a3152e"},
{file = "Pillow-9.2.0-cp310-cp310-win32.whl", hash = "sha256:c79698d4cd9318d9481d89a77e2d3fcaeff5486be641e60a4b49f3d2ecca4e28"},
{file = "Pillow-9.2.0-cp310-cp310-win_amd64.whl", hash = "sha256:254164c57bab4b459f14c64e93df11eff5ded575192c294a0c49270f22c5d93d"},
{file = "Pillow-9.2.0-cp311-cp311-macosx_10_10_universal2.whl", hash = "sha256:408673ed75594933714482501fe97e055a42996087eeca7e5d06e33218d05aa8"},
{file = "Pillow-9.2.0-cp311-cp311-macosx_11_0_universal2.whl", hash = "sha256:727dd1389bc5cb9827cbd1f9d40d2c2a1a0c9b32dd2261db522d22a604a6eec9"},
{file = "Pillow-9.2.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:50dff9cc21826d2977ef2d2a205504034e3a4563ca6f5db739b0d1026658e004"},
{file = "Pillow-9.2.0-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cb6259196a589123d755380b65127ddc60f4c64b21fc3bb46ce3a6ea663659b0"},
{file = "Pillow-9.2.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7b0554af24df2bf96618dac71ddada02420f946be943b181108cac55a7a2dcd4"},
{file = "Pillow-9.2.0-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:15928f824870535c85dbf949c09d6ae7d3d6ac2d6efec80f3227f73eefba741c"},
{file = "Pillow-9.2.0-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:bdd0de2d64688ecae88dd8935012c4a72681e5df632af903a1dca8c5e7aa871a"},
{file = "Pillow-9.2.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:d5b87da55a08acb586bad5c3aa3b86505f559b84f39035b233d5bf844b0834b1"},
{file = "Pillow-9.2.0-cp311-cp311-win32.whl", hash = "sha256:b6d5e92df2b77665e07ddb2e4dbd6d644b78e4c0d2e9272a852627cdba0d75cf"},
{file = "Pillow-9.2.0-cp311-cp311-win_amd64.whl", hash = "sha256:6bf088c1ce160f50ea40764f825ec9b72ed9da25346216b91361eef8ad1b8f8c"},
{file = "Pillow-9.2.0-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:2c58b24e3a63efd22554c676d81b0e57f80e0a7d3a5874a7e14ce90ec40d3069"},
{file = "Pillow-9.2.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:eef7592281f7c174d3d6cbfbb7ee5984a671fcd77e3fc78e973d492e9bf0eb3f"},
{file = "Pillow-9.2.0-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:dcd7b9c7139dc8258d164b55696ecd16c04607f1cc33ba7af86613881ffe4ac8"},
{file = "Pillow-9.2.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a138441e95562b3c078746a22f8fca8ff1c22c014f856278bdbdd89ca36cff1b"},
{file = "Pillow-9.2.0-cp37-cp37m-manylinux_2_28_aarch64.whl", hash = "sha256:93689632949aff41199090eff5474f3990b6823404e45d66a5d44304e9cdc467"},
{file = "Pillow-9.2.0-cp37-cp37m-manylinux_2_28_x86_64.whl", hash = "sha256:f3fac744f9b540148fa7715a435d2283b71f68bfb6d4aae24482a890aed18b59"},
{file = "Pillow-9.2.0-cp37-cp37m-win32.whl", hash = "sha256:fa768eff5f9f958270b081bb33581b4b569faabf8774726b283edb06617101dc"},
{file = "Pillow-9.2.0-cp37-cp37m-win_amd64.whl", hash = "sha256:69bd1a15d7ba3694631e00df8de65a8cb031911ca11f44929c97fe05eb9b6c1d"},
{file = "Pillow-9.2.0-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:030e3460861488e249731c3e7ab59b07c7853838ff3b8e16aac9561bb345da14"},
{file = "Pillow-9.2.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:74a04183e6e64930b667d321524e3c5361094bb4af9083db5c301db64cd341f3"},
{file = "Pillow-9.2.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2d33a11f601213dcd5718109c09a52c2a1c893e7461f0be2d6febc2879ec2402"},
{file = "Pillow-9.2.0-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1fd6f5e3c0e4697fa7eb45b6e93996299f3feee73a3175fa451f49a74d092b9f"},
{file = "Pillow-9.2.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a647c0d4478b995c5e54615a2e5360ccedd2f85e70ab57fbe817ca613d5e63b8"},
{file = "Pillow-9.2.0-cp38-cp38-manylinux_2_28_aarch64.whl", hash = "sha256:4134d3f1ba5f15027ff5c04296f13328fecd46921424084516bdb1b2548e66ff"},
{file = "Pillow-9.2.0-cp38-cp38-manylinux_2_28_x86_64.whl", hash = "sha256:bc431b065722a5ad1dfb4df354fb9333b7a582a5ee39a90e6ffff688d72f27a1"},
{file = "Pillow-9.2.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:1536ad017a9f789430fb6b8be8bf99d2f214c76502becc196c6f2d9a75b01b76"},
{file = "Pillow-9.2.0-cp38-cp38-win32.whl", hash = "sha256:2ad0d4df0f5ef2247e27fc790d5c9b5a0af8ade9ba340db4a73bb1a4a3e5fb4f"},
{file = "Pillow-9.2.0-cp38-cp38-win_amd64.whl", hash = "sha256:ec52c351b35ca269cb1f8069d610fc45c5bd38c3e91f9ab4cbbf0aebc136d9c8"},
{file = "Pillow-9.2.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:0ed2c4ef2451de908c90436d6e8092e13a43992f1860275b4d8082667fbb2ffc"},
{file = "Pillow-9.2.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:4ad2f835e0ad81d1689f1b7e3fbac7b01bb8777d5a985c8962bedee0cc6d43da"},
{file = "Pillow-9.2.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ea98f633d45f7e815db648fd7ff0f19e328302ac36427343e4432c84432e7ff4"},
{file = "Pillow-9.2.0-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7761afe0126d046974a01e030ae7529ed0ca6a196de3ec6937c11df0df1bc91c"},
{file = "Pillow-9.2.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9a54614049a18a2d6fe156e68e188da02a046a4a93cf24f373bffd977e943421"},
{file = "Pillow-9.2.0-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:5aed7dde98403cd91d86a1115c78d8145c83078e864c1de1064f52e6feb61b20"},
{file = "Pillow-9.2.0-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:13b725463f32df1bfeacbf3dd197fb358ae8ebcd8c5548faa75126ea425ccb60"},
{file = "Pillow-9.2.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:808add66ea764ed97d44dda1ac4f2cfec4c1867d9efb16a33d158be79f32b8a4"},
{file = "Pillow-9.2.0-cp39-cp39-win32.whl", hash = "sha256:337a74fd2f291c607d220c793a8135273c4c2ab001b03e601c36766005f36885"},
{file = "Pillow-9.2.0-cp39-cp39-win_amd64.whl", hash = "sha256:fac2d65901fb0fdf20363fbd345c01958a742f2dc62a8dd4495af66e3ff502a4"},
{file = "Pillow-9.2.0-pp37-pypy37_pp73-macosx_10_10_x86_64.whl", hash = "sha256:ad2277b185ebce47a63f4dc6302e30f05762b688f8dc3de55dbae4651872cdf3"},
{file = "Pillow-9.2.0-pp37-pypy37_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7c7b502bc34f6e32ba022b4a209638f9e097d7a9098104ae420eb8186217ebbb"},
{file = "Pillow-9.2.0-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3d1f14f5f691f55e1b47f824ca4fdcb4b19b4323fe43cc7bb105988cad7496be"},
{file = "Pillow-9.2.0-pp37-pypy37_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:dfe4c1fedfde4e2fbc009d5ad420647f7730d719786388b7de0999bf32c0d9fd"},
{file = "Pillow-9.2.0-pp38-pypy38_pp73-macosx_10_10_x86_64.whl", hash = "sha256:f07f1f00e22b231dd3d9b9208692042e29792d6bd4f6639415d2f23158a80013"},
{file = "Pillow-9.2.0-pp38-pypy38_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1802f34298f5ba11d55e5bb09c31997dc0c6aed919658dfdf0198a2fe75d5490"},
{file = "Pillow-9.2.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:17d4cafe22f050b46d983b71c707162d63d796a1235cdf8b9d7a112e97b15bac"},
{file = "Pillow-9.2.0-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:96b5e6874431df16aee0c1ba237574cb6dff1dcb173798faa6a9d8b399a05d0e"},
{file = "Pillow-9.2.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:0030fdbd926fb85844b8b92e2f9449ba89607231d3dd597a21ae72dc7fe26927"},
{file = "Pillow-9.2.0.tar.gz", hash = "sha256:75e636fd3e0fb872693f23ccb8a5ff2cd578801251f3a4f6854c6a5d437d3c04"},
]
pkginfo = [
{file = "pkginfo-1.8.3-py2.py3-none-any.whl", hash = "sha256:848865108ec99d4901b2f7e84058b6e7660aae8ae10164e015a6dcf5b242a594"},
{file = "pkginfo-1.8.3.tar.gz", hash = "sha256:a84da4318dd86f870a9447a8c98340aa06216bfc6f2b7bdc4b8766984ae1867c"},
]
pkgutil-resolve-name = [
{file = "pkgutil_resolve_name-1.3.10-py3-none-any.whl", hash = "sha256:ca27cc078d25c5ad71a9de0a7a330146c4e014c2462d9af19c6b828280649c5e"},
{file = "pkgutil_resolve_name-1.3.10.tar.gz", hash = "sha256:357d6c9e6a755653cfd78893817c0853af365dd51ec97f3d358a819373bbd174"},
]
platformdirs = [
{file = "platformdirs-2.5.2-py3-none-any.whl", hash = "sha256:027d8e83a2d7de06bbac4e5ef7e023c02b863d7ea5d079477e722bb41ab25788"},
{file = "platformdirs-2.5.2.tar.gz", hash = "sha256:58c8abb07dcb441e6ee4b11d8df0ac856038f944ab98b7be6b27b2a3c7feef19"},
]
pluggy = [
{file = "pluggy-1.0.0-py2.py3-none-any.whl", hash = "sha256:74134bbf457f031a36d68416e1509f34bd5ccc019f0bcc952c7b909d06b37bd3"},
{file = "pluggy-1.0.0.tar.gz", hash = "sha256:4224373bacce55f955a878bf9cfa763c1e360858e330072059e10bad68531159"},
]
plumbum = [
{file = "plumbum-1.7.2-py2.py3-none-any.whl", hash = "sha256:0bbf431e31da988405de2fb36c3226f09c0c9cdf69c8480f8997f4b94b7370a1"},
{file = "plumbum-1.7.2.tar.gz", hash = "sha256:0d1bf908076bbd0484d16412479cb97d6843069ee19f99e267e11dd980040523"},
]
ply = [
{file = "ply-3.11-py2.py3-none-any.whl", hash = "sha256:096f9b8350b65ebd2fd1346b12452efe5b9607f7482813ffca50c22722a807ce"},
{file = "ply-3.11.tar.gz", hash = "sha256:00c7c1aaa88358b9c765b6d3000c6eec0ba42abca5351b095321aef446081da3"},
]
poethepoet = [
{file = "poethepoet-0.16.0-py3-none-any.whl", hash = "sha256:87482ea8bba4e5db4abbd8e6360baee73b2ce0f3d5f5e99e81cdfa39d72d118f"},
{file = "poethepoet-0.16.0.tar.gz", hash = "sha256:6455aec39f198be92dbf210a4416e1635119e967204c092b431c8b10024db1d1"},
]
progressbar2 = [
{file = "progressbar2-4.0.0-py2.py3-none-any.whl", hash = "sha256:2562ba3e554433f08e81fb7b786208b19de135f3ca1c5da1787d9b05558e6247"},
{file = "progressbar2-4.0.0.tar.gz", hash = "sha256:14d3165a1781d053ffaa117daf27cc706128d2ec1d2977fdb05b6bb079888013"},
]
prometheus-client = [
{file = "prometheus_client-0.14.1-py3-none-any.whl", hash = "sha256:522fded625282822a89e2773452f42df14b5a8e84a86433e3f8a189c1d54dc01"},
{file = "prometheus_client-0.14.1.tar.gz", hash = "sha256:5459c427624961076277fdc6dc50540e2bacb98eebde99886e59ec55ed92093a"},
]
prompt-toolkit = [
{file = "prompt_toolkit-3.0.30-py3-none-any.whl", hash = "sha256:d8916d3f62a7b67ab353a952ce4ced6a1d2587dfe9ef8ebc30dd7c386751f289"},
{file = "prompt_toolkit-3.0.30.tar.gz", hash = "sha256:859b283c50bde45f5f97829f77a4674d1c1fcd88539364f1b28a37805cfd89c0"},
]
protobuf = [
{file = "protobuf-3.19.4-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:f51d5a9f137f7a2cec2d326a74b6e3fc79d635d69ffe1b036d39fc7d75430d37"},
{file = "protobuf-3.19.4-cp310-cp310-manylinux2014_aarch64.whl", hash = "sha256:09297b7972da685ce269ec52af761743714996b4381c085205914c41fcab59fb"},
{file = "protobuf-3.19.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:072fbc78d705d3edc7ccac58a62c4c8e0cec856987da7df8aca86e647be4e35c"},
{file = "protobuf-3.19.4-cp310-cp310-win32.whl", hash = "sha256:7bb03bc2873a2842e5ebb4801f5c7ff1bfbdf426f85d0172f7644fcda0671ae0"},
{file = "protobuf-3.19.4-cp310-cp310-win_amd64.whl", hash = "sha256:f358aa33e03b7a84e0d91270a4d4d8f5df6921abe99a377828839e8ed0c04e07"},
{file = "protobuf-3.19.4-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:1c91ef4110fdd2c590effb5dca8fdbdcb3bf563eece99287019c4204f53d81a4"},
{file = "protobuf-3.19.4-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c438268eebb8cf039552897d78f402d734a404f1360592fef55297285f7f953f"},
{file = "protobuf-3.19.4-cp36-cp36m-win32.whl", hash = "sha256:835a9c949dc193953c319603b2961c5c8f4327957fe23d914ca80d982665e8ee"},
{file = "protobuf-3.19.4-cp36-cp36m-win_amd64.whl", hash = "sha256:4276cdec4447bd5015453e41bdc0c0c1234eda08420b7c9a18b8d647add51e4b"},
{file = "protobuf-3.19.4-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:6cbc312be5e71869d9d5ea25147cdf652a6781cf4d906497ca7690b7b9b5df13"},
{file = "protobuf-3.19.4-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:54a1473077f3b616779ce31f477351a45b4fef8c9fd7892d6d87e287a38df368"},
{file = "protobuf-3.19.4-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:435bb78b37fc386f9275a7035fe4fb1364484e38980d0dd91bc834a02c5ec909"},
{file = "protobuf-3.19.4-cp37-cp37m-win32.whl", hash = "sha256:16f519de1313f1b7139ad70772e7db515b1420d208cb16c6d7858ea989fc64a9"},
{file = "protobuf-3.19.4-cp37-cp37m-win_amd64.whl", hash = "sha256:cdc076c03381f5c1d9bb1abdcc5503d9ca8b53cf0a9d31a9f6754ec9e6c8af0f"},
{file = "protobuf-3.19.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:69da7d39e39942bd52848438462674c463e23963a1fdaa84d88df7fbd7e749b2"},
{file = "protobuf-3.19.4-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:48ed3877fa43e22bcacc852ca76d4775741f9709dd9575881a373bd3e85e54b2"},
{file = "protobuf-3.19.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bd95d1dfb9c4f4563e6093a9aa19d9c186bf98fa54da5252531cc0d3a07977e7"},
{file = "protobuf-3.19.4-cp38-cp38-win32.whl", hash = "sha256:b38057450a0c566cbd04890a40edf916db890f2818e8682221611d78dc32ae26"},
{file = "protobuf-3.19.4-cp38-cp38-win_amd64.whl", hash = "sha256:7ca7da9c339ca8890d66958f5462beabd611eca6c958691a8fe6eccbd1eb0c6e"},
{file = "protobuf-3.19.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:36cecbabbda242915529b8ff364f2263cd4de7c46bbe361418b5ed859677ba58"},
{file = "protobuf-3.19.4-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:c1068287025f8ea025103e37d62ffd63fec8e9e636246b89c341aeda8a67c934"},
{file = "protobuf-3.19.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:96bd766831596d6014ca88d86dc8fe0fb2e428c0b02432fd9db3943202bf8c5e"},
{file = "protobuf-3.19.4-cp39-cp39-win32.whl", hash = "sha256:84123274d982b9e248a143dadd1b9815049f4477dc783bf84efe6250eb4b836a"},
{file = "protobuf-3.19.4-cp39-cp39-win_amd64.whl", hash = "sha256:3112b58aac3bac9c8be2b60a9daf6b558ca3f7681c130dcdd788ade7c9ffbdca"},
{file = "protobuf-3.19.4-py2.py3-none-any.whl", hash = "sha256:8961c3a78ebfcd000920c9060a262f082f29838682b1f7201889300c1fbe0616"},
{file = "protobuf-3.19.4.tar.gz", hash = "sha256:9df0c10adf3e83015ced42a9a7bd64e13d06c4cf45c340d2c63020ea04499d0a"},
]
psutil = [
{file = "psutil-5.9.1-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:799759d809c31aab5fe4579e50addf84565e71c1dc9f1c31258f159ff70d3f87"},
{file = "psutil-5.9.1-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:9272167b5f5fbfe16945be3db475b3ce8d792386907e673a209da686176552af"},
{file = "psutil-5.9.1-cp27-cp27m-win32.whl", hash = "sha256:0904727e0b0a038830b019551cf3204dd48ef5c6868adc776e06e93d615fc5fc"},
{file = "psutil-5.9.1-cp27-cp27m-win_amd64.whl", hash = "sha256:e7e10454cb1ab62cc6ce776e1c135a64045a11ec4c6d254d3f7689c16eb3efd2"},
{file = "psutil-5.9.1-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:56960b9e8edcca1456f8c86a196f0c3d8e3e361320071c93378d41445ffd28b0"},
{file = "psutil-5.9.1-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:44d1826150d49ffd62035785a9e2c56afcea66e55b43b8b630d7706276e87f22"},
{file = "psutil-5.9.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:c7be9d7f5b0d206f0bbc3794b8e16fb7dbc53ec9e40bbe8787c6f2d38efcf6c9"},
{file = "psutil-5.9.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:abd9246e4cdd5b554a2ddd97c157e292ac11ef3e7af25ac56b08b455c829dca8"},
{file = "psutil-5.9.1-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:29a442e25fab1f4d05e2655bb1b8ab6887981838d22effa2396d584b740194de"},
{file = "psutil-5.9.1-cp310-cp310-win32.whl", hash = "sha256:20b27771b077dcaa0de1de3ad52d22538fe101f9946d6dc7869e6f694f079329"},
{file = "psutil-5.9.1-cp310-cp310-win_amd64.whl", hash = "sha256:58678bbadae12e0db55186dc58f2888839228ac9f41cc7848853539b70490021"},
{file = "psutil-5.9.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:3a76ad658641172d9c6e593de6fe248ddde825b5866464c3b2ee26c35da9d237"},
{file = "psutil-5.9.1-cp36-cp36m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a6a11e48cb93a5fa606306493f439b4aa7c56cb03fc9ace7f6bfa21aaf07c453"},
{file = "psutil-5.9.1-cp36-cp36m-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:068935df39055bf27a29824b95c801c7a5130f118b806eee663cad28dca97685"},
{file = "psutil-5.9.1-cp36-cp36m-win32.whl", hash = "sha256:0f15a19a05f39a09327345bc279c1ba4a8cfb0172cc0d3c7f7d16c813b2e7d36"},
{file = "psutil-5.9.1-cp36-cp36m-win_amd64.whl", hash = "sha256:db417f0865f90bdc07fa30e1aadc69b6f4cad7f86324b02aa842034efe8d8c4d"},
{file = "psutil-5.9.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:91c7ff2a40c373d0cc9121d54bc5f31c4fa09c346528e6a08d1845bce5771ffc"},
{file = "psutil-5.9.1-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fea896b54f3a4ae6f790ac1d017101252c93f6fe075d0e7571543510f11d2676"},
{file = "psutil-5.9.1-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3054e923204b8e9c23a55b23b6df73a8089ae1d075cb0bf711d3e9da1724ded4"},
{file = "psutil-5.9.1-cp37-cp37m-win32.whl", hash = "sha256:d2d006286fbcb60f0b391741f520862e9b69f4019b4d738a2a45728c7e952f1b"},
{file = "psutil-5.9.1-cp37-cp37m-win_amd64.whl", hash = "sha256:b14ee12da9338f5e5b3a3ef7ca58b3cba30f5b66f7662159762932e6d0b8f680"},
{file = "psutil-5.9.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:19f36c16012ba9cfc742604df189f2f28d2720e23ff7d1e81602dbe066be9fd1"},
{file = "psutil-5.9.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:944c4b4b82dc4a1b805329c980f270f170fdc9945464223f2ec8e57563139cf4"},
{file = "psutil-5.9.1-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4b6750a73a9c4a4e689490ccb862d53c7b976a2a35c4e1846d049dcc3f17d83b"},
{file = "psutil-5.9.1-cp38-cp38-win32.whl", hash = "sha256:a8746bfe4e8f659528c5c7e9af5090c5a7d252f32b2e859c584ef7d8efb1e689"},
{file = "psutil-5.9.1-cp38-cp38-win_amd64.whl", hash = "sha256:79c9108d9aa7fa6fba6e668b61b82facc067a6b81517cab34d07a84aa89f3df0"},
{file = "psutil-5.9.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:28976df6c64ddd6320d281128817f32c29b539a52bdae5e192537bc338a9ec81"},
{file = "psutil-5.9.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b88f75005586131276634027f4219d06e0561292be8bd6bc7f2f00bdabd63c4e"},
{file = "psutil-5.9.1-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:645bd4f7bb5b8633803e0b6746ff1628724668681a434482546887d22c7a9537"},
{file = "psutil-5.9.1-cp39-cp39-win32.whl", hash = "sha256:32c52611756096ae91f5d1499fe6c53b86f4a9ada147ee42db4991ba1520e574"},
{file = "psutil-5.9.1-cp39-cp39-win_amd64.whl", hash = "sha256:f65f9a46d984b8cd9b3750c2bdb419b2996895b005aefa6cbaba9a143b1ce2c5"},
{file = "psutil-5.9.1.tar.gz", hash = "sha256:57f1819b5d9e95cdfb0c881a8a5b7d542ed0b7c522d575706a80bedc848c8954"},
]
ptyprocess = [
{file = "ptyprocess-0.7.0-py2.py3-none-any.whl", hash = "sha256:4b41f3967fce3af57cc7e94b888626c18bf37a083e3651ca8feeb66d492fef35"},
{file = "ptyprocess-0.7.0.tar.gz", hash = "sha256:5c5d0a3b48ceee0b48485e0c26037c0acd7d29765ca3fbb5cb3831d347423220"},
]
pure-eval = [
{file = "pure_eval-0.2.2-py3-none-any.whl", hash = "sha256:01eaab343580944bc56080ebe0a674b39ec44a945e6d09ba7db3cb8cec289350"},
{file = "pure_eval-0.2.2.tar.gz", hash = "sha256:2b45320af6dfaa1750f543d714b6d1c520a1688dec6fd24d339063ce0aaa9ac3"},
]
py = [
{file = "py-1.11.0-py2.py3-none-any.whl", hash = "sha256:607c53218732647dff4acdfcd50cb62615cedf612e72d1724fb1a0cc6405b378"},
{file = "py-1.11.0.tar.gz", hash = "sha256:51c75c4126074b472f746a24399ad32f6053d1b34b68d2fa41e558e6f4a98719"},
]
pyasn1 = [
{file = "pyasn1-0.4.8-py2.4.egg", hash = "sha256:fec3e9d8e36808a28efb59b489e4528c10ad0f480e57dcc32b4de5c9d8c9fdf3"},
{file = "pyasn1-0.4.8-py2.5.egg", hash = "sha256:0458773cfe65b153891ac249bcf1b5f8f320b7c2ce462151f8fa74de8934becf"},
{file = "pyasn1-0.4.8-py2.6.egg", hash = "sha256:5c9414dcfede6e441f7e8f81b43b34e834731003427e5b09e4e00e3172a10f00"},
{file = "pyasn1-0.4.8-py2.7.egg", hash = "sha256:6e7545f1a61025a4e58bb336952c5061697da694db1cae97b116e9c46abcf7c8"},
{file = "pyasn1-0.4.8-py2.py3-none-any.whl", hash = "sha256:39c7e2ec30515947ff4e87fb6f456dfc6e84857d34be479c9d4a4ba4bf46aa5d"},
{file = "pyasn1-0.4.8-py3.1.egg", hash = "sha256:78fa6da68ed2727915c4767bb386ab32cdba863caa7dbe473eaae45f9959da86"},
{file = "pyasn1-0.4.8-py3.2.egg", hash = "sha256:08c3c53b75eaa48d71cf8c710312316392ed40899cb34710d092e96745a358b7"},
{file = "pyasn1-0.4.8-py3.3.egg", hash = "sha256:03840c999ba71680a131cfaee6fab142e1ed9bbd9c693e285cc6aca0d555e576"},
{file = "pyasn1-0.4.8-py3.4.egg", hash = "sha256:7ab8a544af125fb704feadb008c99a88805126fb525280b2270bb25cc1d78a12"},
{file = "pyasn1-0.4.8-py3.5.egg", hash = "sha256:e89bf84b5437b532b0803ba5c9a5e054d21fec423a89952a74f87fa2c9b7bce2"},
{file = "pyasn1-0.4.8-py3.6.egg", hash = "sha256:014c0e9976956a08139dc0712ae195324a75e142284d5f87f1a87ee1b068a359"},
{file = "pyasn1-0.4.8-py3.7.egg", hash = "sha256:99fcc3c8d804d1bc6d9a099921e39d827026409a58f2a720dcdb89374ea0c776"},
{file = "pyasn1-0.4.8.tar.gz", hash = "sha256:aef77c9fb94a3ac588e87841208bdec464471d9871bd5050a287cc9a475cd0ba"},
]
pyasn1-modules = [
{file = "pyasn1-modules-0.2.8.tar.gz", hash = "sha256:905f84c712230b2c592c19470d3ca8d552de726050d1d1716282a1f6146be65e"},
{file = "pyasn1_modules-0.2.8-py2.4.egg", hash = "sha256:0fe1b68d1e486a1ed5473f1302bd991c1611d319bba158e98b106ff86e1d7199"},
{file = "pyasn1_modules-0.2.8-py2.5.egg", hash = "sha256:fe0644d9ab041506b62782e92b06b8c68cca799e1a9636ec398675459e031405"},
{file = "pyasn1_modules-0.2.8-py2.6.egg", hash = "sha256:a99324196732f53093a84c4369c996713eb8c89d360a496b599fb1a9c47fc3eb"},
{file = "pyasn1_modules-0.2.8-py2.7.egg", hash = "sha256:0845a5582f6a02bb3e1bde9ecfc4bfcae6ec3210dd270522fee602365430c3f8"},
{file = "pyasn1_modules-0.2.8-py2.py3-none-any.whl", hash = "sha256:a50b808ffeb97cb3601dd25981f6b016cbb3d31fbf57a8b8a87428e6158d0c74"},
{file = "pyasn1_modules-0.2.8-py3.1.egg", hash = "sha256:f39edd8c4ecaa4556e989147ebf219227e2cd2e8a43c7e7fcb1f1c18c5fd6a3d"},
{file = "pyasn1_modules-0.2.8-py3.2.egg", hash = "sha256:b80486a6c77252ea3a3e9b1e360bc9cf28eaac41263d173c032581ad2f20fe45"},
{file = "pyasn1_modules-0.2.8-py3.3.egg", hash = "sha256:65cebbaffc913f4fe9e4808735c95ea22d7a7775646ab690518c056784bc21b4"},
{file = "pyasn1_modules-0.2.8-py3.4.egg", hash = "sha256:15b7c67fabc7fc240d87fb9aabf999cf82311a6d6fb2c70d00d3d0604878c811"},
{file = "pyasn1_modules-0.2.8-py3.5.egg", hash = "sha256:426edb7a5e8879f1ec54a1864f16b882c2837bfd06eee62f2c982315ee2473ed"},
{file = "pyasn1_modules-0.2.8-py3.6.egg", hash = "sha256:cbac4bc38d117f2a49aeedec4407d23e8866ea4ac27ff2cf7fb3e5b570df19e0"},
{file = "pyasn1_modules-0.2.8-py3.7.egg", hash = "sha256:c29a5e5cc7a3f05926aff34e097e84f8589cd790ce0ed41b67aed6857b26aafd"},
]
pycodestyle = [
{file = "pycodestyle-2.8.0-py2.py3-none-any.whl", hash = "sha256:720f8b39dde8b293825e7ff02c475f3077124006db4f440dcbc9a20b76548a20"},
{file = "pycodestyle-2.8.0.tar.gz", hash = "sha256:eddd5847ef438ea1c7870ca7eb78a9d47ce0cdb4851a5523949f2601d0cbbe7f"},
]
pycparser = [
{file = "pycparser-2.21-py2.py3-none-any.whl", hash = "sha256:8ee45429555515e1f6b185e78100aea234072576aa43ab53aefcae078162fca9"},
{file = "pycparser-2.21.tar.gz", hash = "sha256:e644fdec12f7872f86c58ff790da456218b10f863970249516d60a5eaca77206"},
]
pydata-sphinx-theme = [
{file = "pydata_sphinx_theme-0.9.0-py3-none-any.whl", hash = "sha256:b22b442a6d6437e5eaf0a1f057169ffcb31eaa9f10be7d5481a125e735c71c12"},
{file = "pydata_sphinx_theme-0.9.0.tar.gz", hash = "sha256:03598a86915b596f4bf80bef79a4d33276a83e670bf360def699dbb9f99dc57a"},
]
pydot = [
{file = "pydot-1.4.2-py2.py3-none-any.whl", hash = "sha256:66c98190c65b8d2e2382a441b4c0edfdb4f4c025ef9cb9874de478fb0793a451"},
{file = "pydot-1.4.2.tar.gz", hash = "sha256:248081a39bcb56784deb018977e428605c1c758f10897a339fce1dd728ff007d"},
]
pydotplus = [
{file = "pydotplus-2.0.2.tar.gz", hash = "sha256:91e85e9ee9b85d2391ead7d635e3d9c7f5f44fd60a60e59b13e2403fa66505c4"},
]
pyflakes = [
{file = "pyflakes-2.4.0-py2.py3-none-any.whl", hash = "sha256:3bb3a3f256f4b7968c9c788781e4ff07dce46bdf12339dcda61053375426ee2e"},
{file = "pyflakes-2.4.0.tar.gz", hash = "sha256:05a85c2872edf37a4ed30b0cce2f6093e1d0581f8c19d7393122da7e25b2b24c"},
]
pygam = [
{file = "pygam-0.8.0-py2.py3-none-any.whl", hash = "sha256:198bd478700520b7c399cc4bcbc011e46850969c32fb09ef0b7a4bbb14e842a5"},
{file = "pygam-0.8.0.tar.gz", hash = "sha256:5cae01aea8b2fede72a6da0aba1490213af54b3476745666af26bbe700479166"},
]
pygments = [
{file = "Pygments-2.13.0-py3-none-any.whl", hash = "sha256:f643f331ab57ba3c9d89212ee4a2dabc6e94f117cf4eefde99a0574720d14c42"},
{file = "Pygments-2.13.0.tar.gz", hash = "sha256:56a8508ae95f98e2b9bdf93a6be5ae3f7d8af858b43e02c5a2ff083726be40c1"},
]
pygraphviz = [
{file = "pygraphviz-1.10.zip", hash = "sha256:457e093a888128903251a266a8cc16b4ba93f3f6334b3ebfed92c7471a74d867"},
]
pyparsing = [
{file = "pyparsing-3.0.9-py3-none-any.whl", hash = "sha256:5026bae9a10eeaefb61dab2f09052b9f4307d44aee4eda64b309723d8d206bbc"},
{file = "pyparsing-3.0.9.tar.gz", hash = "sha256:2b020ecf7d21b687f219b71ecad3631f644a47f01403fa1d1036b0c6416d70fb"},
]
pyro-api = [
{file = "pyro-api-0.1.2.tar.gz", hash = "sha256:a1b900d9580aa1c2fab3b123ab7ff33413744da7c5f440bd4aadc4d40d14d920"},
{file = "pyro_api-0.1.2-py3-none-any.whl", hash = "sha256:10e0e42e9e4401ce464dab79c870e50dfb4f413d326fa777f3582928ef9caf8f"},
]
pyro-ppl = [
{file = "pyro-ppl-1.8.1.tar.gz", hash = "sha256:d7c049eb2e7485a612b4dd99c24c309cc860c7cbc6b1973387034f5436d1c8d6"},
{file = "pyro_ppl-1.8.1-py3-none-any.whl", hash = "sha256:ca01ab4565eb9a1af4a60dbc481da5cb6f5fe5a72efa19e83638e03683efbca6"},
]
pyrsistent = [
{file = "pyrsistent-0.18.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:df46c854f490f81210870e509818b729db4488e1f30f2a1ce1698b2295a878d1"},
{file = "pyrsistent-0.18.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d45866ececf4a5fff8742c25722da6d4c9e180daa7b405dc0a2a2790d668c26"},
{file = "pyrsistent-0.18.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4ed6784ceac462a7d6fcb7e9b663e93b9a6fb373b7f43594f9ff68875788e01e"},
{file = "pyrsistent-0.18.1-cp310-cp310-win32.whl", hash = "sha256:e4f3149fd5eb9b285d6bfb54d2e5173f6a116fe19172686797c056672689daf6"},
{file = "pyrsistent-0.18.1-cp310-cp310-win_amd64.whl", hash = "sha256:636ce2dc235046ccd3d8c56a7ad54e99d5c1cd0ef07d9ae847306c91d11b5fec"},
{file = "pyrsistent-0.18.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:e92a52c166426efbe0d1ec1332ee9119b6d32fc1f0bbfd55d5c1088070e7fc1b"},
{file = "pyrsistent-0.18.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d7a096646eab884bf8bed965bad63ea327e0d0c38989fc83c5ea7b8a87037bfc"},
{file = "pyrsistent-0.18.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cdfd2c361b8a8e5d9499b9082b501c452ade8bbf42aef97ea04854f4a3f43b22"},
{file = "pyrsistent-0.18.1-cp37-cp37m-win32.whl", hash = "sha256:7ec335fc998faa4febe75cc5268a9eac0478b3f681602c1f27befaf2a1abe1d8"},
{file = "pyrsistent-0.18.1-cp37-cp37m-win_amd64.whl", hash = "sha256:6455fc599df93d1f60e1c5c4fe471499f08d190d57eca040c0ea182301321286"},
{file = "pyrsistent-0.18.1-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:fd8da6d0124efa2f67d86fa70c851022f87c98e205f0594e1fae044e7119a5a6"},
{file = "pyrsistent-0.18.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7bfe2388663fd18bd8ce7db2c91c7400bf3e1a9e8bd7d63bf7e77d39051b85ec"},
{file = "pyrsistent-0.18.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0e3e1fcc45199df76053026a51cc59ab2ea3fc7c094c6627e93b7b44cdae2c8c"},
{file = "pyrsistent-0.18.1-cp38-cp38-win32.whl", hash = "sha256:b568f35ad53a7b07ed9b1b2bae09eb15cdd671a5ba5d2c66caee40dbf91c68ca"},
{file = "pyrsistent-0.18.1-cp38-cp38-win_amd64.whl", hash = "sha256:d1b96547410f76078eaf66d282ddca2e4baae8964364abb4f4dcdde855cd123a"},
{file = "pyrsistent-0.18.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:f87cc2863ef33c709e237d4b5f4502a62a00fab450c9e020892e8e2ede5847f5"},
{file = "pyrsistent-0.18.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6bc66318fb7ee012071b2792024564973ecc80e9522842eb4e17743604b5e045"},
{file = "pyrsistent-0.18.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:914474c9f1d93080338ace89cb2acee74f4f666fb0424896fcfb8d86058bf17c"},
{file = "pyrsistent-0.18.1-cp39-cp39-win32.whl", hash = "sha256:1b34eedd6812bf4d33814fca1b66005805d3640ce53140ab8bbb1e2651b0d9bc"},
{file = "pyrsistent-0.18.1-cp39-cp39-win_amd64.whl", hash = "sha256:e24a828f57e0c337c8d8bb9f6b12f09dfdf0273da25fda9e314f0b684b415a07"},
{file = "pyrsistent-0.18.1.tar.gz", hash = "sha256:d4d61f8b993a7255ba714df3aca52700f8125289f84f704cf80916517c46eb96"},
]
pytest = [
{file = "pytest-7.1.2-py3-none-any.whl", hash = "sha256:13d0e3ccfc2b6e26be000cb6568c832ba67ba32e719443bfe725814d3c42433c"},
{file = "pytest-7.1.2.tar.gz", hash = "sha256:a06a0425453864a270bc45e71f783330a7428defb4230fb5e6a731fde06ecd45"},
]
pytest-cov = [
{file = "pytest-cov-3.0.0.tar.gz", hash = "sha256:e7f0f5b1617d2210a2cabc266dfe2f4c75a8d32fb89eafb7ad9d06f6d076d470"},
{file = "pytest_cov-3.0.0-py3-none-any.whl", hash = "sha256:578d5d15ac4a25e5f961c938b85a05b09fdaae9deef3bb6de9a6e766622ca7a6"},
]
pytest-split = [
{file = "pytest-split-0.8.0.tar.gz", hash = "sha256:8571a3f60ca8656c698ed86b0a3212bb9e79586ecb201daef9988c336ff0e6ff"},
{file = "pytest_split-0.8.0-py3-none-any.whl", hash = "sha256:2e06b8b1ab7ceb19d0b001548271abaf91d12415a8687086cf40581c555d309f"},
]
python-dateutil = [
{file = "python-dateutil-2.8.2.tar.gz", hash = "sha256:0123cacc1627ae19ddf3c27a5de5bd67ee4586fbdd6440d9748f8abb483d3e86"},
{file = "python_dateutil-2.8.2-py2.py3-none-any.whl", hash = "sha256:961d03dc3453ebbc59dbdea9e4e11c5651520a876d0f4db161e8674aae935da9"},
]
python-utils = [
{file = "python-utils-3.3.3.tar.gz", hash = "sha256:3b1c8b706e40e91280eec5fa72ea730880a166cee99afa5555b863d55664478d"},
{file = "python_utils-3.3.3-py2.py3-none-any.whl", hash = "sha256:5cb9cf295018202fb4d6abdd694a33a7f08bc0ce1bf1eab8cce80ab2cde35c07"},
]
pytz = [
{file = "pytz-2022.2.1-py2.py3-none-any.whl", hash = "sha256:220f481bdafa09c3955dfbdddb7b57780e9a94f5127e35456a48589b9e0c0197"},
{file = "pytz-2022.2.1.tar.gz", hash = "sha256:cea221417204f2d1a2aa03ddae3e867921971d0d76f14d87abb4414415bbdcf5"},
]
pytz-deprecation-shim = [
{file = "pytz_deprecation_shim-0.1.0.post0-py2.py3-none-any.whl", hash = "sha256:8314c9692a636c8eb3bda879b9f119e350e93223ae83e70e80c31675a0fdc1a6"},
{file = "pytz_deprecation_shim-0.1.0.post0.tar.gz", hash = "sha256:af097bae1b616dde5c5744441e2ddc69e74dfdcb0c263129610d85b87445a59d"},
]
pywin32 = [
{file = "pywin32-304-cp310-cp310-win32.whl", hash = "sha256:3c7bacf5e24298c86314f03fa20e16558a4e4138fc34615d7de4070c23e65af3"},
{file = "pywin32-304-cp310-cp310-win_amd64.whl", hash = "sha256:4f32145913a2447736dad62495199a8e280a77a0ca662daa2332acf849f0be48"},
{file = "pywin32-304-cp310-cp310-win_arm64.whl", hash = "sha256:d3ee45adff48e0551d1aa60d2ec066fec006083b791f5c3527c40cd8aefac71f"},
{file = "pywin32-304-cp311-cp311-win32.whl", hash = "sha256:30c53d6ce44c12a316a06c153ea74152d3b1342610f1b99d40ba2795e5af0269"},
{file = "pywin32-304-cp311-cp311-win_amd64.whl", hash = "sha256:7ffa0c0fa4ae4077e8b8aa73800540ef8c24530057768c3ac57c609f99a14fd4"},
{file = "pywin32-304-cp311-cp311-win_arm64.whl", hash = "sha256:cbbe34dad39bdbaa2889a424d28752f1b4971939b14b1bb48cbf0182a3bcfc43"},
{file = "pywin32-304-cp36-cp36m-win32.whl", hash = "sha256:be253e7b14bc601718f014d2832e4c18a5b023cbe72db826da63df76b77507a1"},
{file = "pywin32-304-cp36-cp36m-win_amd64.whl", hash = "sha256:de9827c23321dcf43d2f288f09f3b6d772fee11e809015bdae9e69fe13213988"},
{file = "pywin32-304-cp37-cp37m-win32.whl", hash = "sha256:f64c0377cf01b61bd5e76c25e1480ca8ab3b73f0c4add50538d332afdf8f69c5"},
{file = "pywin32-304-cp37-cp37m-win_amd64.whl", hash = "sha256:bb2ea2aa81e96eee6a6b79d87e1d1648d3f8b87f9a64499e0b92b30d141e76df"},
{file = "pywin32-304-cp38-cp38-win32.whl", hash = "sha256:94037b5259701988954931333aafd39cf897e990852115656b014ce72e052e96"},
{file = "pywin32-304-cp38-cp38-win_amd64.whl", hash = "sha256:ead865a2e179b30fb717831f73cf4373401fc62fbc3455a0889a7ddac848f83e"},
{file = "pywin32-304-cp39-cp39-win32.whl", hash = "sha256:25746d841201fd9f96b648a248f731c1dec851c9a08b8e33da8b56148e4c65cc"},
{file = "pywin32-304-cp39-cp39-win_amd64.whl", hash = "sha256:d24a3382f013b21aa24a5cfbfad5a2cd9926610c0affde3e8ab5b3d7dbcf4ac9"},
]
pywin32-ctypes = [
{file = "pywin32-ctypes-0.2.0.tar.gz", hash = "sha256:24ffc3b341d457d48e8922352130cf2644024a4ff09762a2261fd34c36ee5942"},
{file = "pywin32_ctypes-0.2.0-py2.py3-none-any.whl", hash = "sha256:9dc2d991b3479cc2df15930958b674a48a227d5361d413827a4cfd0b5876fc98"},
]
pywinpty = [
{file = "pywinpty-2.0.7-cp310-none-win_amd64.whl", hash = "sha256:d56361ed2bd3395347882a7a4e6246359e745a233e89c91786ab3d9421323c17"},
{file = "pywinpty-2.0.7-cp37-none-win_amd64.whl", hash = "sha256:2d62ede3ed10feb0901b3b4667201766a741b6a2c69f27be623ba9fe9348447b"},
{file = "pywinpty-2.0.7-cp38-none-win_amd64.whl", hash = "sha256:c3b7e6a2f0e5f86e0dc5cb5e4fec7de19adacc6900232e4a48a2ecf04bae447f"},
{file = "pywinpty-2.0.7-cp39-none-win_amd64.whl", hash = "sha256:80a6713a586401c2a19efd2969ffd019eb85f18442611a3880e3d618887d2f84"},
{file = "pywinpty-2.0.7.tar.gz", hash = "sha256:f52b2e51c46dac40708ede1d42577f3ddb9d7cf8acaa36c8e27b3d3b975f4c95"},
]
pyzmq = [
{file = "pyzmq-23.2.1-cp310-cp310-macosx_10_15_universal2.whl", hash = "sha256:a3fd44b5046d247e7f0f1660bcafe7b5fb0db55d0934c05dd57dda9e1f823ce7"},
{file = "pyzmq-23.2.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:2141e6798d5981be04c08996d27962086a1aa3ea536fe9cf7e89817fd4523f86"},
{file = "pyzmq-23.2.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9a39ddb0431a68954bd318b923230fa5b649c9c62b0e8340388820c5f1b15bd2"},
{file = "pyzmq-23.2.1-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e06747014a5ad1b28cebf5bc1ddcdaccfb44e9b441d35e6feb1286c8a72e54be"},
{file = "pyzmq-23.2.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7e0113d70b095339e99bb522fe7294f5ae6a7f3b2b8f52f659469a74b5cc7661"},
{file = "pyzmq-23.2.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:71b32a1e827bdcbf73750e60370d3b07685816ff3d8695f450f0f8c3226503f8"},
{file = "pyzmq-23.2.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:55568a020ad2cae9ae36da6058e7ca332a56df968f601cbdb7cf6efb2a77579a"},
{file = "pyzmq-23.2.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:8c02a0cd39dc01659b3d6cb70bb3a41aebd9885fd78239acdd8d9c91351c4568"},
{file = "pyzmq-23.2.1-cp310-cp310-win32.whl", hash = "sha256:e1fe30bcd5aea5948c42685fad910cd285eacb2518ea4dc6c170d6b535bee95d"},
{file = "pyzmq-23.2.1-cp310-cp310-win_amd64.whl", hash = "sha256:650389bbfca73955b262b2230423d89992f38ec48033307ae80e700eaa2fbb63"},
{file = "pyzmq-23.2.1-cp311-cp311-macosx_10_15_universal2.whl", hash = "sha256:e753eee6d3b93c5354e8ba0a1d62956ee49355f0a36e00570823ef64e66183f5"},
{file = "pyzmq-23.2.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:f07016e3cf088dbfc6e7c5a7b3f540db5c23b0190d539e4fd3e2b5e6beffa4b5"},
{file = "pyzmq-23.2.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4805af9614b0b41b7e57d17673459facf85604dac502a5a9244f6e8c9a4de658"},
{file = "pyzmq-23.2.1-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:39dd252b683816935702825e5bf775df16090619ced9bb4ba68c2d0b6f0c9b18"},
{file = "pyzmq-23.2.1-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:84678153432241bcdca2210cf4ff83560b200556867aea913ffbb960f5d5f340"},
{file = "pyzmq-23.2.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:90d88f9d9a2ae6cfb1dc4ea2d1710cdf6456bc1b9a06dd1bb485c5d298f2517e"},
{file = "pyzmq-23.2.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:794871988c34727c7f79bdfe2546e6854ae1fa2e1feb382784f23a9c6c63ecb3"},
{file = "pyzmq-23.2.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:c56b1a62a1fb87565343c57b6743fd5da6e138b8c6562361d7d9b5ce4acf399a"},
{file = "pyzmq-23.2.1-cp311-cp311-win32.whl", hash = "sha256:c3ebf1668664d20c8f7d468955f18379b7d1f7bc8946b13243d050fa3888c7ff"},
{file = "pyzmq-23.2.1-cp311-cp311-win_amd64.whl", hash = "sha256:ec9803aca9491fd6f0d853d2a6147f19f8deaaa23b1b713d05c5d09e56ea7142"},
{file = "pyzmq-23.2.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:385609812eafd9970c3752c51f2f6c4f224807e3e441bcfd8c8273877d00c8a8"},
{file = "pyzmq-23.2.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b861db65f6b8906c8d6db51dde2448f266f0c66bf28db2c37aea50f58a849859"},
{file = "pyzmq-23.2.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:6b1e79bba24f6df1712e3188d5c32c480d8eda03e8ecff44dc8ecb0805fa62f3"},
{file = "pyzmq-23.2.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:8dc66f109a245653b19df0f44a5af7a3f14cb8ad6c780ead506158a057bd36ce"},
{file = "pyzmq-23.2.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:b815991c7d024bf461f358ad871f2be1135576274caed5749c4828859e40354e"},
{file = "pyzmq-23.2.1-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:29b74774a0bfd3c4d98ac853f0bdca55bd9ec89d5b0def5486407cca54472ef8"},
{file = "pyzmq-23.2.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:4bb798bef181648827019001f6be43e1c48b34b477763b37a8d27d8c06d197b8"},
{file = "pyzmq-23.2.1-cp36-cp36m-win32.whl", hash = "sha256:565bd5ab81f6964fc4067ccf2e00877ad0fa917308975694bbb54378389215f8"},
{file = "pyzmq-23.2.1-cp36-cp36m-win_amd64.whl", hash = "sha256:1f368a82b29f80071781b20663c0fc0c8f6b13273f9f5abe1526af939534f90f"},
{file = "pyzmq-23.2.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:c9cfaf530e6a7ff65f0afe275e99f983f68b54dfb23ea401f0bc297a632766b6"},
{file = "pyzmq-23.2.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5c558b50402fca1acc94329c5d8f12aa429738904a5cfb32b9ed3c61235221bb"},
{file = "pyzmq-23.2.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:20bafc4095eab00f41a510579363a3f5e1f5c69d7ee10f1d88895c4df0259183"},
{file = "pyzmq-23.2.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:f619fd38fc2641abfb53cca719c165182500600b82c695cc548a0f05f764be05"},
{file = "pyzmq-23.2.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:044447ae4b2016a6b8697571fd633f799f860b19b76c4a2fd9b1140d52ee6745"},
{file = "pyzmq-23.2.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:49d30ba7074f469e8167917abf9eb854c6503ae10153034a6d4df33618f1db5f"},
{file = "pyzmq-23.2.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:48400b96788cdaca647021bf19a9cd668384f46e4d9c55cf045bdd17f65299c8"},
{file = "pyzmq-23.2.1-cp37-cp37m-win32.whl", hash = "sha256:8a68f57b7a3f7b6b52ada79876be1efb97c8c0952423436e84d70cc139f16f0d"},
{file = "pyzmq-23.2.1-cp37-cp37m-win_amd64.whl", hash = "sha256:9e5bf6e7239fc9687239de7a283aa8b801ab85371116045b33ae20132a1325d6"},
{file = "pyzmq-23.2.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:ffc6b1623d0f9affb351db4ca61f432dca3628a5ee015f9bf2bfbe9c6836881c"},
{file = "pyzmq-23.2.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:4d6f110c56f7d5b4d64dde3a382ae61b6d48174e30742859d8e971b18b6c9e5c"},
{file = "pyzmq-23.2.1-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:9269fbfe3a4eb2009199120861c4571ef1655fdf6951c3e7f233567c94e8c602"},
{file = "pyzmq-23.2.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:12e62ff0d5223ec09b597ab6d73858b9f64a51221399f3cb08aa495e1dff7935"},
{file = "pyzmq-23.2.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:6fd5d0d50cbcf4bc376861529a907bed026a4cbe8c22a500ff8243231ef02433"},
{file = "pyzmq-23.2.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:9d0ab2936085c85a1fc6f9fd8f89d5235ae99b051e90ec5baa5e73ad44346e1f"},
{file = "pyzmq-23.2.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:022cf5ea7bcaa8a06a03c2706e0ae66904b6138b2155577cd34c64bc7cc637ab"},
{file = "pyzmq-23.2.1-cp38-cp38-win32.whl", hash = "sha256:28dbdb90b2f6b131f8f10e6081012e4e25234213433420e67e0c1162de537113"},
{file = "pyzmq-23.2.1-cp38-cp38-win_amd64.whl", hash = "sha256:10d1910ec381b851aeb024a042a13db178cb1edf125e76a4e9d2548ad103aadb"},
{file = "pyzmq-23.2.1-cp39-cp39-macosx_10_15_universal2.whl", hash = "sha256:99a5a77a10863493a1ee8dece02578c6b32025fb3afff91b40476bc489e81648"},
{file = "pyzmq-23.2.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:aecd6ceaccc4b594e0092d6513ef3f1c0fa678dd89f86bb8ff1a47014b8fca35"},
{file = "pyzmq-23.2.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:415ff62ac525d9add1e3550430a09b9928d2d24a20cc4ce809e67caac41219ab"},
{file = "pyzmq-23.2.1-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:67975a9e1237b9ccc78f457bef17691bbdd2055a9d26e81ee914ba376846d0ce"},
{file = "pyzmq-23.2.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:38e106b64bad744fe469dc3dd864f2764d66399178c1bf39d45294cc7980f14f"},
{file = "pyzmq-23.2.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:8c842109d31a9281d678f668629241c405928afbebd913c48a5a8e7aee61f63d"},
{file = "pyzmq-23.2.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:fefdf9b685fda4141b95ebec975946076a5e0723ff70b037032b2085c5317684"},
{file = "pyzmq-23.2.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:79a87831b47a9f6161ad23fa5e89d5469dc585abc49f90b9b07fea8905ae1234"},
{file = "pyzmq-23.2.1-cp39-cp39-win32.whl", hash = "sha256:342ca3077f47ec2ee41b9825142b614e03e026347167cbc72a59b618c4f6106c"},
{file = "pyzmq-23.2.1-cp39-cp39-win_amd64.whl", hash = "sha256:5e05492be125dce279721d6b54fd1b956546ecc4bcdfcf8e7b4c413bc0874c10"},
{file = "pyzmq-23.2.1-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:07ed8aaf7ffe150af873269690cc654ffeca7491f62aae0f3821baa181f8d5fe"},
{file = "pyzmq-23.2.1-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:ad28ddb40db8e450d7d4bf8a1d765d3f87b63b10e7e9a825a3c130c6371a8c03"},
{file = "pyzmq-23.2.1-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:2f67b63f53c6994d601404fd1a329e6d940ac3dd1d92946a93b2b9c70df67b9f"},
{file = "pyzmq-23.2.1-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c890309296f53f9aa32ffcfc51d805705e1982bffd27c9692a8f1e1b8de279f4"},
{file = "pyzmq-23.2.1-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:624fd38071a817644acdae075b92a23ea0bdd126a58148288e8284d23ec361ce"},
{file = "pyzmq-23.2.1-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:a114992a193577cb62233abf8cb2832970f9975805a64740e325d2f895e7f85a"},
{file = "pyzmq-23.2.1-pp38-pypy38_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:c780acddd2934c6831ff832ecbf78a45a7b62d4eb216480f863854a8b7d54fa7"},
{file = "pyzmq-23.2.1-pp38-pypy38_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:d904f6595acfaaf99a1a61881fea068500c40374d263e5e073aa4005e5f9c28a"},
{file = "pyzmq-23.2.1-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:929d548b74c0f82f7f95b54e4a43f9e4ce2523cfb8a54d3f7141e45652304b2a"},
{file = "pyzmq-23.2.1-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:f392cbea531b7142d1958c0d4a0c9c8d760dc451e5848d8dd3387804d3e3e62c"},
{file = "pyzmq-23.2.1-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:a0f09d85c45f58aa8e715b42f8b26beba68b3b63a8f7049113478aca26efbc30"},
{file = "pyzmq-23.2.1-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:23e708fbfdf4ee3107422b69ca65da1b9f056b431fc0888096a8c1d6cd908e8f"},
{file = "pyzmq-23.2.1-pp39-pypy39_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:35e635343ff367f697d00fa1484262bb68e36bc74c9b80737eac5a1e04c4e1b1"},
{file = "pyzmq-23.2.1-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:efb9e38b2a590282704269585de7eb33bf43dc294cad092e1b172e23d4c217e5"},
{file = "pyzmq-23.2.1-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:407f909c4e8fde62fbdad9ebd448319792258cc0550c2815567a4d9d8d9e6d18"},
{file = "pyzmq-23.2.1.tar.gz", hash = "sha256:2b381aa867ece7d0a82f30a0c7f3d4387b7cf2e0697e33efaa5bed6c5784abcd"},
]
qtconsole = [
{file = "qtconsole-5.3.1-py3-none-any.whl", hash = "sha256:d364592d7ede3257f1e17fcdbfd339c26e2cc638ca4fa4ee56a724e26ea13c81"},
{file = "qtconsole-5.3.1.tar.gz", hash = "sha256:b73723fac43938b684dcb237a88510dc7721c43a726cea8ade179a2927c0a2f3"},
]
qtpy = [
{file = "QtPy-2.2.0-py3-none-any.whl", hash = "sha256:d283cfba378b0dbe36a55b68aea8ee2f86cd6ccf06c023af25bbe705ffbb29e5"},
{file = "QtPy-2.2.0.tar.gz", hash = "sha256:d85f1b121f24a41ad26c55c446e66abdb7c528839f8c4f11f156ec4541903914"},
]
readme-renderer = [
{file = "readme_renderer-36.0-py3-none-any.whl", hash = "sha256:2c37e472ca96755caba6cc58bcbf673a5574bc033385a2ac91d85dfef2799876"},
{file = "readme_renderer-36.0.tar.gz", hash = "sha256:f71aeef9a588fcbed1f4cc001ba611370e94a0cd27c75b1140537618ec78f0a2"},
]
requests = [
{file = "requests-2.28.1-py3-none-any.whl", hash = "sha256:8fefa2a1a1365bf5520aac41836fbee479da67864514bdb821f31ce07ce65349"},
{file = "requests-2.28.1.tar.gz", hash = "sha256:7c5599b102feddaa661c826c56ab4fee28bfd17f5abca1ebbe3e7f19d7c97983"},
]
requests-oauthlib = [
{file = "requests-oauthlib-1.3.1.tar.gz", hash = "sha256:75beac4a47881eeb94d5ea5d6ad31ef88856affe2332b9aafb52c6452ccf0d7a"},
{file = "requests_oauthlib-1.3.1-py2.py3-none-any.whl", hash = "sha256:2577c501a2fb8d05a304c09d090d6e47c306fef15809d102b327cf8364bddab5"},
]
requests-toolbelt = [
{file = "requests-toolbelt-0.9.1.tar.gz", hash = "sha256:968089d4584ad4ad7c171454f0a5c6dac23971e9472521ea3b6d49d610aa6fc0"},
{file = "requests_toolbelt-0.9.1-py2.py3-none-any.whl", hash = "sha256:380606e1d10dc85c3bd47bf5a6095f815ec007be7a8b69c878507068df059e6f"},
]
rfc3986 = [
{file = "rfc3986-2.0.0-py2.py3-none-any.whl", hash = "sha256:50b1502b60e289cb37883f3dfd34532b8873c7de9f49bb546641ce9cbd256ebd"},
{file = "rfc3986-2.0.0.tar.gz", hash = "sha256:97aacf9dbd4bfd829baad6e6309fa6573aaf1be3f6fa735c8ab05e46cecb261c"},
]
rich = [
{file = "rich-12.5.1-py3-none-any.whl", hash = "sha256:2eb4e6894cde1e017976d2975ac210ef515d7548bc595ba20e195fb9628acdeb"},
{file = "rich-12.5.1.tar.gz", hash = "sha256:63a5c5ce3673d3d5fbbf23cd87e11ab84b6b451436f1b7f19ec54b6bc36ed7ca"},
]
rpy2 = [
{file = "rpy2-3.5.3-cp310-cp310-macosx_10_15_x86_64.whl", hash = "sha256:9b553e92db1b6ffa51ea13f7b1b509816fdec2fc833ebe0b402ee81c8a56e491"},
{file = "rpy2-3.5.3-cp37-cp37m-macosx_10_15_x86_64.whl", hash = "sha256:9fb8f0914458c212dd2dd231ba0b7e849ca2b473f53da48553bc7e90976a9717"},
{file = "rpy2-3.5.3-cp38-cp38-macosx_10_15_x86_64.whl", hash = "sha256:b9c8275d7720c312c387d61a38afe22ef3165053ec707a5cb3a06b7ace5bb10c"},
{file = "rpy2-3.5.3-cp39-cp39-macosx_10_15_x86_64.whl", hash = "sha256:e1fda1489dd297fb8f44d33e6042e56a4517109b5606b2ac59bed9a2b7e29850"},
{file = "rpy2-3.5.3.tar.gz", hash = "sha256:53a092d48b44f46428fb30cb3155664d6d2f7af08ebc4c45df98df4c45a42ccb"},
]
rsa = [
{file = "rsa-4.9-py3-none-any.whl", hash = "sha256:90260d9058e514786967344d0ef75fa8727eed8a7d2e43ce9f4bcf1b536174f7"},
{file = "rsa-4.9.tar.gz", hash = "sha256:e38464a49c6c85d7f1351b0126661487a7e0a14a50f1675ec50eb34d4f20ef21"},
]
scikit-learn = [
{file = "scikit-learn-1.0.2.tar.gz", hash = "sha256:b5870959a5484b614f26d31ca4c17524b1b0317522199dc985c3b4256e030767"},
{file = "scikit_learn-1.0.2-cp310-cp310-macosx_10_13_x86_64.whl", hash = "sha256:da3c84694ff693b5b3194d8752ccf935a665b8b5edc33a283122f4273ca3e687"},
{file = "scikit_learn-1.0.2-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:75307d9ea39236cad7eea87143155eea24d48f93f3a2f9389c817f7019f00705"},
{file = "scikit_learn-1.0.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f14517e174bd7332f1cca2c959e704696a5e0ba246eb8763e6c24876d8710049"},
{file = "scikit_learn-1.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d9aac97e57c196206179f674f09bc6bffcd0284e2ba95b7fe0b402ac3f986023"},
{file = "scikit_learn-1.0.2-cp310-cp310-win_amd64.whl", hash = "sha256:d93d4c28370aea8a7cbf6015e8a669cd5d69f856cc2aa44e7a590fb805bb5583"},
{file = "scikit_learn-1.0.2-cp37-cp37m-macosx_10_13_x86_64.whl", hash = "sha256:85260fb430b795d806251dd3bb05e6f48cdc777ac31f2bcf2bc8bbed3270a8f5"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:a053a6a527c87c5c4fa7bf1ab2556fa16d8345cf99b6c5a19030a4a7cd8fd2c0"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:245c9b5a67445f6f044411e16a93a554edc1efdcce94d3fc0bc6a4b9ac30b752"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:158faf30684c92a78e12da19c73feff9641a928a8024b4fa5ec11d583f3d8a87"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:08ef968f6b72033c16c479c966bf37ccd49b06ea91b765e1cc27afefe723920b"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:16455ace947d8d9e5391435c2977178d0ff03a261571e67f627c8fee0f9d431a"},
{file = "scikit_learn-1.0.2-cp37-cp37m-win32.whl", hash = "sha256:2f3b453e0b149898577e301d27e098dfe1a36943f7bb0ad704d1e548efc3b448"},
{file = "scikit_learn-1.0.2-cp37-cp37m-win_amd64.whl", hash = "sha256:46f431ec59dead665e1370314dbebc99ead05e1c0a9df42f22d6a0e00044820f"},
{file = "scikit_learn-1.0.2-cp38-cp38-macosx_10_13_x86_64.whl", hash = "sha256:ff3fa8ea0e09e38677762afc6e14cad77b5e125b0ea70c9bba1992f02c93b028"},
{file = "scikit_learn-1.0.2-cp38-cp38-macosx_12_0_arm64.whl", hash = "sha256:9369b030e155f8188743eb4893ac17a27f81d28a884af460870c7c072f114243"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:7d6b2475f1c23a698b48515217eb26b45a6598c7b1840ba23b3c5acece658dbb"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:285db0352e635b9e3392b0b426bc48c3b485512d3b4ac3c7a44ec2a2ba061e66"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5cb33fe1dc6f73dc19e67b264dbb5dde2a0539b986435fdd78ed978c14654830"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b1391d1a6e2268485a63c3073111fe3ba6ec5145fc957481cfd0652be571226d"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bc3744dabc56b50bec73624aeca02e0def06b03cb287de26836e730659c5d29c"},
{file = "scikit_learn-1.0.2-cp38-cp38-win32.whl", hash = "sha256:a999c9f02ff9570c783069f1074f06fe7386ec65b84c983db5aeb8144356a355"},
{file = "scikit_learn-1.0.2-cp38-cp38-win_amd64.whl", hash = "sha256:7626a34eabbf370a638f32d1a3ad50526844ba58d63e3ab81ba91e2a7c6d037e"},
{file = "scikit_learn-1.0.2-cp39-cp39-macosx_10_13_x86_64.whl", hash = "sha256:a90b60048f9ffdd962d2ad2fb16367a87ac34d76e02550968719eb7b5716fd10"},
{file = "scikit_learn-1.0.2-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:7a93c1292799620df90348800d5ac06f3794c1316ca247525fa31169f6d25855"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:eabceab574f471de0b0eb3f2ecf2eee9f10b3106570481d007ed1c84ebf6d6a1"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:55f2f3a8414e14fbee03782f9fe16cca0f141d639d2b1c1a36779fa069e1db57"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:80095a1e4b93bd33261ef03b9bc86d6db649f988ea4dbcf7110d0cded8d7213d"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fa38a1b9b38ae1fad2863eff5e0d69608567453fdfc850c992e6e47eb764e846"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ff746a69ff2ef25f62b36338c615dd15954ddc3ab8e73530237dd73235e76d62"},
{file = "scikit_learn-1.0.2-cp39-cp39-win32.whl", hash = "sha256:e174242caecb11e4abf169342641778f68e1bfaba80cd18acd6bc84286b9a534"},
{file = "scikit_learn-1.0.2-cp39-cp39-win_amd64.whl", hash = "sha256:b54a62c6e318ddbfa7d22c383466d38d2ee770ebdb5ddb668d56a099f6eaf75f"},
]
scipy = [
{file = "scipy-1.9.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:0424d1bbbfa51d5ddaa16d067fd593863c9f2fb7c6840c32f8a08a8832f8e7a4"},
{file = "scipy-1.9.0-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:8f2232c9d9119ec356240255a715a289b3a33be828c3e4abac11fd052ce15b1e"},
{file = "scipy-1.9.0-cp310-cp310-macosx_12_0_universal2.macosx_10_9_x86_64.whl", hash = "sha256:e2004d2a3c397b26ca78e67c9d320153a1a9b71ae713ad33f4a3a3ab3d79cc65"},
{file = "scipy-1.9.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:45f0d6c0d6e55582d3b8f5c58ad4ca4259a02affb190f89f06c8cc02e21bba81"},
{file = "scipy-1.9.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:79dd7876614fc2869bf5d311ef33962d2066ea888bc66c80fd4fa80f8772e5a9"},
{file = "scipy-1.9.0-cp310-cp310-win_amd64.whl", hash = "sha256:10417935486b320d98536d732a58362e3d37e84add98c251e070c59a6bfe0863"},
{file = "scipy-1.9.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:adb6c438c6ef550e2bb83968e772b9690cb421f2c6073f9c2cb6af15ee538bc9"},
{file = "scipy-1.9.0-cp38-cp38-macosx_12_0_arm64.whl", hash = "sha256:8d541db2d441ef87afb60c4a2addb00c3af281633602a4967e733ef4b7050504"},
{file = "scipy-1.9.0-cp38-cp38-macosx_12_0_universal2.macosx_10_9_x86_64.whl", hash = "sha256:97a1f1e51ea30782d7baa8d0c52f72c3f9f05cb609cf1b990664231c5102bccd"},
{file = "scipy-1.9.0-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:16207622570af10f9e6a2cdc7da7a9660678852477adbcd056b6d1057a036fef"},
{file = "scipy-1.9.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bb687d245b6963673c639f318eea7e875d1ba147a67925586abed3d6f39bb7d8"},
{file = "scipy-1.9.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:73b704c5eea9be811919cae4caacf3180dd9212d9aed08477c1d2ba14900a9de"},
{file = "scipy-1.9.0-cp38-cp38-win32.whl", hash = "sha256:12005d30894e4fe7b247f7233ba0801a341f887b62e2eb99034dd6f2a8a33ad6"},
{file = "scipy-1.9.0-cp38-cp38-win_amd64.whl", hash = "sha256:fc58c3fcb8a724b703ffbc126afdca5a8353d4d5945d5c92db85617e165299e7"},
{file = "scipy-1.9.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:01c2015e132774feefe059d5354055fec6b751d7a7d70ad2cf5ce314e7426e2a"},
{file = "scipy-1.9.0-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:f7c3c578ff556333f3890c2df6c056955d53537bb176698359088108af73a58f"},
{file = "scipy-1.9.0-cp39-cp39-macosx_12_0_universal2.macosx_10_9_x86_64.whl", hash = "sha256:e2ac088ea4aa61115b96b47f5f3d94b3fa29554340b6629cd2bfe6b0521ee33b"},
{file = "scipy-1.9.0-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:5d1b9cf3771fd921f7213b4b886ab2606010343bb36259b544a816044576d69e"},
{file = "scipy-1.9.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d3a326673ac5afa9ef5613a61626b9ec15c8f7222b4ecd1ce0fd8fcba7b83c59"},
{file = "scipy-1.9.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:693b3fe2e7736ce0dbc72b4d933798eb6ca8ce51b8b934e3f547cc06f48b2afb"},
{file = "scipy-1.9.0-cp39-cp39-win32.whl", hash = "sha256:7bad16b91918bf3288089a78a4157e04892ea6475fb7a1d9bcdf32c30c8a3dba"},
{file = "scipy-1.9.0-cp39-cp39-win_amd64.whl", hash = "sha256:bd490f77f35800d5620f4d9af669e372d9a88db1f76ef219e1609cc4ecdd1a24"},
{file = "scipy-1.9.0.tar.gz", hash = "sha256:c0dfd7d2429452e7e94904c6a3af63cbaa3cf51b348bd9d35b42db7e9ad42791"},
]
seaborn = [
{file = "seaborn-0.11.2-py3-none-any.whl", hash = "sha256:85a6baa9b55f81a0623abddc4a26b334653ff4c6b18c418361de19dbba0ef283"},
{file = "seaborn-0.11.2.tar.gz", hash = "sha256:cf45e9286d40826864be0e3c066f98536982baf701a7caa386511792d61ff4f6"},
]
secretstorage = [
{file = "SecretStorage-3.3.3-py3-none-any.whl", hash = "sha256:f356e6628222568e3af06f2eba8df495efa13b3b63081dafd4f7d9a7b7bc9f99"},
{file = "SecretStorage-3.3.3.tar.gz", hash = "sha256:2403533ef369eca6d2ba81718576c5e0f564d5cca1b58f73a8b23e7d4eeebd77"},
]
send2trash = [
{file = "Send2Trash-1.8.0-py3-none-any.whl", hash = "sha256:f20eaadfdb517eaca5ce077640cb261c7d2698385a6a0f072a4a5447fd49fa08"},
{file = "Send2Trash-1.8.0.tar.gz", hash = "sha256:d2c24762fd3759860a0aff155e45871447ea58d2be6bdd39b5c8f966a0c99c2d"},
]
setuptools-scm = [
{file = "setuptools_scm-6.4.2-py3-none-any.whl", hash = "sha256:acea13255093849de7ccb11af9e1fb8bde7067783450cee9ef7a93139bddf6d4"},
{file = "setuptools_scm-6.4.2.tar.gz", hash = "sha256:6833ac65c6ed9711a4d5d2266f8024cfa07c533a0e55f4c12f6eff280a5a9e30"},
]
shap = [
{file = "shap-0.39.0-cp36-cp36m-win_amd64.whl", hash = "sha256:bf9af9b089ef95cb1ac0df80a43f8144aa9095d10f282cb5c19643ff88a6a79d"},
{file = "shap-0.39.0-cp37-cp37m-win_amd64.whl", hash = "sha256:b44f9fbb7349f5406b98b4ec24c672f8fe932606bb7574a8aae2238410c55289"},
{file = "shap-0.39.0-cp38-cp38-win_amd64.whl", hash = "sha256:c0d51b44c15eae1c12e51ed498f898cfc5e12d6be7e0d4f733ce6453f6ec85a4"},
{file = "shap-0.39.0.tar.gz", hash = "sha256:0196a6c12cc98f8b48ce9c5968550902432b80290da6fa7be8655441a1c6251a"},
]
six = [
{file = "six-1.16.0-py2.py3-none-any.whl", hash = "sha256:8abb2f1d86890a2dfb989f9a77cfcfd3e47c2a354b01111771326f8aa26e0254"},
{file = "six-1.16.0.tar.gz", hash = "sha256:1e61c37477a1626458e36f7b1d82aa5c9b094fa4802892072e49de9c60c4c926"},
]
slicer = [
{file = "slicer-0.0.7-py3-none-any.whl", hash = "sha256:0b94faa5251c0f23782c03f7b7eedda91d80144059645f452c4bc80fab875976"},
{file = "slicer-0.0.7.tar.gz", hash = "sha256:f5d5f7b45f98d155b9c0ba6554fa9770c6b26d5793a3e77a1030fb56910ebeec"},
]
snowballstemmer = [
{file = "snowballstemmer-2.2.0-py2.py3-none-any.whl", hash = "sha256:c8e1716e83cc398ae16824e5572ae04e0d9fc2c6b985fb0f900f5f0c96ecba1a"},
{file = "snowballstemmer-2.2.0.tar.gz", hash = "sha256:09b16deb8547d3412ad7b590689584cd0fe25ec8db3be37788be3810cbf19cb1"},
]
soupsieve = [
{file = "soupsieve-2.3.2.post1-py3-none-any.whl", hash = "sha256:3b2503d3c7084a42b1ebd08116e5f81aadfaea95863628c80a3b774a11b7c759"},
{file = "soupsieve-2.3.2.post1.tar.gz", hash = "sha256:fc53893b3da2c33de295667a0e19f078c14bf86544af307354de5fcf12a3f30d"},
]
sparse = [
{file = "sparse-0.13.0-py2.py3-none-any.whl", hash = "sha256:95ed0b649a0663b1488756ad4cf242b0a9bb2c9a25bc752a7c6ca9fbe8258966"},
{file = "sparse-0.13.0.tar.gz", hash = "sha256:685dc994aa770ee1b23f2d5392819c8429f27958771f8dceb2c4fb80210d5915"},
]
sphinx = [
{file = "Sphinx-5.1.1-py3-none-any.whl", hash = "sha256:309a8da80cb6da9f4713438e5b55861877d5d7976b69d87e336733637ea12693"},
{file = "Sphinx-5.1.1.tar.gz", hash = "sha256:ba3224a4e206e1fbdecf98a4fae4992ef9b24b85ebf7b584bb340156eaf08d89"},
]
sphinx-multiversion = []
sphinx-rtd-theme = [
{file = "sphinx_rtd_theme-1.0.0-py2.py3-none-any.whl", hash = "sha256:4d35a56f4508cfee4c4fb604373ede6feae2a306731d533f409ef5c3496fdbd8"},
{file = "sphinx_rtd_theme-1.0.0.tar.gz", hash = "sha256:eec6d497e4c2195fa0e8b2016b337532b8a699a68bcb22a512870e16925c6a5c"},
]
sphinxcontrib-applehelp = [
{file = "sphinxcontrib-applehelp-1.0.2.tar.gz", hash = "sha256:a072735ec80e7675e3f432fcae8610ecf509c5f1869d17e2eecff44389cdbc58"},
{file = "sphinxcontrib_applehelp-1.0.2-py2.py3-none-any.whl", hash = "sha256:806111e5e962be97c29ec4c1e7fe277bfd19e9652fb1a4392105b43e01af885a"},
]
sphinxcontrib-devhelp = [
{file = "sphinxcontrib-devhelp-1.0.2.tar.gz", hash = "sha256:ff7f1afa7b9642e7060379360a67e9c41e8f3121f2ce9164266f61b9f4b338e4"},
{file = "sphinxcontrib_devhelp-1.0.2-py2.py3-none-any.whl", hash = "sha256:8165223f9a335cc1af7ffe1ed31d2871f325254c0423bc0c4c7cd1c1e4734a2e"},
]
sphinxcontrib-googleanalytics = []
sphinxcontrib-htmlhelp = [
{file = "sphinxcontrib-htmlhelp-2.0.0.tar.gz", hash = "sha256:f5f8bb2d0d629f398bf47d0d69c07bc13b65f75a81ad9e2f71a63d4b7a2f6db2"},
{file = "sphinxcontrib_htmlhelp-2.0.0-py2.py3-none-any.whl", hash = "sha256:d412243dfb797ae3ec2b59eca0e52dac12e75a241bf0e4eb861e450d06c6ed07"},
]
sphinxcontrib-jsmath = [
{file = "sphinxcontrib-jsmath-1.0.1.tar.gz", hash = "sha256:a9925e4a4587247ed2191a22df5f6970656cb8ca2bd6284309578f2153e0c4b8"},
{file = "sphinxcontrib_jsmath-1.0.1-py2.py3-none-any.whl", hash = "sha256:2ec2eaebfb78f3f2078e73666b1415417a116cc848b72e5172e596c871103178"},
]
sphinxcontrib-qthelp = [
{file = "sphinxcontrib-qthelp-1.0.3.tar.gz", hash = "sha256:4c33767ee058b70dba89a6fc5c1892c0d57a54be67ddd3e7875a18d14cba5a72"},
{file = "sphinxcontrib_qthelp-1.0.3-py2.py3-none-any.whl", hash = "sha256:bd9fc24bcb748a8d51fd4ecaade681350aa63009a347a8c14e637895444dfab6"},
]
sphinxcontrib-serializinghtml = [
{file = "sphinxcontrib-serializinghtml-1.1.5.tar.gz", hash = "sha256:aa5f6de5dfdf809ef505c4895e51ef5c9eac17d0f287933eb49ec495280b6952"},
{file = "sphinxcontrib_serializinghtml-1.1.5-py2.py3-none-any.whl", hash = "sha256:352a9a00ae864471d3a7ead8d7d79f5fc0b57e8b3f95e9867eb9eb28999b92fd"},
]
stack-data = [
{file = "stack_data-0.4.0-py3-none-any.whl", hash = "sha256:b94fed36d725cfabc6d09ed5886913e35eed9009766a1af1d5941b9da3a94aaa"},
{file = "stack_data-0.4.0.tar.gz", hash = "sha256:a90ae7e260f7d15aefeceb46f0a028d4ccb9eb8856475c53e341945342d41ea7"},
]
statsmodels = [
{file = "statsmodels-0.13.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:3e7ca5b7e678c0bb7a24f5c735d58ac104a50eb61b17c484cce0e221a095560f"},
{file = "statsmodels-0.13.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:066a75d5585378b2df972f81a90b9a3da5e567b7d4833300c1597438c1a35e29"},
{file = "statsmodels-0.13.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f15f38dfc9c5c091662cb619e12322047368c67aef449c7554d9b324a15f7a94"},
{file = "statsmodels-0.13.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5c4ccc6b4744613367e8a233bd952c8a838db8f528f9fe033bda25aa13fc7d08"},
{file = "statsmodels-0.13.2-cp310-cp310-win_amd64.whl", hash = "sha256:855b1cc2a91ab140b9bcf304b1731705805ce73223bf500b988804968554c0ed"},
{file = "statsmodels-0.13.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:b69c9af7606325095f7c40c581957bad9f28775653d41537c1ec4cd1b185ff5b"},
{file = "statsmodels-0.13.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ab31bac0f72b83bca1f217a12ec6f309a56485a50c4a705fbdd63112213d4da4"},
{file = "statsmodels-0.13.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d680b910b57fc0aa87472662cdfe09aae0e21db4bdf19ccd6420fd4dffda892"},
{file = "statsmodels-0.13.2-cp37-cp37m-win32.whl", hash = "sha256:9e9a3f661d372431850d55157d049e079493c97fc06f550d23d8c8c70805cc48"},
{file = "statsmodels-0.13.2-cp37-cp37m-win_amd64.whl", hash = "sha256:c9f6326870c095ef688f072cd476b932aff0906d60193eaa08e93ec23b29ca83"},
{file = "statsmodels-0.13.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:5bc050f25f1ba1221efef9ea01b751c60935ad787fcd4259f4ece986f2da9141"},
{file = "statsmodels-0.13.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:426b1c8ea3918d3d27dbfa38f2bee36cabf41d32163e2cbb3adfb0178b24626a"},
{file = "statsmodels-0.13.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:45b80fac4a63308b1e93fa9dc27a8598930fd5dfd77c850ca077bb850254c6d7"},
{file = "statsmodels-0.13.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:78ee69ec0e0f79f627245c65f8a495b8581c2ea19084aac63941815feb15dcf3"},
{file = "statsmodels-0.13.2-cp38-cp38-win32.whl", hash = "sha256:20483cc30e11aa072b30d307bb80470f86a23ae8fffa51439ca54509d7aa9b05"},
{file = "statsmodels-0.13.2-cp38-cp38-win_amd64.whl", hash = "sha256:bf43051a92231ccb9de95e4b6d22d3b15e499ee5ee9bff0a20e6b6ad293e34cb"},
{file = "statsmodels-0.13.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6bf0dfed5f5edb59b5922b295392cd276463b10a5e730f7e57ee4ff2d8e9a87e"},
{file = "statsmodels-0.13.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:a403b559c5586dab7ac0fc9e754c737b017c96cce0ddd66ff9094764cdaf293d"},
{file = "statsmodels-0.13.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9f23554dd025ea354ce072ba32bfaa840d2b856372e5734290e181d27a1f9e0c"},
{file = "statsmodels-0.13.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:815f4df713e3eb6f40ae175c71f2a70d32f9219b5b4d23d4e0faab1171ba93ba"},
{file = "statsmodels-0.13.2-cp39-cp39-win32.whl", hash = "sha256:461c82ab2265fa8457b96afc23ef3ca19f42eb070436e0241b57e58a38863901"},
{file = "statsmodels-0.13.2-cp39-cp39-win_amd64.whl", hash = "sha256:39daab5a8a9332c8ea83d6464d065080c9ba65f236daf6a64aa18f64ef776fad"},
{file = "statsmodels-0.13.2.tar.gz", hash = "sha256:77dc292c9939c036a476f1770f9d08976b05437daa229928da73231147cde7d4"},
]
sympy = [
{file = "sympy-1.10.1-py3-none-any.whl", hash = "sha256:df75d738930f6fe9ebe7034e59d56698f29e85f443f743e51e47df0caccc2130"},
{file = "sympy-1.10.1.tar.gz", hash = "sha256:5939eeffdf9e152172601463626c022a2c27e75cf6278de8d401d50c9d58787b"},
]
tensorboard = [
{file = "tensorboard-2.9.1-py3-none-any.whl", hash = "sha256:baa727f791776f9e5841d347127720ceed4bbd59c36b40604b95fb2ae6029276"},
]
tensorboard-data-server = [
{file = "tensorboard_data_server-0.6.1-py3-none-any.whl", hash = "sha256:809fe9887682d35c1f7d1f54f0f40f98bb1f771b14265b453ca051e2ce58fca7"},
{file = "tensorboard_data_server-0.6.1-py3-none-macosx_10_9_x86_64.whl", hash = "sha256:fa8cef9be4fcae2f2363c88176638baf2da19c5ec90addb49b1cde05c95c88ee"},
{file = "tensorboard_data_server-0.6.1-py3-none-manylinux2010_x86_64.whl", hash = "sha256:d8237580755e58eff68d1f3abefb5b1e39ae5c8b127cc40920f9c4fb33f4b98a"},
]
tensorboard-plugin-wit = [
{file = "tensorboard_plugin_wit-1.8.1-py3-none-any.whl", hash = "sha256:ff26bdd583d155aa951ee3b152b3d0cffae8005dc697f72b44a8e8c2a77a8cbe"},
]
tensorflow = [
{file = "tensorflow-2.9.1-cp310-cp310-macosx_10_14_x86_64.whl", hash = "sha256:2c77edeb4a9d542032ddac93f732e3dd3d1c4d15c8795e4e43166ed46503582d"},
{file = "tensorflow-2.9.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:add2f4e55fd7209a5cb01d815364adc9827184d74a83c2f1616c72be2e85c8b7"},
{file = "tensorflow-2.9.1-cp310-cp310-win_amd64.whl", hash = "sha256:8be7d7f6091ac5af09c19eb178bbbd51346794684851a7bd57826d2d1cfa220e"},
{file = "tensorflow-2.9.1-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:b1f7674220b19ab5d39d9313f9c60278cef56052614600cba192700c658e502c"},
{file = "tensorflow-2.9.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:488a7a964298b92278cca593b7f687abd1c8f51982a08dc6ded5efd9457bf477"},
{file = "tensorflow-2.9.1-cp37-cp37m-win_amd64.whl", hash = "sha256:abbdcfa7c14974f9bafa8f51214c037639a8991eef3dbcafc506e608a673c54c"},
{file = "tensorflow-2.9.1-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:df048d216f90c9fcbda22bdd03420b0e0e6e7ee98b8b91f221afef46b7664635"},
{file = "tensorflow-2.9.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:adc1b80ec32337ba1720fadbf64242aae06a7ee1c7aea6a1a0cffe1968c89b25"},
{file = "tensorflow-2.9.1-cp38-cp38-win_amd64.whl", hash = "sha256:a27bffb91f0aef6ba2a30f6507932d9c6801af56cfee8442767f9f3f69bb37a6"},
{file = "tensorflow-2.9.1-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:81ad045d39cf4360d5e6183cdda0990661302cca502917a12e54177989b340b9"},
{file = "tensorflow-2.9.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:212a4d97fb6dedff257703d7a2ce50b0b8ec23d955938083363aa7bc526918cc"},
{file = "tensorflow-2.9.1-cp39-cp39-win_amd64.whl", hash = "sha256:1c2c6557f6bd7e0d3f1063e27346e4f1c4353db2b7751ac7451033b335fa4b7c"},
]
tensorflow-estimator = [
{file = "tensorflow_estimator-2.9.0-py2.py3-none-any.whl", hash = "sha256:e9762bb302f51bc1eb2f35d19f0190a6a2d809d754d5def788c4328fe3746744"},
]
tensorflow-io-gcs-filesystem = [
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp310-cp310-macosx_10_14_x86_64.whl", hash = "sha256:4222a9d0c0ddeca2fd2bfd70f5ed149346f5ba12ffe65d817d8e18393341d8e2"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:5457eeef1f0f5f294225808b2290a251a2e4639ec66db9d32aa4ae62e807d7e8"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4c71cebb26ce10e6e48dc46e6fc0acef5329b01f75a5e76c7defb77175bf97f7"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp310-cp310-win_amd64.whl", hash = "sha256:1c165595c7a67668b44c7ffb9746ffb351c630940d9cca7f2b31f8adf7a36b94"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:c20e1f95b904f43ac86fdb251f222be2c3e7026e9ddbde2a3b6a456f26a83944"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:1cccdc12ec304a7ab3e6f85919ba5a77c2bf751b3d0f9e62196ee7df11a8136a"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:94645cac4449dd2ccc40327c23d0256cf4e96597e5a55116a91076e9dc96023e"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp37-cp37m-win_amd64.whl", hash = "sha256:ce0d7eaaebfcb5fdcff161af0e8a4b94d5dc346299111c08373d66058011a16d"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:e9569dadd79b2d4b28dbe5be47c378a884414a85c89eaeae6115bcba4f3cbb96"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:84a463e228cde296fc63672902a2eceac9fec5f8ae7605e9f18824db591e7f5c"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:531214e48ef64a96f565550b283e75cf0119abff14048a11a25453b47ec5b61c"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp38-cp38-win_amd64.whl", hash = "sha256:44b28c9c6a9e25774a53ec2e85ed4d0b5c4db3a7d3a4011ade94fa9ee636393c"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:09f9df13737e2b4d92b73653509281d77732ef9a90a1ebef824511ce5431eb0a"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:c71438e6459f52462b95f98ab17b20cd1a269a1efe837e4df426a0b79359f3b7"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bd285595afe03740553710ccdbd1397d69a8e48d758c731c0de1f1c5a71a9fe5"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp39-cp39-win_amd64.whl", hash = "sha256:2940b4ab6848ef5ec34dc3c140b5ae9eba0da13453da839c30ebe3461a6eb51d"},
]
termcolor = [
{file = "termcolor-1.1.0.tar.gz", hash = "sha256:1d6d69ce66211143803fbc56652b41d73b4a400a2891d7bf7a1cdf4c02de613b"},
]
terminado = [
{file = "terminado-0.15.0-py3-none-any.whl", hash = "sha256:0d5f126fbfdb5887b25ae7d9d07b0d716b1cc0ccaacc71c1f3c14d228e065197"},
{file = "terminado-0.15.0.tar.gz", hash = "sha256:ab4eeedccfcc1e6134bfee86106af90852c69d602884ea3a1e8ca6d4486e9bfe"},
]
threadpoolctl = [
{file = "threadpoolctl-3.1.0-py3-none-any.whl", hash = "sha256:8b99adda265feb6773280df41eece7b2e6561b772d21ffd52e372f999024907b"},
{file = "threadpoolctl-3.1.0.tar.gz", hash = "sha256:a335baacfaa4400ae1f0d8e3a58d6674d2f8828e3716bb2802c44955ad391380"},
]
tinycss2 = [
{file = "tinycss2-1.1.1-py3-none-any.whl", hash = "sha256:fe794ceaadfe3cf3e686b22155d0da5780dd0e273471a51846d0a02bc204fec8"},
{file = "tinycss2-1.1.1.tar.gz", hash = "sha256:b2e44dd8883c360c35dd0d1b5aad0b610e5156c2cb3b33434634e539ead9d8bf"},
]
tokenize-rt = [
{file = "tokenize_rt-4.2.1-py2.py3-none-any.whl", hash = "sha256:08a27fa032a81cf45e8858d0ac706004fcd523e8463415ddf1442be38e204ea8"},
{file = "tokenize_rt-4.2.1.tar.gz", hash = "sha256:0d4f69026fed520f8a1e0103aa36c406ef4661417f20ca643f913e33531b3b94"},
]
tomli = [
{file = "tomli-2.0.1-py3-none-any.whl", hash = "sha256:939de3e7a6161af0c887ef91b7d41a53e7c5a1ca976325f429cb46ea9bc30ecc"},
{file = "tomli-2.0.1.tar.gz", hash = "sha256:de526c12914f0c550d15924c62d72abc48d6fe7364aa87328337a31007fe8a4f"},
]
torch = [
{file = "torch-1.12.1-cp310-cp310-manylinux1_x86_64.whl", hash = "sha256:9c038662db894a23e49e385df13d47b2a777ffd56d9bcd5b832593fab0a7e286"},
{file = "torch-1.12.1-cp310-cp310-manylinux2014_aarch64.whl", hash = "sha256:4e1b9c14cf13fd2ab8d769529050629a0e68a6fc5cb8e84b4a3cc1dd8c4fe541"},
{file = "torch-1.12.1-cp310-cp310-win_amd64.whl", hash = "sha256:e9c8f4a311ac29fc7e8e955cfb7733deb5dbe1bdaabf5d4af2765695824b7e0d"},
{file = "torch-1.12.1-cp310-none-macosx_10_9_x86_64.whl", hash = "sha256:976c3f997cea38ee91a0dd3c3a42322785414748d1761ef926b789dfa97c6134"},
{file = "torch-1.12.1-cp310-none-macosx_11_0_arm64.whl", hash = "sha256:68104e4715a55c4bb29a85c6a8d57d820e0757da363be1ba680fa8cc5be17b52"},
{file = "torch-1.12.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:743784ccea0dc8f2a3fe6a536bec8c4763bd82c1352f314937cb4008d4805de1"},
{file = "torch-1.12.1-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:b5dbcca369800ce99ba7ae6dee3466607a66958afca3b740690d88168752abcf"},
{file = "torch-1.12.1-cp37-cp37m-win_amd64.whl", hash = "sha256:f3b52a634e62821e747e872084ab32fbcb01b7fa7dbb7471b6218279f02a178a"},
{file = "torch-1.12.1-cp37-none-macosx_10_9_x86_64.whl", hash = "sha256:8a34a2fbbaa07c921e1b203f59d3d6e00ed379f2b384445773bd14e328a5b6c8"},
{file = "torch-1.12.1-cp37-none-macosx_11_0_arm64.whl", hash = "sha256:42f639501928caabb9d1d55ddd17f07cd694de146686c24489ab8c615c2871f2"},
{file = "torch-1.12.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:0b44601ec56f7dd44ad8afc00846051162ef9c26a8579dda0a02194327f2d55e"},
{file = "torch-1.12.1-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:cd26d8c5640c3a28c526d41ccdca14cf1cbca0d0f2e14e8263a7ac17194ab1d2"},
{file = "torch-1.12.1-cp38-cp38-win_amd64.whl", hash = "sha256:42e115dab26f60c29e298559dbec88444175528b729ae994ec4c65d56fe267dd"},
{file = "torch-1.12.1-cp38-none-macosx_10_9_x86_64.whl", hash = "sha256:a8320ba9ad87e80ca5a6a016e46ada4d1ba0c54626e135d99b2129a4541c509d"},
{file = "torch-1.12.1-cp38-none-macosx_11_0_arm64.whl", hash = "sha256:03e31c37711db2cd201e02de5826de875529e45a55631d317aadce2f1ed45aa8"},
{file = "torch-1.12.1-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:9b356aea223772cd754edb4d9ecf2a025909b8615a7668ac7d5130f86e7ec421"},
{file = "torch-1.12.1-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:6cf6f54b43c0c30335428195589bd00e764a6d27f3b9ba637aaa8c11aaf93073"},
{file = "torch-1.12.1-cp39-cp39-win_amd64.whl", hash = "sha256:f00c721f489089dc6364a01fd84906348fe02243d0af737f944fddb36003400d"},
{file = "torch-1.12.1-cp39-none-macosx_10_9_x86_64.whl", hash = "sha256:bfec2843daa654f04fda23ba823af03e7b6f7650a873cdb726752d0e3718dada"},
{file = "torch-1.12.1-cp39-none-macosx_11_0_arm64.whl", hash = "sha256:69fe2cae7c39ccadd65a123793d30e0db881f1c1927945519c5c17323131437e"},
]
tornado = [
{file = "tornado-6.2-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:20f638fd8cc85f3cbae3c732326e96addff0a15e22d80f049e00121651e82e72"},
{file = "tornado-6.2-cp37-abi3-macosx_10_9_x86_64.whl", hash = "sha256:87dcafae3e884462f90c90ecc200defe5e580a7fbbb4365eda7c7c1eb809ebc9"},
{file = "tornado-6.2-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ba09ef14ca9893954244fd872798b4ccb2367c165946ce2dd7376aebdde8e3ac"},
{file = "tornado-6.2-cp37-abi3-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b8150f721c101abdef99073bf66d3903e292d851bee51910839831caba341a75"},
{file = "tornado-6.2-cp37-abi3-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d3a2f5999215a3a06a4fc218026cd84c61b8b2b40ac5296a6db1f1451ef04c1e"},
{file = "tornado-6.2-cp37-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:5f8c52d219d4995388119af7ccaa0bcec289535747620116a58d830e7c25d8a8"},
{file = "tornado-6.2-cp37-abi3-musllinux_1_1_i686.whl", hash = "sha256:6fdfabffd8dfcb6cf887428849d30cf19a3ea34c2c248461e1f7d718ad30b66b"},
{file = "tornado-6.2-cp37-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:1d54d13ab8414ed44de07efecb97d4ef7c39f7438cf5e976ccd356bebb1b5fca"},
{file = "tornado-6.2-cp37-abi3-win32.whl", hash = "sha256:5c87076709343557ef8032934ce5f637dbb552efa7b21d08e89ae7619ed0eb23"},
{file = "tornado-6.2-cp37-abi3-win_amd64.whl", hash = "sha256:e5f923aa6a47e133d1cf87d60700889d7eae68988704e20c75fb2d65677a8e4b"},
{file = "tornado-6.2.tar.gz", hash = "sha256:9b630419bde84ec666bfd7ea0a4cb2a8a651c2d5cccdbdd1972a0c859dfc3c13"},
]
tqdm = [
{file = "tqdm-4.64.0-py2.py3-none-any.whl", hash = "sha256:74a2cdefe14d11442cedf3ba4e21a3b84ff9a2dbdc6cfae2c34addb2a14a5ea6"},
{file = "tqdm-4.64.0.tar.gz", hash = "sha256:40be55d30e200777a307a7585aee69e4eabb46b4ec6a4b4a5f2d9f11e7d5408d"},
]
traitlets = [
{file = "traitlets-5.3.0-py3-none-any.whl", hash = "sha256:65fa18961659635933100db8ca120ef6220555286949774b9cfc106f941d1c7a"},
{file = "traitlets-5.3.0.tar.gz", hash = "sha256:0bb9f1f9f017aa8ec187d8b1b2a7a6626a2a1d877116baba52a129bfa124f8e2"},
]
twine = [
{file = "twine-4.0.1-py3-none-any.whl", hash = "sha256:42026c18e394eac3e06693ee52010baa5313e4811d5a11050e7d48436cf41b9e"},
{file = "twine-4.0.1.tar.gz", hash = "sha256:96b1cf12f7ae611a4a40b6ae8e9570215daff0611828f5fe1f37a16255ab24a0"},
]
typing-extensions = [
{file = "typing_extensions-4.3.0-py3-none-any.whl", hash = "sha256:25642c956049920a5aa49edcdd6ab1e06d7e5d467fc00e0506c44ac86fbfca02"},
{file = "typing_extensions-4.3.0.tar.gz", hash = "sha256:e6d2677a32f47fc7eb2795db1dd15c1f34eff616bcaf2cfb5e997f854fa1c4a6"},
]
tzdata = [
{file = "tzdata-2022.2-py2.py3-none-any.whl", hash = "sha256:c3119520447d68ef3eb8187a55a4f44fa455f30eb1b4238fa5691ba094f2b05b"},
{file = "tzdata-2022.2.tar.gz", hash = "sha256:21f4f0d7241572efa7f7a4fdabb052e61b55dc48274e6842697ccdf5253e5451"},
]
tzlocal = [
{file = "tzlocal-4.2-py3-none-any.whl", hash = "sha256:89885494684c929d9191c57aa27502afc87a579be5cdd3225c77c463ea043745"},
{file = "tzlocal-4.2.tar.gz", hash = "sha256:ee5842fa3a795f023514ac2d801c4a81d1743bbe642e3940143326b3a00addd7"},
]
urllib3 = [
{file = "urllib3-1.26.11-py2.py3-none-any.whl", hash = "sha256:c33ccba33c819596124764c23a97d25f32b28433ba0dedeb77d873a38722c9bc"},
{file = "urllib3-1.26.11.tar.gz", hash = "sha256:ea6e8fb210b19d950fab93b60c9009226c63a28808bc8386e05301e25883ac0a"},
]
wcwidth = [
{file = "wcwidth-0.2.5-py2.py3-none-any.whl", hash = "sha256:beb4802a9cebb9144e99086eff703a642a13d6a0052920003a230f3294bbe784"},
{file = "wcwidth-0.2.5.tar.gz", hash = "sha256:c4d647b99872929fdb7bdcaa4fbe7f01413ed3d98077df798530e5b04f116c83"},
]
webencodings = [
{file = "webencodings-0.5.1-py2.py3-none-any.whl", hash = "sha256:a0af1213f3c2226497a97e2b3aa01a7e4bee4f403f95be16fc9acd2947514a78"},
{file = "webencodings-0.5.1.tar.gz", hash = "sha256:b36a1c245f2d304965eb4e0a82848379241dc04b865afcc4aab16748587e1923"},
]
werkzeug = [
{file = "Werkzeug-2.2.2-py3-none-any.whl", hash = "sha256:f979ab81f58d7318e064e99c4506445d60135ac5cd2e177a2de0089bfd4c9bd5"},
{file = "Werkzeug-2.2.2.tar.gz", hash = "sha256:7ea2d48322cc7c0f8b3a215ed73eabd7b5d75d0b50e31ab006286ccff9e00b8f"},
]
widgetsnbextension = [
{file = "widgetsnbextension-4.0.2-py3-none-any.whl", hash = "sha256:966bd61443926b6adcc0abef9f499c48bdeda181c333b0f49842d7385d440579"},
{file = "widgetsnbextension-4.0.2.tar.gz", hash = "sha256:07f0e8582f920b24316cef16490f1aeb498f2c875d48980540e5c5dbf0ff5e2d"},
]
wrapt = [
{file = "wrapt-1.14.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:1b376b3f4896e7930f1f772ac4b064ac12598d1c38d04907e696cc4d794b43d3"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:903500616422a40a98a5a3c4ff4ed9d0066f3b4c951fa286018ecdf0750194ef"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:5a9a0d155deafd9448baff28c08e150d9b24ff010e899311ddd63c45c2445e28"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:ddaea91abf8b0d13443f6dac52e89051a5063c7d014710dcb4d4abb2ff811a59"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:36f582d0c6bc99d5f39cd3ac2a9062e57f3cf606ade29a0a0d6b323462f4dd87"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:7ef58fb89674095bfc57c4069e95d7a31cfdc0939e2a579882ac7d55aadfd2a1"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:e2f83e18fe2f4c9e7db597e988f72712c0c3676d337d8b101f6758107c42425b"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:ee2b1b1769f6707a8a445162ea16dddf74285c3964f605877a20e38545c3c462"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:833b58d5d0b7e5b9832869f039203389ac7cbf01765639c7309fd50ef619e0b1"},
{file = "wrapt-1.14.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:80bb5c256f1415f747011dc3604b59bc1f91c6e7150bd7db03b19170ee06b320"},
{file = "wrapt-1.14.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:07f7a7d0f388028b2df1d916e94bbb40624c59b48ecc6cbc232546706fac74c2"},
{file = "wrapt-1.14.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:02b41b633c6261feff8ddd8d11c711df6842aba629fdd3da10249a53211a72c4"},
{file = "wrapt-1.14.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2fe803deacd09a233e4762a1adcea5db5d31e6be577a43352936179d14d90069"},
{file = "wrapt-1.14.1-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:257fd78c513e0fb5cdbe058c27a0624c9884e735bbd131935fd49e9fe719d310"},
{file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:4fcc4649dc762cddacd193e6b55bc02edca674067f5f98166d7713b193932b7f"},
{file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:11871514607b15cfeb87c547a49bca19fde402f32e2b1c24a632506c0a756656"},
{file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:8ad85f7f4e20964db4daadcab70b47ab05c7c1cf2a7c1e51087bfaa83831854c"},
{file = "wrapt-1.14.1-cp310-cp310-win32.whl", hash = "sha256:a9a52172be0b5aae932bef82a79ec0a0ce87288c7d132946d645eba03f0ad8a8"},
{file = "wrapt-1.14.1-cp310-cp310-win_amd64.whl", hash = "sha256:6d323e1554b3d22cfc03cd3243b5bb815a51f5249fdcbb86fda4bf62bab9e164"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:43ca3bbbe97af00f49efb06e352eae40434ca9d915906f77def219b88e85d907"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:6b1a564e6cb69922c7fe3a678b9f9a3c54e72b469875aa8018f18b4d1dd1adf3"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux2010_i686.whl", hash = "sha256:00b6d4ea20a906c0ca56d84f93065b398ab74b927a7a3dbd470f6fc503f95dc3"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux2010_x86_64.whl", hash = "sha256:a85d2b46be66a71bedde836d9e41859879cc54a2a04fad1191eb50c2066f6e9d"},
{file = "wrapt-1.14.1-cp35-cp35m-win32.whl", hash = "sha256:dbcda74c67263139358f4d188ae5faae95c30929281bc6866d00573783c422b7"},
{file = "wrapt-1.14.1-cp35-cp35m-win_amd64.whl", hash = "sha256:b21bb4c09ffabfa0e85e3a6b623e19b80e7acd709b9f91452b8297ace2a8ab00"},
{file = "wrapt-1.14.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:9e0fd32e0148dd5dea6af5fee42beb949098564cc23211a88d799e434255a1f4"},
{file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9736af4641846491aedb3c3f56b9bc5568d92b0692303b5a305301a95dfd38b1"},
{file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5b02d65b9ccf0ef6c34cba6cf5bf2aab1bb2f49c6090bafeecc9cd81ad4ea1c1"},
{file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:21ac0156c4b089b330b7666db40feee30a5d52634cc4560e1905d6529a3897ff"},
{file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:9f3e6f9e05148ff90002b884fbc2a86bd303ae847e472f44ecc06c2cd2fcdb2d"},
{file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:6e743de5e9c3d1b7185870f480587b75b1cb604832e380d64f9504a0535912d1"},
{file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:d79d7d5dc8a32b7093e81e97dad755127ff77bcc899e845f41bf71747af0c569"},
{file = "wrapt-1.14.1-cp36-cp36m-win32.whl", hash = "sha256:81b19725065dcb43df02b37e03278c011a09e49757287dca60c5aecdd5a0b8ed"},
{file = "wrapt-1.14.1-cp36-cp36m-win_amd64.whl", hash = "sha256:b014c23646a467558be7da3d6b9fa409b2c567d2110599b7cf9a0c5992b3b471"},
{file = "wrapt-1.14.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:88bd7b6bd70a5b6803c1abf6bca012f7ed963e58c68d76ee20b9d751c74a3248"},
{file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b5901a312f4d14c59918c221323068fad0540e34324925c8475263841dbdfe68"},
{file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d77c85fedff92cf788face9bfa3ebaa364448ebb1d765302e9af11bf449ca36d"},
{file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8d649d616e5c6a678b26d15ece345354f7c2286acd6db868e65fcc5ff7c24a77"},
{file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:7d2872609603cb35ca513d7404a94d6d608fc13211563571117046c9d2bcc3d7"},
{file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:ee6acae74a2b91865910eef5e7de37dc6895ad96fa23603d1d27ea69df545015"},
{file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:2b39d38039a1fdad98c87279b48bc5dce2c0ca0d73483b12cb72aa9609278e8a"},
{file = "wrapt-1.14.1-cp37-cp37m-win32.whl", hash = "sha256:60db23fa423575eeb65ea430cee741acb7c26a1365d103f7b0f6ec412b893853"},
{file = "wrapt-1.14.1-cp37-cp37m-win_amd64.whl", hash = "sha256:709fe01086a55cf79d20f741f39325018f4df051ef39fe921b1ebe780a66184c"},
{file = "wrapt-1.14.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:8c0ce1e99116d5ab21355d8ebe53d9460366704ea38ae4d9f6933188f327b456"},
{file = "wrapt-1.14.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:e3fb1677c720409d5f671e39bac6c9e0e422584e5f518bfd50aa4cbbea02433f"},
{file = "wrapt-1.14.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:642c2e7a804fcf18c222e1060df25fc210b9c58db7c91416fb055897fc27e8cc"},
{file = "wrapt-1.14.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7b7c050ae976e286906dd3f26009e117eb000fb2cf3533398c5ad9ccc86867b1"},
{file = "wrapt-1.14.1-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ef3f72c9666bba2bab70d2a8b79f2c6d2c1a42a7f7e2b0ec83bb2f9e383950af"},
{file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:01c205616a89d09827986bc4e859bcabd64f5a0662a7fe95e0d359424e0e071b"},
{file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:5a0f54ce2c092aaf439813735584b9537cad479575a09892b8352fea5e988dc0"},
{file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:2cf71233a0ed05ccdabe209c606fe0bac7379fdcf687f39b944420d2a09fdb57"},
{file = "wrapt-1.14.1-cp38-cp38-win32.whl", hash = "sha256:aa31fdcc33fef9eb2552cbcbfee7773d5a6792c137b359e82879c101e98584c5"},
{file = "wrapt-1.14.1-cp38-cp38-win_amd64.whl", hash = "sha256:d1967f46ea8f2db647c786e78d8cc7e4313dbd1b0aca360592d8027b8508e24d"},
{file = "wrapt-1.14.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:3232822c7d98d23895ccc443bbdf57c7412c5a65996c30442ebe6ed3df335383"},
{file = "wrapt-1.14.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:988635d122aaf2bdcef9e795435662bcd65b02f4f4c1ae37fbee7401c440b3a7"},
{file = "wrapt-1.14.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9cca3c2cdadb362116235fdbd411735de4328c61425b0aa9f872fd76d02c4e86"},
{file = "wrapt-1.14.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d52a25136894c63de15a35bc0bdc5adb4b0e173b9c0d07a2be9d3ca64a332735"},
{file = "wrapt-1.14.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:40e7bc81c9e2b2734ea4bc1aceb8a8f0ceaac7c5299bc5d69e37c44d9081d43b"},
{file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:b9b7a708dd92306328117d8c4b62e2194d00c365f18eff11a9b53c6f923b01e3"},
{file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:6a9a25751acb379b466ff6be78a315e2b439d4c94c1e99cb7266d40a537995d3"},
{file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:34aa51c45f28ba7f12accd624225e2b1e5a3a45206aa191f6f9aac931d9d56fe"},
{file = "wrapt-1.14.1-cp39-cp39-win32.whl", hash = "sha256:dee0ce50c6a2dd9056c20db781e9c1cfd33e77d2d569f5d1d9321c641bb903d5"},
{file = "wrapt-1.14.1-cp39-cp39-win_amd64.whl", hash = "sha256:dee60e1de1898bde3b238f18340eec6148986da0455d8ba7848d50470a7a32fb"},
{file = "wrapt-1.14.1.tar.gz", hash = "sha256:380a85cf89e0e69b7cfbe2ea9f765f004ff419f34194018a6827ac0e3edfed4d"},
]
xgboost = [
{file = "xgboost-1.6.1-py3-none-macosx_10_15_x86_64.macosx_11_0_x86_64.macosx_12_0_x86_64.whl", hash = "sha256:2b3d4ee105f8434873b40edc511330b8276bf3a8d9d42fb0319973079df30b07"},
{file = "xgboost-1.6.1-py3-none-macosx_12_0_arm64.whl", hash = "sha256:bd3e59a5490e010004106d8ea1d07aa8e048be51a0974fca6b4f00988f087ab8"},
{file = "xgboost-1.6.1-py3-none-manylinux2014_aarch64.whl", hash = "sha256:bbf16af8bf72e8761fcf69fdb5798bd5add6ecb48049198551b13c1d7abeabb5"},
{file = "xgboost-1.6.1-py3-none-manylinux2014_x86_64.whl", hash = "sha256:6207c77f611b54d9f056edede819ead03f0235615675f88030ff9fe10d359551"},
{file = "xgboost-1.6.1-py3-none-win_amd64.whl", hash = "sha256:3adcb7e4ccf774d5e0128c01e5c381303c3799910ab0f2e996160fe3cd23b7fc"},
{file = "xgboost-1.6.1.tar.gz", hash = "sha256:24072028656f3428e7b8aabf77340ece057f273e41f7f85d67ccaefb7454bb18"},
]
zipp = [
{file = "zipp-3.8.1-py3-none-any.whl", hash = "sha256:47c40d7fe183a6f21403a199b3e4192cca5774656965b0a4988ad2f8feb5f009"},
{file = "zipp-3.8.1.tar.gz", hash = "sha256:05b45f1ee8f807d0cc928485ca40a07cb491cf092ff587c0df9cb1fd154848d2"},
]
| darthtrevino | ead8d47102f0ac6db51d84432874c331fb84f3cb | 2f2463be424093d720ebfa9cc625a06160a44cfe | Some are related to adding the new dependency, but it looks like it runs the solution engine whenever you add a dependency. We have a handful of deps that are tied to `main` branches on github, which will be resolved fresh - and there may be some path-version updates from other dependencies. | darthtrevino | 336 |
py-why/dowhy | 615 | CI: Split tests into groups to speed them up | * Mark some slow tests as advanced
* Split tests into 4 shards in CI | null | 2022-08-25 22:09:18+00:00 | 2022-09-01 22:51:03+00:00 | tests/causal_refuters/test_data_subset_refuter.py | import numpy as np
import pytest
from .base import TestRefuter
@pytest.mark.usefixtures("fixed_seed")
class TestDataSubsetRefuter(object):
@pytest.mark.parametrize(["error_tolerance", "estimator_method"], [(0.01, "iv.instrumental_variable")])
def test_refutation_data_subset_refuter_continuous(self, error_tolerance, estimator_method):
refuter_tester = TestRefuter(error_tolerance, estimator_method, "data_subset_refuter")
refuter_tester.continuous_treatment_testsuite() # Run both
@pytest.mark.parametrize(["error_tolerance", "estimator_method"], [(0.01, "backdoor.propensity_score_matching")])
def test_refutation_data_subset_refuter_binary(self, error_tolerance, estimator_method):
refuter_tester = TestRefuter(error_tolerance, estimator_method, "data_subset_refuter")
refuter_tester.binary_treatment_testsuite(tests_to_run="atleast-one-common-cause")
| import numpy as np
import pytest
from pytest import mark
from .base import TestRefuter
@mark.usefixtures("fixed_seed")
class TestDataSubsetRefuter(object):
@mark.parametrize(["error_tolerance", "estimator_method"], [(0.01, "iv.instrumental_variable")])
def test_refutation_data_subset_refuter_continuous(self, error_tolerance, estimator_method):
refuter_tester = TestRefuter(error_tolerance, estimator_method, "data_subset_refuter")
refuter_tester.continuous_treatment_testsuite() # Run both
@mark.advanced
@mark.parametrize(["error_tolerance", "estimator_method"], [(0.01, "backdoor.propensity_score_matching")])
def test_refutation_data_subset_refuter_binary(self, error_tolerance, estimator_method):
refuter_tester = TestRefuter(error_tolerance, estimator_method, "data_subset_refuter")
refuter_tester.binary_treatment_testsuite(tests_to_run="atleast-one-common-cause")
| darthtrevino | ead8d47102f0ac6db51d84432874c331fb84f3cb | 2f2463be424093d720ebfa9cc625a06160a44cfe | Most of the files use "mark.advanced" rather than pytest.mark.advanced. Is this contextual or would it be better to be consistent? or am I being too pedantic :) | emrekiciman | 337 |
py-why/dowhy | 615 | CI: Split tests into groups to speed them up | * Mark some slow tests as advanced
* Split tests into 4 shards in CI | null | 2022-08-25 22:09:18+00:00 | 2022-09-01 22:51:03+00:00 | tests/causal_refuters/test_data_subset_refuter.py | import numpy as np
import pytest
from .base import TestRefuter
@pytest.mark.usefixtures("fixed_seed")
class TestDataSubsetRefuter(object):
@pytest.mark.parametrize(["error_tolerance", "estimator_method"], [(0.01, "iv.instrumental_variable")])
def test_refutation_data_subset_refuter_continuous(self, error_tolerance, estimator_method):
refuter_tester = TestRefuter(error_tolerance, estimator_method, "data_subset_refuter")
refuter_tester.continuous_treatment_testsuite() # Run both
@pytest.mark.parametrize(["error_tolerance", "estimator_method"], [(0.01, "backdoor.propensity_score_matching")])
def test_refutation_data_subset_refuter_binary(self, error_tolerance, estimator_method):
refuter_tester = TestRefuter(error_tolerance, estimator_method, "data_subset_refuter")
refuter_tester.binary_treatment_testsuite(tests_to_run="atleast-one-common-cause")
| import numpy as np
import pytest
from pytest import mark
from .base import TestRefuter
@mark.usefixtures("fixed_seed")
class TestDataSubsetRefuter(object):
@mark.parametrize(["error_tolerance", "estimator_method"], [(0.01, "iv.instrumental_variable")])
def test_refutation_data_subset_refuter_continuous(self, error_tolerance, estimator_method):
refuter_tester = TestRefuter(error_tolerance, estimator_method, "data_subset_refuter")
refuter_tester.continuous_treatment_testsuite() # Run both
@mark.advanced
@mark.parametrize(["error_tolerance", "estimator_method"], [(0.01, "backdoor.propensity_score_matching")])
def test_refutation_data_subset_refuter_binary(self, error_tolerance, estimator_method):
refuter_tester = TestRefuter(error_tolerance, estimator_method, "data_subset_refuter")
refuter_tester.binary_treatment_testsuite(tests_to_run="atleast-one-common-cause")
| darthtrevino | ead8d47102f0ac6db51d84432874c331fb84f3cb | 2f2463be424093d720ebfa9cc625a06160a44cfe | I've found both in the codebase - basically which one you use depends on whether you have `import pytest` or `from pytest import mark` in your file. I'm fine standardizing on using `@mark.advanced` though. | darthtrevino | 338 |
py-why/dowhy | 615 | CI: Split tests into groups to speed them up | * Mark some slow tests as advanced
* Split tests into 4 shards in CI | null | 2022-08-25 22:09:18+00:00 | 2022-09-01 22:51:03+00:00 | tests/causal_refuters/test_data_subset_refuter.py | import numpy as np
import pytest
from .base import TestRefuter
@pytest.mark.usefixtures("fixed_seed")
class TestDataSubsetRefuter(object):
@pytest.mark.parametrize(["error_tolerance", "estimator_method"], [(0.01, "iv.instrumental_variable")])
def test_refutation_data_subset_refuter_continuous(self, error_tolerance, estimator_method):
refuter_tester = TestRefuter(error_tolerance, estimator_method, "data_subset_refuter")
refuter_tester.continuous_treatment_testsuite() # Run both
@pytest.mark.parametrize(["error_tolerance", "estimator_method"], [(0.01, "backdoor.propensity_score_matching")])
def test_refutation_data_subset_refuter_binary(self, error_tolerance, estimator_method):
refuter_tester = TestRefuter(error_tolerance, estimator_method, "data_subset_refuter")
refuter_tester.binary_treatment_testsuite(tests_to_run="atleast-one-common-cause")
| import numpy as np
import pytest
from pytest import mark
from .base import TestRefuter
@mark.usefixtures("fixed_seed")
class TestDataSubsetRefuter(object):
@mark.parametrize(["error_tolerance", "estimator_method"], [(0.01, "iv.instrumental_variable")])
def test_refutation_data_subset_refuter_continuous(self, error_tolerance, estimator_method):
refuter_tester = TestRefuter(error_tolerance, estimator_method, "data_subset_refuter")
refuter_tester.continuous_treatment_testsuite() # Run both
@mark.advanced
@mark.parametrize(["error_tolerance", "estimator_method"], [(0.01, "backdoor.propensity_score_matching")])
def test_refutation_data_subset_refuter_binary(self, error_tolerance, estimator_method):
refuter_tester = TestRefuter(error_tolerance, estimator_method, "data_subset_refuter")
refuter_tester.binary_treatment_testsuite(tests_to_run="atleast-one-common-cause")
| darthtrevino | ead8d47102f0ac6db51d84432874c331fb84f3cb | 2f2463be424093d720ebfa9cc625a06160a44cfe | Ok, let's standardize on mark.advanced then. | emrekiciman | 339 |
py-why/dowhy | 602 | Add dependency to resolve security alert | https://github.com/py-why/dowhy/security/dependabot/1
* Updates nbconvert to 7.0rc3 & mistune to a safe version
* Move documentation-generation dependencies into devDependencies area
* Run `poetry update` for dependency refresh | null | 2022-08-13 01:40:09+00:00 | 2022-08-19 21:01:53+00:00 | poetry.lock | [[package]]
name = "absl-py"
version = "1.2.0"
description = "Abseil Python Common Libraries, see https://github.com/abseil/abseil-py."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "alabaster"
version = "0.7.12"
description = "A configurable sidebar-enabled Sphinx theme"
category = "main"
optional = true
python-versions = "*"
[[package]]
name = "appnope"
version = "0.1.3"
description = "Disable App Nap on macOS >= 10.9"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "argon2-cffi"
version = "21.3.0"
description = "The secure Argon2 password hashing algorithm."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
argon2-cffi-bindings = "*"
[package.extras]
dev = ["pre-commit", "cogapp", "tomli", "coverage[toml] (>=5.0.2)", "hypothesis", "pytest", "sphinx", "sphinx-notfound-page", "furo"]
docs = ["sphinx", "sphinx-notfound-page", "furo"]
tests = ["coverage[toml] (>=5.0.2)", "hypothesis", "pytest"]
[[package]]
name = "argon2-cffi-bindings"
version = "21.2.0"
description = "Low-level CFFI bindings for Argon2"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cffi = ">=1.0.1"
[package.extras]
dev = ["pytest", "cogapp", "pre-commit", "wheel"]
tests = ["pytest"]
[[package]]
name = "asttokens"
version = "2.0.7"
description = "Annotate AST trees with source code positions"
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
six = "*"
[package.extras]
test = ["pytest", "astroid (<=2.5.3)"]
[[package]]
name = "astunparse"
version = "1.6.3"
description = "An AST unparser for Python"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
six = ">=1.6.1,<2.0"
[[package]]
name = "atomicwrites"
version = "1.4.1"
description = "Atomic file writes."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "attrs"
version = "22.1.0"
description = "Classes Without Boilerplate"
category = "main"
optional = false
python-versions = ">=3.5"
[package.extras]
dev = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "mypy (>=0.900,!=0.940)", "pytest-mypy-plugins", "zope.interface", "furo", "sphinx", "sphinx-notfound-page", "pre-commit", "cloudpickle"]
docs = ["furo", "sphinx", "zope.interface", "sphinx-notfound-page"]
tests = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "mypy (>=0.900,!=0.940)", "pytest-mypy-plugins", "zope.interface", "cloudpickle"]
tests_no_zope = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "mypy (>=0.900,!=0.940)", "pytest-mypy-plugins", "cloudpickle"]
[[package]]
name = "babel"
version = "2.10.3"
description = "Internationalization utilities"
category = "main"
optional = true
python-versions = ">=3.6"
[package.dependencies]
pytz = ">=2015.7"
[[package]]
name = "backcall"
version = "0.2.0"
description = "Specifications for callback functions passed in to an API"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "backports.zoneinfo"
version = "0.2.1"
description = "Backport of the standard library zoneinfo module"
category = "main"
optional = true
python-versions = ">=3.6"
[package.extras]
tzdata = ["tzdata"]
[[package]]
name = "beautifulsoup4"
version = "4.11.1"
description = "Screen-scraping library"
category = "main"
optional = false
python-versions = ">=3.6.0"
[package.dependencies]
soupsieve = ">1.2"
[package.extras]
html5lib = ["html5lib"]
lxml = ["lxml"]
[[package]]
name = "black"
version = "22.6.0"
description = "The uncompromising code formatter."
category = "dev"
optional = false
python-versions = ">=3.6.2"
[package.dependencies]
click = ">=8.0.0"
mypy-extensions = ">=0.4.3"
pathspec = ">=0.9.0"
platformdirs = ">=2"
tomli = {version = ">=1.1.0", markers = "python_full_version < \"3.11.0a7\""}
typing-extensions = {version = ">=3.10.0.0", markers = "python_version < \"3.10\""}
[package.extras]
colorama = ["colorama (>=0.4.3)"]
d = ["aiohttp (>=3.7.4)"]
jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"]
uvloop = ["uvloop (>=0.15.2)"]
[[package]]
name = "bleach"
version = "5.0.1"
description = "An easy safelist-based HTML-sanitizing tool."
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
six = ">=1.9.0"
webencodings = "*"
[package.extras]
css = ["tinycss2 (>=1.1.0,<1.2)"]
dev = ["build (==0.8.0)", "flake8 (==4.0.1)", "hashin (==0.17.0)", "pip-tools (==6.6.2)", "pytest (==7.1.2)", "Sphinx (==4.3.2)", "tox (==3.25.0)", "twine (==4.0.1)", "wheel (==0.37.1)", "black (==22.3.0)", "mypy (==0.961)"]
[[package]]
name = "cachetools"
version = "5.2.0"
description = "Extensible memoizing collections and decorators"
category = "dev"
optional = false
python-versions = "~=3.7"
[[package]]
name = "causalml"
version = "0.12.3"
description = "Python Package for Uplift Modeling and Causal Inference with Machine Learning Algorithms"
category = "main"
optional = true
python-versions = ">=3.6"
[package.dependencies]
Cython = ">=0.28.0"
dill = "*"
graphviz = "*"
lightgbm = "*"
matplotlib = "*"
numpy = ">=1.18.5"
packaging = "*"
pandas = ">=0.24.1"
pydotplus = "*"
pygam = "*"
pyro-ppl = "*"
scikit-learn = ">=0.22.0"
scipy = ">=1.4.1"
seaborn = "*"
shap = "*"
statsmodels = ">=0.9.0"
torch = "*"
tqdm = "*"
xgboost = "*"
[package.extras]
tf = ["tensorflow (>=2.4.0)"]
[[package]]
name = "certifi"
version = "2022.6.15"
description = "Python package for providing Mozilla's CA Bundle."
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "cffi"
version = "1.15.1"
description = "Foreign Function Interface for Python calling C code."
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
pycparser = "*"
[[package]]
name = "charset-normalizer"
version = "2.1.0"
description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet."
category = "main"
optional = false
python-versions = ">=3.6.0"
[package.extras]
unicode_backport = ["unicodedata2"]
[[package]]
name = "click"
version = "8.1.3"
description = "Composable command line interface toolkit"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
colorama = {version = "*", markers = "platform_system == \"Windows\""}
[[package]]
name = "cloudpickle"
version = "2.1.0"
description = "Extended pickling support for Python objects"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "colorama"
version = "0.4.5"
description = "Cross-platform colored terminal text."
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "commonmark"
version = "0.9.1"
description = "Python parser for the CommonMark Markdown spec"
category = "dev"
optional = false
python-versions = "*"
[package.extras]
test = ["hypothesis (==3.55.3)", "flake8 (==3.7.8)"]
[[package]]
name = "cryptography"
version = "37.0.4"
description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cffi = ">=1.12"
[package.extras]
docs = ["sphinx (>=1.6.5,!=1.8.0,!=3.1.0,!=3.1.1)", "sphinx-rtd-theme"]
docstest = ["pyenchant (>=1.6.11)", "twine (>=1.12.0)", "sphinxcontrib-spelling (>=4.0.1)"]
pep8test = ["black", "flake8", "flake8-import-order", "pep8-naming"]
sdist = ["setuptools_rust (>=0.11.4)"]
ssh = ["bcrypt (>=3.1.5)"]
test = ["pytest (>=6.2.0)", "pytest-benchmark", "pytest-cov", "pytest-subtests", "pytest-xdist", "pretend", "iso8601", "pytz", "hypothesis (>=1.11.4,!=3.79.2)"]
[[package]]
name = "cycler"
version = "0.11.0"
description = "Composable style cycles"
category = "main"
optional = true
python-versions = ">=3.6"
[[package]]
name = "cython"
version = "0.29.32"
description = "The Cython compiler for writing C extensions for the Python language."
category = "main"
optional = true
python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "debugpy"
version = "1.6.2"
description = "An implementation of the Debug Adapter Protocol for Python"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "decorator"
version = "5.1.1"
description = "Decorators for Humans"
category = "main"
optional = false
python-versions = ">=3.5"
[[package]]
name = "defusedxml"
version = "0.7.1"
description = "XML bomb protection for Python stdlib modules"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "dill"
version = "0.3.5.1"
description = "serialize all of python"
category = "main"
optional = true
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*, !=3.6.*"
[package.extras]
graph = ["objgraph (>=1.7.2)"]
[[package]]
name = "docutils"
version = "0.17.1"
description = "Docutils -- Python Documentation Utilities"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "econml"
version = "0.13.1"
description = "This package contains several methods for calculating Conditional Average Treatment Effects"
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
dowhy = "<0.8"
joblib = ">=0.13.0"
lightgbm = "*"
numpy = "*"
pandas = "*"
scikit-learn = ">0.22.0,<1.2"
scipy = ">1.4.0"
shap = ">=0.38.1,<0.41.0"
sparse = "*"
statsmodels = ">=0.10"
[package.extras]
all = ["azure-cli", "keras (<2.4)", "tensorflow (>1.10,<2.3)", "protobuf (<4)", "matplotlib"]
automl = ["azure-cli"]
plt = ["graphviz", "matplotlib"]
tf = ["protobuf (<4)", "keras (<2.4)", "tensorflow (>1.10,<2.3)"]
[[package]]
name = "entrypoints"
version = "0.4"
description = "Discover and load entry points from installed packages."
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "executing"
version = "0.9.1"
description = "Get the currently executing AST node of a frame, and other information"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "fastjsonschema"
version = "2.16.1"
description = "Fastest Python implementation of JSON schema"
category = "main"
optional = false
python-versions = "*"
[package.extras]
devel = ["colorama", "jsonschema", "json-spec", "pylint", "pytest", "pytest-benchmark", "pytest-cache", "validictory"]
[[package]]
name = "flake8"
version = "4.0.1"
description = "the modular source code checker: pep8 pyflakes and co"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
mccabe = ">=0.6.0,<0.7.0"
pycodestyle = ">=2.8.0,<2.9.0"
pyflakes = ">=2.4.0,<2.5.0"
[[package]]
name = "flaky"
version = "3.7.0"
description = "Plugin for nose or pytest that automatically reruns flaky tests."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "flatbuffers"
version = "1.12"
description = "The FlatBuffers serialization format for Python"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "fonttools"
version = "4.34.4"
description = "Tools to manipulate font files"
category = "main"
optional = true
python-versions = ">=3.7"
[package.extras]
all = ["fs (>=2.2.0,<3)", "lxml (>=4.0,<5)", "zopfli (>=0.1.4)", "lz4 (>=1.7.4.2)", "matplotlib", "sympy", "skia-pathops (>=0.5.0)", "uharfbuzz (>=0.23.0)", "brotlicffi (>=0.8.0)", "scipy", "brotli (>=1.0.1)", "munkres", "unicodedata2 (>=14.0.0)", "xattr"]
graphite = ["lz4 (>=1.7.4.2)"]
interpolatable = ["scipy", "munkres"]
lxml = ["lxml (>=4.0,<5)"]
pathops = ["skia-pathops (>=0.5.0)"]
plot = ["matplotlib"]
repacker = ["uharfbuzz (>=0.23.0)"]
symfont = ["sympy"]
type1 = ["xattr"]
ufo = ["fs (>=2.2.0,<3)"]
unicode = ["unicodedata2 (>=14.0.0)"]
woff = ["zopfli (>=0.1.4)", "brotlicffi (>=0.8.0)", "brotli (>=1.0.1)"]
[[package]]
name = "future"
version = "0.18.2"
description = "Clean single-source support for Python 3 and 2"
category = "main"
optional = true
python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "gast"
version = "0.4.0"
description = "Python AST that abstracts the underlying Python version"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "google-auth"
version = "2.10.0"
description = "Google Authentication Library"
category = "dev"
optional = false
python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*"
[package.dependencies]
cachetools = ">=2.0.0,<6.0"
pyasn1-modules = ">=0.2.1"
rsa = {version = ">=3.1.4,<5", markers = "python_version >= \"3.6\""}
six = ">=1.9.0"
[package.extras]
reauth = ["pyu2f (>=0.1.5)"]
pyopenssl = ["pyopenssl (>=20.0.0)"]
enterprise_cert = ["pyopenssl (==22.0.0)", "cryptography (==36.0.2)"]
aiohttp = ["aiohttp (>=3.6.2,<4.0.0dev)", "requests (>=2.20.0,<3.0.0dev)"]
[[package]]
name = "google-auth-oauthlib"
version = "0.4.6"
description = "Google Authentication Library"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
google-auth = ">=1.0.0"
requests-oauthlib = ">=0.7.0"
[package.extras]
tool = ["click (>=6.0.0)"]
[[package]]
name = "google-pasta"
version = "0.2.0"
description = "pasta is an AST-based Python refactoring library"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
six = "*"
[[package]]
name = "graphviz"
version = "0.20.1"
description = "Simple Python interface for Graphviz"
category = "main"
optional = true
python-versions = ">=3.7"
[package.extras]
dev = ["tox (>=3)", "flake8", "pep8-naming", "wheel", "twine"]
docs = ["sphinx (>=5)", "sphinx-autodoc-typehints", "sphinx-rtd-theme"]
test = ["pytest (>=7)", "pytest-mock (>=3)", "mock (>=4)", "pytest-cov", "coverage"]
[[package]]
name = "grpcio"
version = "1.48.0"
description = "HTTP/2-based RPC framework"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
six = ">=1.5.2"
[package.extras]
protobuf = ["grpcio-tools (>=1.48.0)"]
[[package]]
name = "h5py"
version = "3.7.0"
description = "Read and write HDF5 files from Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
numpy = ">=1.14.5"
[[package]]
name = "idna"
version = "3.3"
description = "Internationalized Domain Names in Applications (IDNA)"
category = "main"
optional = false
python-versions = ">=3.5"
[[package]]
name = "imagesize"
version = "1.4.1"
description = "Getting image size from png/jpeg/jpeg2000/gif file"
category = "main"
optional = true
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "importlib-metadata"
version = "4.12.0"
description = "Read metadata from Python packages"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
zipp = ">=0.5"
[package.extras]
docs = ["sphinx", "jaraco.packaging (>=9)", "rst.linker (>=1.9)"]
perf = ["ipython"]
testing = ["pytest (>=6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest-cov", "pytest-enabler (>=1.3)", "packaging", "pyfakefs", "flufl.flake8", "pytest-perf (>=0.9.2)", "pytest-black (>=0.3.7)", "pytest-mypy (>=0.9.1)", "importlib-resources (>=1.3)"]
[[package]]
name = "importlib-resources"
version = "5.9.0"
description = "Read resources from Python packages"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
zipp = {version = ">=3.1.0", markers = "python_version < \"3.10\""}
[package.extras]
docs = ["sphinx", "jaraco.packaging (>=9)", "rst.linker (>=1.9)", "jaraco.tidelift (>=1.4)"]
testing = ["pytest (>=6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-black (>=0.3.7)", "pytest-mypy (>=0.9.1)"]
[[package]]
name = "iniconfig"
version = "1.1.1"
description = "iniconfig: brain-dead simple config-ini parsing"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "ipykernel"
version = "6.15.1"
description = "IPython Kernel for Jupyter"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
appnope = {version = "*", markers = "platform_system == \"Darwin\""}
debugpy = ">=1.0"
ipython = ">=7.23.1"
jupyter-client = ">=6.1.12"
matplotlib-inline = ">=0.1"
nest-asyncio = "*"
packaging = "*"
psutil = "*"
pyzmq = ">=17"
tornado = ">=6.1"
traitlets = ">=5.1.0"
[package.extras]
test = ["flaky", "ipyparallel", "pre-commit", "pytest-cov", "pytest-timeout", "pytest (>=6.0)"]
[[package]]
name = "ipython"
version = "8.4.0"
description = "IPython: Productive Interactive Computing"
category = "main"
optional = false
python-versions = ">=3.8"
[package.dependencies]
appnope = {version = "*", markers = "sys_platform == \"darwin\""}
backcall = "*"
colorama = {version = "*", markers = "sys_platform == \"win32\""}
decorator = "*"
jedi = ">=0.16"
matplotlib-inline = "*"
pexpect = {version = ">4.3", markers = "sys_platform != \"win32\""}
pickleshare = "*"
prompt-toolkit = ">=2.0.0,<3.0.0 || >3.0.0,<3.0.1 || >3.0.1,<3.1.0"
pygments = ">=2.4.0"
stack-data = "*"
traitlets = ">=5"
[package.extras]
all = ["black", "Sphinx (>=1.3)", "ipykernel", "nbconvert", "nbformat", "ipywidgets", "notebook", "ipyparallel", "qtconsole", "pytest (<7.1)", "pytest-asyncio", "testpath", "curio", "matplotlib (!=3.2.0)", "numpy (>=1.19)", "pandas", "trio"]
black = ["black"]
doc = ["Sphinx (>=1.3)"]
kernel = ["ipykernel"]
nbconvert = ["nbconvert"]
nbformat = ["nbformat"]
notebook = ["ipywidgets", "notebook"]
parallel = ["ipyparallel"]
qtconsole = ["qtconsole"]
test = ["pytest (<7.1)", "pytest-asyncio", "testpath"]
test_extra = ["pytest (<7.1)", "pytest-asyncio", "testpath", "curio", "matplotlib (!=3.2.0)", "nbformat", "numpy (>=1.19)", "pandas", "trio"]
[[package]]
name = "ipython-genutils"
version = "0.2.0"
description = "Vestigial utilities from IPython"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "ipywidgets"
version = "7.7.1"
description = "IPython HTML widgets for Jupyter"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
ipykernel = ">=4.5.1"
ipython = {version = ">=4.0.0", markers = "python_version >= \"3.3\""}
ipython-genutils = ">=0.2.0,<0.3.0"
jupyterlab-widgets = {version = ">=1.0.0", markers = "python_version >= \"3.6\""}
traitlets = ">=4.3.1"
widgetsnbextension = ">=3.6.0,<3.7.0"
[package.extras]
test = ["pytest (>=3.6.0)", "pytest-cov", "mock"]
[[package]]
name = "isort"
version = "5.10.1"
description = "A Python utility / library to sort Python imports."
category = "dev"
optional = false
python-versions = ">=3.6.1,<4.0"
[package.extras]
pipfile_deprecated_finder = ["pipreqs", "requirementslib"]
requirements_deprecated_finder = ["pipreqs", "pip-api"]
colors = ["colorama (>=0.4.3,<0.5.0)"]
plugins = ["setuptools"]
[[package]]
name = "jedi"
version = "0.18.1"
description = "An autocompletion tool for Python that can be used for text editors."
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
parso = ">=0.8.0,<0.9.0"
[package.extras]
qa = ["flake8 (==3.8.3)", "mypy (==0.782)"]
testing = ["Django (<3.1)", "colorama", "docopt", "pytest (<7.0.0)"]
[[package]]
name = "jeepney"
version = "0.8.0"
description = "Low-level, pure Python DBus protocol wrapper."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
trio = ["async-generator", "trio"]
test = ["async-timeout", "trio", "testpath", "pytest-asyncio (>=0.17)", "pytest-trio", "pytest"]
[[package]]
name = "jinja2"
version = "3.1.2"
description = "A very fast and expressive template engine."
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
MarkupSafe = ">=2.0"
[package.extras]
i18n = ["Babel (>=2.7)"]
[[package]]
name = "joblib"
version = "1.1.0"
description = "Lightweight pipelining with Python functions"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "jsonschema"
version = "4.9.1"
description = "An implementation of JSON Schema validation for Python"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
attrs = ">=17.4.0"
importlib-resources = {version = ">=1.4.0", markers = "python_version < \"3.9\""}
pkgutil-resolve-name = {version = ">=1.3.10", markers = "python_version < \"3.9\""}
pyrsistent = ">=0.14.0,<0.17.0 || >0.17.0,<0.17.1 || >0.17.1,<0.17.2 || >0.17.2"
[package.extras]
format = ["fqdn", "idna", "isoduration", "jsonpointer (>1.13)", "rfc3339-validator", "rfc3987", "uri-template", "webcolors (>=1.11)"]
format-nongpl = ["fqdn", "idna", "isoduration", "jsonpointer (>1.13)", "rfc3339-validator", "rfc3986-validator (>0.1.0)", "uri-template", "webcolors (>=1.11)"]
[[package]]
name = "jupyter"
version = "1.0.0"
description = "Jupyter metapackage. Install all the Jupyter components in one go."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
ipykernel = "*"
ipywidgets = "*"
jupyter-console = "*"
nbconvert = "*"
notebook = "*"
qtconsole = "*"
[[package]]
name = "jupyter-client"
version = "7.3.4"
description = "Jupyter protocol implementation and client libraries"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
entrypoints = "*"
jupyter-core = ">=4.9.2"
nest-asyncio = ">=1.5.4"
python-dateutil = ">=2.8.2"
pyzmq = ">=23.0"
tornado = ">=6.0"
traitlets = "*"
[package.extras]
doc = ["ipykernel", "myst-parser", "sphinx-rtd-theme", "sphinx (>=1.3.6)", "sphinxcontrib-github-alt"]
test = ["codecov", "coverage", "ipykernel (>=6.5)", "ipython", "mypy", "pre-commit", "pytest", "pytest-asyncio (>=0.18)", "pytest-cov", "pytest-timeout"]
[[package]]
name = "jupyter-console"
version = "6.4.4"
description = "Jupyter terminal console"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
ipykernel = "*"
ipython = "*"
jupyter-client = ">=7.0.0"
prompt-toolkit = ">=2.0.0,<3.0.0 || >3.0.0,<3.0.1 || >3.0.1,<3.1.0"
pygments = "*"
[package.extras]
test = ["pexpect"]
[[package]]
name = "jupyter-core"
version = "4.11.1"
description = "Jupyter core package. A base package on which Jupyter projects rely."
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
pywin32 = {version = ">=1.0", markers = "sys_platform == \"win32\" and platform_python_implementation != \"PyPy\""}
traitlets = "*"
[package.extras]
test = ["ipykernel", "pre-commit", "pytest", "pytest-cov", "pytest-timeout"]
[[package]]
name = "jupyterlab-pygments"
version = "0.2.2"
description = "Pygments theme using JupyterLab CSS variables"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "jupyterlab-widgets"
version = "1.1.1"
description = "A JupyterLab extension."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "keras"
version = "2.9.0"
description = "Deep learning for humans."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "keras-preprocessing"
version = "1.1.2"
description = "Easy data preprocessing and data augmentation for deep learning models"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
numpy = ">=1.9.1"
six = ">=1.9.0"
[package.extras]
tests = ["pytest-cov", "pytest-xdist", "pytest", "keras", "tensorflow", "pillow", "pandas"]
pep8 = ["flake8"]
image = ["Pillow (>=5.2.0)", "scipy (>=0.14)"]
[[package]]
name = "keyring"
version = "23.8.2"
description = "Store and access your passwords safely."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
importlib-metadata = {version = ">=3.6", markers = "python_version < \"3.10\""}
jeepney = {version = ">=0.4.2", markers = "sys_platform == \"linux\""}
pywin32-ctypes = {version = "<0.1.0 || >0.1.0,<0.1.1 || >0.1.1", markers = "sys_platform == \"win32\""}
SecretStorage = {version = ">=3.2", markers = "sys_platform == \"linux\""}
[package.extras]
testing = ["pytest-mypy (>=0.9.1)", "pytest-black (>=0.3.7)", "pytest-enabler (>=1.3)", "pytest-cov", "flake8 (<5)", "pytest-flake8", "pytest-checkdocs (>=2.4)", "pytest (>=6)"]
docs = ["jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "jaraco.packaging (>=9)", "sphinx"]
[[package]]
name = "kiwisolver"
version = "1.4.4"
description = "A fast implementation of the Cassowary constraint solver"
category = "main"
optional = true
python-versions = ">=3.7"
[[package]]
name = "libclang"
version = "14.0.6"
description = "Clang Python Bindings, mirrored from the official LLVM repo: https://github.com/llvm/llvm-project/tree/main/clang/bindings/python, to make the installation process easier."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "lightgbm"
version = "3.3.2"
description = "LightGBM Python Package"
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
numpy = "*"
scikit-learn = "!=0.22.0"
scipy = "*"
[package.extras]
dask = ["dask[array] (>=2.0.0)", "dask[dataframe] (>=2.0.0)", "dask[distributed] (>=2.0.0)", "pandas"]
[[package]]
name = "llvmlite"
version = "0.36.0"
description = "lightweight wrapper around basic LLVM functionality"
category = "main"
optional = false
python-versions = ">=3.6,<3.10"
[[package]]
name = "lxml"
version = "4.9.1"
description = "Powerful and Pythonic XML processing library combining libxml2/libxslt with the ElementTree API."
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, != 3.4.*"
[package.extras]
source = ["Cython (>=0.29.7)"]
htmlsoup = ["beautifulsoup4"]
html5 = ["html5lib"]
cssselect = ["cssselect (>=0.7)"]
[[package]]
name = "markdown"
version = "3.4.1"
description = "Python implementation of Markdown."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
importlib-metadata = {version = ">=4.4", markers = "python_version < \"3.10\""}
[package.extras]
testing = ["coverage", "pyyaml"]
[[package]]
name = "markupsafe"
version = "2.1.1"
description = "Safely add untrusted strings to HTML/XML markup."
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "matplotlib"
version = "3.5.3"
description = "Python plotting package"
category = "main"
optional = true
python-versions = ">=3.7"
[package.dependencies]
cycler = ">=0.10"
fonttools = ">=4.22.0"
kiwisolver = ">=1.0.1"
numpy = ">=1.17"
packaging = ">=20.0"
pillow = ">=6.2.0"
pyparsing = ">=2.2.1"
python-dateutil = ">=2.7"
setuptools_scm = ">=4,<7"
[[package]]
name = "matplotlib-inline"
version = "0.1.3"
description = "Inline Matplotlib backend for Jupyter"
category = "main"
optional = false
python-versions = ">=3.5"
[package.dependencies]
traitlets = "*"
[[package]]
name = "mccabe"
version = "0.6.1"
description = "McCabe checker, plugin for flake8"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "mistune"
version = "0.8.4"
description = "The fastest markdown parser in pure Python"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "mpmath"
version = "1.2.1"
description = "Python library for arbitrary-precision floating-point arithmetic"
category = "main"
optional = false
python-versions = "*"
[package.extras]
tests = ["pytest (>=4.6)"]
develop = ["wheel", "codecov", "pytest-cov", "pycodestyle", "pytest (>=4.6)"]
[[package]]
name = "mypy"
version = "0.971"
description = "Optional static typing for Python"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
mypy-extensions = ">=0.4.3"
tomli = {version = ">=1.1.0", markers = "python_version < \"3.11\""}
typing-extensions = ">=3.10"
[package.extras]
dmypy = ["psutil (>=4.0)"]
python2 = ["typed-ast (>=1.4.0,<2)"]
reports = ["lxml"]
[[package]]
name = "mypy-extensions"
version = "0.4.3"
description = "Experimental type system extensions for programs checked with the mypy typechecker."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "nbclient"
version = "0.6.6"
description = "A client library for executing notebooks. Formerly nbconvert's ExecutePreprocessor."
category = "main"
optional = false
python-versions = ">=3.7.0"
[package.dependencies]
jupyter-client = ">=6.1.5"
nbformat = ">=5.0"
nest-asyncio = "*"
traitlets = ">=5.2.2"
[package.extras]
sphinx = ["autodoc-traits", "mock", "moto", "myst-parser", "Sphinx (>=1.7)", "sphinx-book-theme"]
test = ["black", "check-manifest", "flake8", "ipykernel", "ipython (<8.0.0)", "ipywidgets (<8.0.0)", "mypy", "pip (>=18.1)", "pre-commit", "pytest (>=4.1)", "pytest-asyncio", "pytest-cov (>=2.6.1)", "setuptools (>=60.0)", "testpath", "twine (>=1.11.0)", "xmltodict"]
[[package]]
name = "nbconvert"
version = "6.5.3"
description = "Converting Jupyter Notebooks"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
beautifulsoup4 = "*"
bleach = "*"
defusedxml = "*"
entrypoints = ">=0.2.2"
jinja2 = ">=3.0"
jupyter-core = ">=4.7"
jupyterlab-pygments = "*"
lxml = "*"
MarkupSafe = ">=2.0"
mistune = ">=0.8.1,<2"
nbclient = ">=0.5.0"
nbformat = ">=5.1"
packaging = "*"
pandocfilters = ">=1.4.1"
pygments = ">=2.4.1"
tinycss2 = "*"
traitlets = ">=5.0"
[package.extras]
webpdf = ["pyppeteer (>=1,<1.1)"]
test = ["pyppeteer (>=1,<1.1)", "pre-commit", "ipywidgets (>=7)", "ipykernel", "pytest-dependency", "pytest-cov", "pytest"]
serve = ["tornado (>=6.1)"]
docs = ["ipython", "nbsphinx (>=0.2.12)", "sphinx-rtd-theme", "sphinx (>=1.5.1)"]
all = ["ipython", "nbsphinx (>=0.2.12)", "sphinx-rtd-theme", "sphinx (>=1.5.1)", "tornado (>=6.1)", "pyppeteer (>=1,<1.1)", "pre-commit", "ipywidgets (>=7)", "ipykernel", "pytest-dependency", "pytest-cov", "pytest"]
[[package]]
name = "nbformat"
version = "5.4.0"
description = "The Jupyter Notebook format"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
fastjsonschema = "*"
jsonschema = ">=2.6"
jupyter-core = "*"
traitlets = ">=5.1"
[package.extras]
test = ["check-manifest", "testpath", "pytest", "pre-commit"]
[[package]]
name = "nbsphinx"
version = "0.8.9"
description = "Jupyter Notebook Tools for Sphinx"
category = "main"
optional = true
python-versions = ">=3.6"
[package.dependencies]
docutils = "*"
jinja2 = "*"
nbconvert = "!=5.4"
nbformat = "*"
sphinx = ">=1.8"
traitlets = ">=5"
[[package]]
name = "nest-asyncio"
version = "1.5.5"
description = "Patch asyncio to allow nested event loops"
category = "main"
optional = false
python-versions = ">=3.5"
[[package]]
name = "networkx"
version = "2.8.5"
description = "Python package for creating and manipulating graphs and networks"
category = "main"
optional = false
python-versions = ">=3.8"
[package.extras]
default = ["numpy (>=1.19)", "scipy (>=1.8)", "matplotlib (>=3.4)", "pandas (>=1.3)"]
developer = ["pre-commit (>=2.19)", "mypy (>=0.960)"]
doc = ["sphinx (>=5)", "pydata-sphinx-theme (>=0.9)", "sphinx-gallery (>=0.10)", "numpydoc (>=1.4)", "pillow (>=9.1)", "nb2plots (>=0.6)", "texext (>=0.6.6)"]
extra = ["lxml (>=4.6)", "pygraphviz (>=1.9)", "pydot (>=1.4.2)", "sympy (>=1.10)"]
test = ["pytest (>=7.1)", "pytest-cov (>=3.0)", "codecov (>=2.1)"]
[[package]]
name = "notebook"
version = "6.4.12"
description = "A web-based notebook environment for interactive computing"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
argon2-cffi = "*"
ipykernel = "*"
ipython-genutils = "*"
jinja2 = "*"
jupyter-client = ">=5.3.4"
jupyter-core = ">=4.6.1"
nbconvert = ">=5"
nbformat = "*"
nest-asyncio = ">=1.5"
prometheus-client = "*"
pyzmq = ">=17"
Send2Trash = ">=1.8.0"
terminado = ">=0.8.3"
tornado = ">=6.1"
traitlets = ">=4.2.1"
[package.extras]
docs = ["sphinx", "nbsphinx", "sphinxcontrib-github-alt", "sphinx-rtd-theme", "myst-parser"]
json-logging = ["json-logging"]
test = ["pytest", "coverage", "requests", "testpath", "nbval", "selenium", "pytest-cov", "requests-unixsocket"]
[[package]]
name = "numba"
version = "0.53.1"
description = "compiling Python code using LLVM"
category = "main"
optional = false
python-versions = ">=3.6,<3.10"
[package.dependencies]
llvmlite = ">=0.36.0rc1,<0.37"
numpy = ">=1.15"
[[package]]
name = "numpy"
version = "1.23.1"
description = "NumPy is the fundamental package for array computing with Python."
category = "main"
optional = false
python-versions = ">=3.8"
[[package]]
name = "oauthlib"
version = "3.2.0"
description = "A generic, spec-compliant, thorough implementation of the OAuth request-signing logic"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
rsa = ["cryptography (>=3.0.0)"]
signals = ["blinker (>=1.4.0)"]
signedtoken = ["cryptography (>=3.0.0)", "pyjwt (>=2.0.0,<3)"]
[[package]]
name = "opt-einsum"
version = "3.3.0"
description = "Optimizing numpys einsum function"
category = "main"
optional = false
python-versions = ">=3.5"
[package.dependencies]
numpy = ">=1.7"
[package.extras]
docs = ["sphinx (==1.2.3)", "sphinxcontrib-napoleon", "sphinx-rtd-theme", "numpydoc"]
tests = ["pytest", "pytest-cov", "pytest-pep8"]
[[package]]
name = "packaging"
version = "21.3"
description = "Core utilities for Python packages"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
pyparsing = ">=2.0.2,<3.0.5 || >3.0.5"
[[package]]
name = "pandas"
version = "1.4.3"
description = "Powerful data structures for data analysis, time series, and statistics"
category = "main"
optional = false
python-versions = ">=3.8"
[package.dependencies]
numpy = [
{version = ">=1.18.5", markers = "platform_machine != \"aarch64\" and platform_machine != \"arm64\" and python_version < \"3.10\""},
{version = ">=1.19.2", markers = "platform_machine == \"aarch64\" and python_version < \"3.10\""},
{version = ">=1.20.0", markers = "platform_machine == \"arm64\" and python_version < \"3.10\""},
]
python-dateutil = ">=2.8.1"
pytz = ">=2020.1"
[package.extras]
test = ["hypothesis (>=5.5.3)", "pytest (>=6.0)", "pytest-xdist (>=1.31)"]
[[package]]
name = "pandoc"
version = "2.2"
description = "Pandoc Documents for Python"
category = "main"
optional = true
python-versions = "*"
[package.dependencies]
plumbum = "*"
ply = "*"
[[package]]
name = "pandocfilters"
version = "1.5.0"
description = "Utilities for writing pandoc filters in python"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "parso"
version = "0.8.3"
description = "A Python Parser"
category = "main"
optional = false
python-versions = ">=3.6"
[package.extras]
qa = ["flake8 (==3.8.3)", "mypy (==0.782)"]
testing = ["docopt", "pytest (<6.0.0)"]
[[package]]
name = "pastel"
version = "0.2.1"
description = "Bring colors to your terminal."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pathspec"
version = "0.9.0"
description = "Utility library for gitignore style pattern matching of file paths."
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7"
[[package]]
name = "patsy"
version = "0.5.2"
description = "A Python package for describing statistical models and for building design matrices."
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
numpy = ">=1.4"
six = "*"
[package.extras]
test = ["scipy", "pytest-cov", "pytest"]
[[package]]
name = "pexpect"
version = "4.8.0"
description = "Pexpect allows easy control of interactive console applications."
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
ptyprocess = ">=0.5"
[[package]]
name = "pickleshare"
version = "0.7.5"
description = "Tiny 'shelve'-like database with concurrency support"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "pillow"
version = "9.2.0"
description = "Python Imaging Library (Fork)"
category = "main"
optional = true
python-versions = ">=3.7"
[package.extras]
docs = ["furo", "olefile", "sphinx (>=2.4)", "sphinx-copybutton", "sphinx-issues (>=3.0.1)", "sphinx-removed-in", "sphinxext-opengraph"]
tests = ["check-manifest", "coverage", "defusedxml", "markdown2", "olefile", "packaging", "pyroma", "pytest", "pytest-cov", "pytest-timeout"]
[[package]]
name = "pkginfo"
version = "1.8.3"
description = "Query metadatdata from sdists / bdists / installed packages."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*"
[package.extras]
testing = ["nose", "coverage"]
[[package]]
name = "pkgutil-resolve-name"
version = "1.3.10"
description = "Resolve a name to an object."
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "platformdirs"
version = "2.5.2"
description = "A small Python module for determining appropriate platform-specific dirs, e.g. a \"user data dir\"."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
docs = ["furo (>=2021.7.5b38)", "proselint (>=0.10.2)", "sphinx-autodoc-typehints (>=1.12)", "sphinx (>=4)"]
test = ["appdirs (==1.4.4)", "pytest-cov (>=2.7)", "pytest-mock (>=3.6)", "pytest (>=6)"]
[[package]]
name = "pluggy"
version = "1.0.0"
description = "plugin and hook calling mechanisms for python"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
testing = ["pytest-benchmark", "pytest"]
dev = ["tox", "pre-commit"]
[[package]]
name = "plumbum"
version = "1.7.2"
description = "Plumbum: shell combinators library"
category = "main"
optional = true
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7"
[package.dependencies]
pywin32 = {version = "*", markers = "platform_system == \"Windows\" and platform_python_implementation != \"PyPy\""}
[package.extras]
ssh = ["paramiko"]
docs = ["sphinx-rtd-theme (>=0.5.0)", "Sphinx (>=3.0.0)"]
dev = ["pytest-timeout", "pytest-mock", "pytest-cov", "pytest", "psutil", "paramiko"]
[[package]]
name = "ply"
version = "3.11"
description = "Python Lex & Yacc"
category = "main"
optional = true
python-versions = "*"
[[package]]
name = "poethepoet"
version = "0.16.0"
description = "A task runner that works well with poetry."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
pastel = ">=0.2.1,<0.3.0"
tomli = ">=1.2.2"
[package.extras]
poetry_plugin = ["poetry (>=1.0,<2.0)"]
[[package]]
name = "progressbar2"
version = "4.0.0"
description = "A Python Progressbar library to provide visual (yet text based) progress to long running operations."
category = "main"
optional = true
python-versions = ">=3.7.0"
[package.dependencies]
python-utils = ">=3.0.0"
[package.extras]
docs = ["sphinx (>=1.8.5)"]
tests = ["flake8 (>=3.7.7)", "pytest (>=4.6.9)", "pytest-cov (>=2.6.1)", "pytest-mypy", "freezegun (>=0.3.11)", "sphinx (>=1.8.5)"]
[[package]]
name = "prometheus-client"
version = "0.14.1"
description = "Python client for the Prometheus monitoring system."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
twisted = ["twisted"]
[[package]]
name = "prompt-toolkit"
version = "3.0.30"
description = "Library for building powerful interactive command lines in Python"
category = "main"
optional = false
python-versions = ">=3.6.2"
[package.dependencies]
wcwidth = "*"
[[package]]
name = "protobuf"
version = "3.19.4"
description = "Protocol Buffers"
category = "dev"
optional = false
python-versions = ">=3.5"
[[package]]
name = "psutil"
version = "5.9.1"
description = "Cross-platform lib for process and system monitoring in Python."
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.extras]
test = ["ipaddress", "mock", "enum34", "pywin32", "wmi"]
[[package]]
name = "ptyprocess"
version = "0.7.0"
description = "Run a subprocess in a pseudo terminal"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "pure-eval"
version = "0.2.2"
description = "Safely evaluate AST nodes without side effects"
category = "main"
optional = false
python-versions = "*"
[package.extras]
tests = ["pytest"]
[[package]]
name = "py"
version = "1.11.0"
description = "library with cross-python path, ini-parsing, io, code, log facilities"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "pyasn1"
version = "0.4.8"
description = "ASN.1 types and codecs"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pyasn1-modules"
version = "0.2.8"
description = "A collection of ASN.1-based protocols modules."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
pyasn1 = ">=0.4.6,<0.5.0"
[[package]]
name = "pycodestyle"
version = "2.8.0"
description = "Python style guide checker"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "pycparser"
version = "2.21"
description = "C parser in Python"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pydata-sphinx-theme"
version = "0.9.0"
description = "Bootstrap-based Sphinx theme from the PyData community"
category = "main"
optional = true
python-versions = ">=3.7"
[package.dependencies]
beautifulsoup4 = "*"
docutils = "!=0.17.0"
packaging = "*"
sphinx = ">=4.0.2"
[package.extras]
dev = ["pydata-sphinx-theme", "nox", "pre-commit", "pyyaml"]
coverage = ["pydata-sphinx-theme", "codecov", "pytest-cov"]
test = ["pydata-sphinx-theme", "pytest"]
doc = ["sphinx-design", "xarray", "numpy", "plotly", "jupyter-sphinx", "sphinx-sitemap", "sphinxext-rediraffe", "pytest-regressions", "pytest", "pandas", "myst-parser", "numpydoc"]
[[package]]
name = "pydot"
version = "1.4.2"
description = "Python interface to Graphviz's Dot"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.dependencies]
pyparsing = ">=2.1.4"
[[package]]
name = "pydotplus"
version = "2.0.2"
description = "Python interface to Graphviz's Dot language"
category = "main"
optional = true
python-versions = "*"
[package.dependencies]
pyparsing = ">=2.0.1"
[[package]]
name = "pyflakes"
version = "2.4.0"
description = "passive checker of Python programs"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pygam"
version = "0.8.0"
description = "GAM toolkit"
category = "main"
optional = true
python-versions = "*"
[package.dependencies]
future = "*"
numpy = "*"
progressbar2 = "*"
scipy = "*"
[[package]]
name = "pygments"
version = "2.12.0"
description = "Pygments is a syntax highlighting package written in Python."
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "pygraphviz"
version = "1.9"
description = "Python interface to Graphviz"
category = "main"
optional = false
python-versions = ">=3.8"
[[package]]
name = "pyparsing"
version = "3.0.9"
description = "pyparsing module - Classes and methods to define and execute parsing grammars"
category = "main"
optional = false
python-versions = ">=3.6.8"
[package.extras]
diagrams = ["railroad-diagrams", "jinja2"]
[[package]]
name = "pyro-api"
version = "0.1.2"
description = "Generic API for dispatch to Pyro backends."
category = "main"
optional = true
python-versions = "*"
[package.extras]
test = ["pytest (>=5.0)", "flake8"]
dev = ["ipython", "sphinx-rtd-theme", "sphinx (>=2.0)"]
[[package]]
name = "pyro-ppl"
version = "1.8.1"
description = "A Python library for probabilistic modeling and inference"
category = "main"
optional = true
python-versions = ">=3.7"
[package.dependencies]
numpy = ">=1.7"
opt-einsum = ">=2.3.2"
pyro-api = ">=0.1.1"
torch = ">=1.11.0"
tqdm = ">=4.36"
[package.extras]
dev = ["jupyter (>=1.0.0)", "graphviz (>=0.8)", "matplotlib (>=1.3)", "torchvision (>=0.12.0)", "visdom (>=0.1.4)", "pandas", "pillow (==8.2.0)", "scikit-learn", "seaborn", "wget", "lap", "black (>=21.4b0)", "flake8", "isort (>=5.0)", "mypy (>=0.812)", "nbformat", "nbsphinx (>=0.3.2)", "nbstripout", "nbval", "ninja", "pypandoc", "pytest (>=5.0)", "pytest-xdist", "scipy (>=1.1)", "sphinx", "sphinx-rtd-theme", "yapf"]
extras = ["jupyter (>=1.0.0)", "graphviz (>=0.8)", "matplotlib (>=1.3)", "torchvision (>=0.12.0)", "visdom (>=0.1.4)", "pandas", "pillow (==8.2.0)", "scikit-learn", "seaborn", "wget", "lap"]
funsor = ["funsor[torch] (==0.4.3)"]
horovod = ["horovod[pytorch] (>=0.19)"]
profile = ["prettytable", "pytest-benchmark", "snakeviz"]
test = ["jupyter (>=1.0.0)", "graphviz (>=0.8)", "matplotlib (>=1.3)", "torchvision (>=0.12.0)", "visdom (>=0.1.4)", "pandas", "pillow (==8.2.0)", "scikit-learn", "seaborn", "wget", "lap", "black (>=21.4b0)", "flake8", "nbval", "pytest (>=5.0)", "pytest-cov", "scipy (>=1.1)"]
[[package]]
name = "pyrsistent"
version = "0.18.1"
description = "Persistent/Functional/Immutable data structures"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "pytest"
version = "7.1.2"
description = "pytest: simple powerful testing with Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
atomicwrites = {version = ">=1.0", markers = "sys_platform == \"win32\""}
attrs = ">=19.2.0"
colorama = {version = "*", markers = "sys_platform == \"win32\""}
iniconfig = "*"
packaging = "*"
pluggy = ">=0.12,<2.0"
py = ">=1.8.2"
tomli = ">=1.0.0"
[package.extras]
testing = ["argcomplete", "hypothesis (>=3.56)", "mock", "nose", "pygments (>=2.7.2)", "requests", "xmlschema"]
[[package]]
name = "python-dateutil"
version = "2.8.2"
description = "Extensions to the standard Python datetime module"
category = "main"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7"
[package.dependencies]
six = ">=1.5"
[[package]]
name = "python-utils"
version = "3.3.3"
description = "Python Utils is a module with some convenient utilities not included with the standard Python install"
category = "main"
optional = true
python-versions = ">3.6.0"
[package.extras]
docs = ["mock", "sphinx", "python-utils"]
loguru = ["loguru"]
tests = ["flake8", "pytest", "pytest-cov", "pytest-mypy", "pytest-asyncio", "sphinx", "types-setuptools", "loguru"]
[[package]]
name = "pytz"
version = "2022.2"
description = "World timezone definitions, modern and historical"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "pytz-deprecation-shim"
version = "0.1.0.post0"
description = "Shims to make deprecation of pytz easier"
category = "main"
optional = true
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.7"
[package.dependencies]
"backports.zoneinfo" = {version = "*", markers = "python_version >= \"3.6\" and python_version < \"3.9\""}
tzdata = {version = "*", markers = "python_version >= \"3.6\""}
[[package]]
name = "pywin32"
version = "304"
description = "Python for Window Extensions"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "pywin32-ctypes"
version = "0.2.0"
description = ""
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pywinpty"
version = "2.0.7"
description = "Pseudo terminal support for Windows from Python."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "pyzmq"
version = "23.2.1"
description = "Python bindings for 0MQ"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cffi = {version = "*", markers = "implementation_name == \"pypy\""}
py = {version = "*", markers = "implementation_name == \"pypy\""}
[[package]]
name = "qtconsole"
version = "5.3.1"
description = "Jupyter Qt console"
category = "dev"
optional = false
python-versions = ">= 3.7"
[package.dependencies]
ipykernel = ">=4.1"
ipython-genutils = "*"
jupyter-client = ">=4.1"
jupyter-core = "*"
pygments = "*"
pyzmq = ">=17.1"
qtpy = ">=2.0.1"
traitlets = "<5.2.1 || >5.2.1,<5.2.2 || >5.2.2"
[package.extras]
doc = ["Sphinx (>=1.3)"]
test = ["flaky", "pytest", "pytest-qt"]
[[package]]
name = "qtpy"
version = "2.2.0"
description = "Provides an abstraction layer on top of the various Qt bindings (PyQt5/6 and PySide2/6)."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
packaging = "*"
[package.extras]
test = ["pytest-qt", "pytest-cov (>=3.0.0)", "pytest (>=6,!=7.0.0,!=7.0.1)"]
[[package]]
name = "readme-renderer"
version = "36.0"
description = "readme_renderer is a library for rendering \"readme\" descriptions for Warehouse"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
bleach = ">=2.1.0"
docutils = ">=0.13.1"
Pygments = ">=2.5.1"
[package.extras]
md = ["cmarkgfm (>=0.8.0)"]
[[package]]
name = "requests"
version = "2.28.1"
description = "Python HTTP for Humans."
category = "main"
optional = false
python-versions = ">=3.7, <4"
[package.dependencies]
certifi = ">=2017.4.17"
charset-normalizer = ">=2,<3"
idna = ">=2.5,<4"
urllib3 = ">=1.21.1,<1.27"
[package.extras]
socks = ["PySocks (>=1.5.6,!=1.5.7)"]
use_chardet_on_py3 = ["chardet (>=3.0.2,<6)"]
[[package]]
name = "requests-oauthlib"
version = "1.3.1"
description = "OAuthlib authentication support for Requests."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.dependencies]
oauthlib = ">=3.0.0"
requests = ">=2.0.0"
[package.extras]
rsa = ["oauthlib[signedtoken] (>=3.0.0)"]
[[package]]
name = "requests-toolbelt"
version = "0.9.1"
description = "A utility belt for advanced users of python-requests"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
requests = ">=2.0.1,<3.0.0"
[[package]]
name = "rfc3986"
version = "2.0.0"
description = "Validating URI References per RFC 3986"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
idna2008 = ["idna"]
[[package]]
name = "rich"
version = "12.5.1"
description = "Render rich text, tables, progress bars, syntax highlighting, markdown and more to the terminal"
category = "dev"
optional = false
python-versions = ">=3.6.3,<4.0.0"
[package.dependencies]
commonmark = ">=0.9.0,<0.10.0"
pygments = ">=2.6.0,<3.0.0"
typing-extensions = {version = ">=4.0.0,<5.0", markers = "python_version < \"3.9\""}
[package.extras]
jupyter = ["ipywidgets (>=7.5.1,<8.0.0)"]
[[package]]
name = "rpy2"
version = "3.5.3"
description = "Python interface to the R language (embedded R)"
category = "main"
optional = true
python-versions = "*"
[package.dependencies]
cffi = ">=1.10.0"
jinja2 = "*"
packaging = {version = "*", markers = "platform_system == \"Windows\""}
pytz = "*"
tzlocal = "*"
[package.extras]
all = ["numpy", "pytest", "setuptools", "pandas"]
numpy = ["pandas"]
pandas = ["numpy", "pandas"]
setup = ["setuptools"]
test = ["pytest"]
[[package]]
name = "rsa"
version = "4.9"
description = "Pure-Python RSA implementation"
category = "dev"
optional = false
python-versions = ">=3.6,<4"
[package.dependencies]
pyasn1 = ">=0.1.3"
[[package]]
name = "scikit-learn"
version = "1.0.2"
description = "A set of python modules for machine learning and data mining"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
joblib = ">=0.11"
numpy = ">=1.14.6"
scipy = ">=1.1.0"
threadpoolctl = ">=2.0.0"
[package.extras]
tests = ["pyamg (>=4.0.0)", "mypy (>=0.770)", "black (>=21.6b0)", "flake8 (>=3.8.2)", "pytest-cov (>=2.9.0)", "pytest (>=5.0.1)", "pandas (>=0.25.0)", "scikit-image (>=0.14.5)", "matplotlib (>=2.2.3)"]
examples = ["seaborn (>=0.9.0)", "pandas (>=0.25.0)", "scikit-image (>=0.14.5)", "matplotlib (>=2.2.3)"]
docs = ["sphinxext-opengraph (>=0.4.2)", "sphinx-prompt (>=1.3.0)", "Pillow (>=7.1.2)", "numpydoc (>=1.0.0)", "sphinx-gallery (>=0.7.0)", "sphinx (>=4.0.1)", "memory-profiler (>=0.57.0)", "seaborn (>=0.9.0)", "pandas (>=0.25.0)", "scikit-image (>=0.14.5)", "matplotlib (>=2.2.3)"]
benchmark = ["memory-profiler (>=0.57.0)", "pandas (>=0.25.0)", "matplotlib (>=2.2.3)"]
[[package]]
name = "scipy"
version = "1.9.0"
description = "SciPy: Scientific Library for Python"
category = "main"
optional = false
python-versions = ">=3.8,<3.12"
[package.dependencies]
numpy = ">=1.18.5,<1.25.0"
[[package]]
name = "seaborn"
version = "0.11.2"
description = "seaborn: statistical data visualization"
category = "main"
optional = true
python-versions = ">=3.6"
[package.dependencies]
matplotlib = ">=2.2"
numpy = ">=1.15"
pandas = ">=0.23"
scipy = ">=1.0"
[[package]]
name = "secretstorage"
version = "3.3.2"
description = "Python bindings to FreeDesktop.org Secret Service API"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cryptography = ">=2.0"
jeepney = ">=0.6"
[[package]]
name = "send2trash"
version = "1.8.0"
description = "Send file to trash natively under Mac OS X, Windows and Linux."
category = "dev"
optional = false
python-versions = "*"
[package.extras]
win32 = ["pywin32"]
objc = ["pyobjc-framework-cocoa"]
nativelib = ["pywin32", "pyobjc-framework-cocoa"]
[[package]]
name = "setuptools-scm"
version = "6.4.2"
description = "the blessed package to manage your versions by scm tags"
category = "main"
optional = true
python-versions = ">=3.6"
[package.dependencies]
packaging = ">=20.0"
tomli = ">=1.0.0"
[package.extras]
toml = ["setuptools (>=42)"]
test = ["virtualenv (>20)", "pytest (>=6.2)"]
[[package]]
name = "shap"
version = "0.40.0"
description = "A unified approach to explain the output of any machine learning model."
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
cloudpickle = "*"
numba = "*"
numpy = "*"
packaging = ">20.9"
pandas = "*"
scikit-learn = "*"
scipy = "*"
slicer = "0.0.7"
tqdm = ">4.25.0"
[package.extras]
all = ["catboost", "nbsphinx", "matplotlib", "sphinx", "pytest-cov", "pyod", "torch", "pytest", "sentencepiece", "xgboost", "opencv-python", "transformers", "lime", "pyspark", "lightgbm", "pytest-mpl", "numpydoc", "ipython", "sphinx-rtd-theme"]
docs = ["matplotlib", "ipython", "numpydoc", "sphinx-rtd-theme", "sphinx", "nbsphinx"]
others = ["lime"]
plots = ["matplotlib", "ipython"]
test = ["pytest", "pytest-mpl", "pytest-cov", "xgboost", "lightgbm", "catboost", "pyspark", "pyod", "transformers", "torch", "sentencepiece", "opencv-python"]
[[package]]
name = "six"
version = "1.16.0"
description = "Python 2 and 3 compatibility utilities"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "slicer"
version = "0.0.7"
description = "A small package for big slicing."
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "snowballstemmer"
version = "2.2.0"
description = "This package provides 29 stemmers for 28 languages generated from Snowball algorithms."
category = "main"
optional = true
python-versions = "*"
[[package]]
name = "soupsieve"
version = "2.3.2.post1"
description = "A modern CSS selector implementation for Beautiful Soup."
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "sparse"
version = "0.13.0"
description = "Sparse n-dimensional arrays"
category = "main"
optional = false
python-versions = ">=3.6, <4"
[package.dependencies]
numba = ">=0.49"
numpy = ">=1.17"
scipy = ">=0.19"
[package.extras]
all = ["dask", "pytest (>=3.5)", "pytest-black", "pytest-cov", "tox", "sphinx", "sphinx-rtd-theme"]
docs = ["sphinx", "sphinx-rtd-theme"]
tests = ["dask", "pytest (>=3.5)", "pytest-black", "pytest-cov"]
tox = ["dask", "pytest (>=3.5)", "pytest-black", "pytest-cov", "tox"]
[[package]]
name = "sphinx"
version = "5.1.1"
description = "Python documentation generator"
category = "main"
optional = true
python-versions = ">=3.6"
[package.dependencies]
alabaster = ">=0.7,<0.8"
babel = ">=1.3"
colorama = {version = ">=0.3.5", markers = "sys_platform == \"win32\""}
docutils = ">=0.14,<0.20"
imagesize = "*"
importlib-metadata = {version = ">=4.4", markers = "python_version < \"3.10\""}
Jinja2 = ">=2.3"
packaging = "*"
Pygments = ">=2.0"
requests = ">=2.5.0"
snowballstemmer = ">=1.1"
sphinxcontrib-applehelp = "*"
sphinxcontrib-devhelp = "*"
sphinxcontrib-htmlhelp = ">=2.0.0"
sphinxcontrib-jsmath = "*"
sphinxcontrib-qthelp = "*"
sphinxcontrib-serializinghtml = ">=1.1.5"
[package.extras]
docs = ["sphinxcontrib-websupport"]
lint = ["flake8 (>=3.5.0)", "flake8-comprehensions", "flake8-bugbear", "isort", "mypy (>=0.971)", "sphinx-lint", "docutils-stubs", "types-typed-ast", "types-requests"]
test = ["pytest (>=4.6)", "html5lib", "cython", "typed-ast"]
[[package]]
name = "sphinx-multiversion"
version = "0.2.4"
description = ""
category = "main"
optional = true
python-versions = "*"
develop = false
[package.dependencies]
sphinx = ">=2.1"
[package.source]
type = "git"
url = "https://github.com/petergtz/sphinx-multiversion.git"
reference = "override-version-with-refname"
resolved_reference = "cd3a67e7998c5205e2456e7cb1026bd3b9f36253"
[[package]]
name = "sphinx-rtd-theme"
version = "1.0.0"
description = "Read the Docs theme for Sphinx"
category = "main"
optional = true
python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*"
[package.dependencies]
docutils = "<0.18"
sphinx = ">=1.6"
[package.extras]
dev = ["bump2version", "sphinxcontrib-httpdomain", "transifex-client"]
[[package]]
name = "sphinxcontrib-applehelp"
version = "1.0.2"
description = "sphinxcontrib-applehelp is a sphinx extension which outputs Apple help books"
category = "main"
optional = true
python-versions = ">=3.5"
[package.extras]
test = ["pytest"]
lint = ["docutils-stubs", "mypy", "flake8"]
[[package]]
name = "sphinxcontrib-devhelp"
version = "1.0.2"
description = "sphinxcontrib-devhelp is a sphinx extension which outputs Devhelp document."
category = "main"
optional = true
python-versions = ">=3.5"
[package.extras]
test = ["pytest"]
lint = ["docutils-stubs", "mypy", "flake8"]
[[package]]
name = "sphinxcontrib-googleanalytics"
version = "0.2"
description = ""
category = "main"
optional = true
python-versions = "*"
develop = false
[package.dependencies]
Sphinx = ">=0.6"
[package.source]
type = "git"
url = "https://github.com/petergtz/googleanalytics.git"
reference = "master"
resolved_reference = "f32bcb8c0fb74bf284652b9a5c154337e20058b9"
[[package]]
name = "sphinxcontrib-htmlhelp"
version = "2.0.0"
description = "sphinxcontrib-htmlhelp is a sphinx extension which renders HTML help files"
category = "main"
optional = true
python-versions = ">=3.6"
[package.extras]
test = ["html5lib", "pytest"]
lint = ["docutils-stubs", "mypy", "flake8"]
[[package]]
name = "sphinxcontrib-jsmath"
version = "1.0.1"
description = "A sphinx extension which renders display math in HTML via JavaScript"
category = "main"
optional = true
python-versions = ">=3.5"
[package.extras]
test = ["mypy", "flake8", "pytest"]
[[package]]
name = "sphinxcontrib-qthelp"
version = "1.0.3"
description = "sphinxcontrib-qthelp is a sphinx extension which outputs QtHelp document."
category = "main"
optional = true
python-versions = ">=3.5"
[package.extras]
test = ["pytest"]
lint = ["docutils-stubs", "mypy", "flake8"]
[[package]]
name = "sphinxcontrib-serializinghtml"
version = "1.1.5"
description = "sphinxcontrib-serializinghtml is a sphinx extension which outputs \"serialized\" HTML files (json and pickle)."
category = "main"
optional = true
python-versions = ">=3.5"
[package.extras]
lint = ["flake8", "mypy", "docutils-stubs"]
test = ["pytest"]
[[package]]
name = "stack-data"
version = "0.3.0"
description = "Extract data from python stack frames and tracebacks for informative displays"
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
asttokens = "*"
executing = "*"
pure-eval = "*"
[package.extras]
tests = ["cython", "littleutils", "pygments", "typeguard", "pytest"]
[[package]]
name = "statsmodels"
version = "0.13.2"
description = "Statistical computations and models for Python"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
numpy = ">=1.17"
packaging = ">=21.3"
pandas = ">=0.25"
patsy = ">=0.5.2"
scipy = ">=1.3"
[package.extras]
build = ["cython (>=0.29.26)"]
develop = ["cython (>=0.29.26)"]
docs = ["sphinx", "nbconvert", "jupyter-client", "ipykernel", "matplotlib", "nbformat", "numpydoc", "pandas-datareader"]
[[package]]
name = "sympy"
version = "1.10.1"
description = "Computer algebra system (CAS) in Python"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
mpmath = ">=0.19"
[[package]]
name = "tensorboard"
version = "2.9.1"
description = "TensorBoard lets you watch Tensors Flow"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
absl-py = ">=0.4"
google-auth = ">=1.6.3,<3"
google-auth-oauthlib = ">=0.4.1,<0.5"
grpcio = ">=1.24.3"
markdown = ">=2.6.8"
numpy = ">=1.12.0"
protobuf = ">=3.9.2,<3.20"
requests = ">=2.21.0,<3"
tensorboard-data-server = ">=0.6.0,<0.7.0"
tensorboard-plugin-wit = ">=1.6.0"
werkzeug = ">=1.0.1"
[[package]]
name = "tensorboard-data-server"
version = "0.6.1"
description = "Fast data loading for TensorBoard"
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "tensorboard-plugin-wit"
version = "1.8.1"
description = "What-If Tool TensorBoard plugin."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "tensorflow"
version = "2.9.1"
description = "TensorFlow is an open source machine learning framework for everyone."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
absl-py = ">=1.0.0"
astunparse = ">=1.6.0"
flatbuffers = ">=1.12,<2"
gast = ">=0.2.1,<=0.4.0"
google-pasta = ">=0.1.1"
grpcio = ">=1.24.3,<2.0"
h5py = ">=2.9.0"
keras = ">=2.9.0rc0,<2.10.0"
keras-preprocessing = ">=1.1.1"
libclang = ">=13.0.0"
numpy = ">=1.20"
opt-einsum = ">=2.3.2"
packaging = "*"
protobuf = ">=3.9.2,<3.20"
six = ">=1.12.0"
tensorboard = ">=2.9,<2.10"
tensorflow-estimator = ">=2.9.0rc0,<2.10.0"
tensorflow-io-gcs-filesystem = ">=0.23.1"
termcolor = ">=1.1.0"
typing-extensions = ">=3.6.6"
wrapt = ">=1.11.0"
[[package]]
name = "tensorflow-estimator"
version = "2.9.0"
description = "TensorFlow Estimator."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "tensorflow-io-gcs-filesystem"
version = "0.26.0"
description = "TensorFlow IO"
category = "dev"
optional = false
python-versions = ">=3.7, <3.11"
[package.extras]
tensorflow = ["tensorflow (>=2.9.0,<2.10.0)"]
tensorflow-aarch64 = ["tensorflow-aarch64 (>=2.9.0,<2.10.0)"]
tensorflow-cpu = ["tensorflow-cpu (>=2.9.0,<2.10.0)"]
tensorflow-gpu = ["tensorflow-gpu (>=2.9.0,<2.10.0)"]
tensorflow-rocm = ["tensorflow-rocm (>=2.9.0,<2.10.0)"]
[[package]]
name = "termcolor"
version = "1.1.0"
description = "ANSII Color formatting for output in terminal."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "terminado"
version = "0.15.0"
description = "Tornado websocket backend for the Xterm.js Javascript terminal emulator library."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
ptyprocess = {version = "*", markers = "os_name != \"nt\""}
pywinpty = {version = ">=1.1.0", markers = "os_name == \"nt\""}
tornado = ">=6.1.0"
[package.extras]
test = ["pre-commit", "pytest-timeout", "pytest (>=6.0)"]
[[package]]
name = "threadpoolctl"
version = "3.1.0"
description = "threadpoolctl"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "tinycss2"
version = "1.1.1"
description = "A tiny CSS parser"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
webencodings = ">=0.4"
[package.extras]
test = ["coverage", "pytest-isort", "pytest-flake8", "pytest-cov", "pytest"]
doc = ["sphinx-rtd-theme", "sphinx"]
[[package]]
name = "tomli"
version = "2.0.1"
description = "A lil' TOML parser"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "torch"
version = "1.12.1"
description = "Tensors and Dynamic neural networks in Python with strong GPU acceleration"
category = "main"
optional = true
python-versions = ">=3.7.0"
[package.dependencies]
typing-extensions = "*"
[[package]]
name = "tornado"
version = "6.2"
description = "Tornado is a Python web framework and asynchronous networking library, originally developed at FriendFeed."
category = "main"
optional = false
python-versions = ">= 3.7"
[[package]]
name = "tqdm"
version = "4.64.0"
description = "Fast, Extensible Progress Meter"
category = "main"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,>=2.7"
[package.dependencies]
colorama = {version = "*", markers = "platform_system == \"Windows\""}
[package.extras]
dev = ["py-make (>=0.1.0)", "twine", "wheel"]
notebook = ["ipywidgets (>=6)"]
slack = ["slack-sdk"]
telegram = ["requests"]
[[package]]
name = "traitlets"
version = "5.3.0"
description = ""
category = "main"
optional = false
python-versions = ">=3.7"
[package.extras]
test = ["pre-commit", "pytest"]
[[package]]
name = "twine"
version = "4.0.1"
description = "Collection of utilities for publishing packages on PyPI"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
importlib-metadata = ">=3.6"
keyring = ">=15.1"
pkginfo = ">=1.8.1"
readme-renderer = ">=35.0"
requests = ">=2.20"
requests-toolbelt = ">=0.8.0,<0.9.0 || >0.9.0"
rfc3986 = ">=1.4.0"
rich = ">=12.0.0"
urllib3 = ">=1.26.0"
[[package]]
name = "typing-extensions"
version = "4.3.0"
description = "Backported and Experimental Type Hints for Python 3.7+"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "tzdata"
version = "2022.2"
description = "Provider of IANA time zone data"
category = "main"
optional = true
python-versions = ">=2"
[[package]]
name = "tzlocal"
version = "4.2"
description = "tzinfo object for the local timezone"
category = "main"
optional = true
python-versions = ">=3.6"
[package.dependencies]
"backports.zoneinfo" = {version = "*", markers = "python_version < \"3.9\""}
pytz-deprecation-shim = "*"
tzdata = {version = "*", markers = "platform_system == \"Windows\""}
[package.extras]
devenv = ["black", "pyroma", "pytest-cov", "zest.releaser"]
test = ["pytest-mock (>=3.3)", "pytest (>=4.3)"]
[[package]]
name = "urllib3"
version = "1.26.11"
description = "HTTP library with thread-safe connection pooling, file post, and more."
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*, <4"
[package.extras]
brotli = ["brotlicffi (>=0.8.0)", "brotli (>=1.0.9)", "brotlipy (>=0.6.0)"]
secure = ["pyOpenSSL (>=0.14)", "cryptography (>=1.3.4)", "idna (>=2.0.0)", "certifi", "ipaddress"]
socks = ["PySocks (>=1.5.6,!=1.5.7,<2.0)"]
[[package]]
name = "wcwidth"
version = "0.2.5"
description = "Measures the displayed width of unicode strings in a terminal"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "webencodings"
version = "0.5.1"
description = "Character encoding aliases for legacy web content"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "werkzeug"
version = "2.2.2"
description = "The comprehensive WSGI web application library."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
MarkupSafe = ">=2.1.1"
[package.extras]
watchdog = ["watchdog"]
[[package]]
name = "widgetsnbextension"
version = "3.6.1"
description = "IPython HTML widgets for Jupyter"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
notebook = ">=4.4.1"
[[package]]
name = "wrapt"
version = "1.14.1"
description = "Module for decorators, wrappers and monkey patching."
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7"
[[package]]
name = "xgboost"
version = "1.6.1"
description = "XGBoost Python Package"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
numpy = "*"
scipy = "*"
[package.extras]
dask = ["dask", "pandas", "distributed"]
datatable = ["datatable"]
pandas = ["pandas"]
plotting = ["graphviz", "matplotlib"]
scikit-learn = ["scikit-learn"]
[[package]]
name = "zipp"
version = "3.8.1"
description = "Backport of pathlib-compatible object wrapper for zip files"
category = "main"
optional = false
python-versions = ">=3.7"
[package.extras]
docs = ["sphinx", "jaraco.packaging (>=9)", "rst.linker (>=1.9)", "jaraco.tidelift (>=1.4)"]
testing = ["pytest (>=6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest-cov", "pytest-enabler (>=1.3)", "jaraco.itertools", "func-timeout", "pytest-black (>=0.3.7)", "pytest-mypy (>=0.9.1)"]
[extras]
causalml = ["causalml", "llvmlite"]
docs = ["rpy2", "sphinx-multiversion", "sphinxcontrib-googleanalytics", "nbsphinx", "sphinx-rtd-theme", "pydata-sphinx-theme", "sympy", "ipykernel"]
plotting = ["matplotlib"]
[metadata]
lock-version = "1.1"
python-versions = ">=3.8,<3.10"
content-hash = "ec60e963ce9001e7d159e5e7306e48dd1192521dba9752e7ae1ff8685f204b5b"
[metadata.files]
absl-py = [
{file = "absl-py-1.2.0.tar.gz", hash = "sha256:f568809938c49abbda89826223c992b630afd23c638160ad7840cfe347710d97"},
{file = "absl_py-1.2.0-py3-none-any.whl", hash = "sha256:5d15f85b8cc859c6245bc9886ba664460ed96a6fee895416caa37d669ee74a9a"},
]
alabaster = [
{file = "alabaster-0.7.12-py2.py3-none-any.whl", hash = "sha256:446438bdcca0e05bd45ea2de1668c1d9b032e1a9154c2c259092d77031ddd359"},
{file = "alabaster-0.7.12.tar.gz", hash = "sha256:a661d72d58e6ea8a57f7a86e37d86716863ee5e92788398526d58b26a4e4dc02"},
]
appnope = [
{file = "appnope-0.1.3-py2.py3-none-any.whl", hash = "sha256:265a455292d0bd8a72453494fa24df5a11eb18373a60c7c0430889f22548605e"},
{file = "appnope-0.1.3.tar.gz", hash = "sha256:02bd91c4de869fbb1e1c50aafc4098827a7a54ab2f39d9dcba6c9547ed920e24"},
]
argon2-cffi = [
{file = "argon2-cffi-21.3.0.tar.gz", hash = "sha256:d384164d944190a7dd7ef22c6aa3ff197da12962bd04b17f64d4e93d934dba5b"},
{file = "argon2_cffi-21.3.0-py3-none-any.whl", hash = "sha256:8c976986f2c5c0e5000919e6de187906cfd81fb1c72bf9d88c01177e77da7f80"},
]
argon2-cffi-bindings = [
{file = "argon2-cffi-bindings-21.2.0.tar.gz", hash = "sha256:bb89ceffa6c791807d1305ceb77dbfacc5aa499891d2c55661c6459651fc39e3"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-macosx_10_9_x86_64.whl", hash = "sha256:ccb949252cb2ab3a08c02024acb77cfb179492d5701c7cbdbfd776124d4d2367"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9524464572e12979364b7d600abf96181d3541da11e23ddf565a32e70bd4dc0d"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b746dba803a79238e925d9046a63aa26bf86ab2a2fe74ce6b009a1c3f5c8f2ae"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:58ed19212051f49a523abb1dbe954337dc82d947fb6e5a0da60f7c8471a8476c"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:bd46088725ef7f58b5a1ef7ca06647ebaf0eb4baff7d1d0d177c6cc8744abd86"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_i686.whl", hash = "sha256:8cd69c07dd875537a824deec19f978e0f2078fdda07fd5c42ac29668dda5f40f"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:f1152ac548bd5b8bcecfb0b0371f082037e47128653df2e8ba6e914d384f3c3e"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-win32.whl", hash = "sha256:603ca0aba86b1349b147cab91ae970c63118a0f30444d4bc80355937c950c082"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-win_amd64.whl", hash = "sha256:b2ef1c30440dbbcba7a5dc3e319408b59676e2e039e2ae11a8775ecf482b192f"},
{file = "argon2_cffi_bindings-21.2.0-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:e415e3f62c8d124ee16018e491a009937f8cf7ebf5eb430ffc5de21b900dad93"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:3e385d1c39c520c08b53d63300c3ecc28622f076f4c2b0e6d7e796e9f6502194"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2c3e3cc67fdb7d82c4718f19b4e7a87123caf8a93fde7e23cf66ac0337d3cb3f"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6a22ad9800121b71099d0fb0a65323810a15f2e292f2ba450810a7316e128ee5"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f9f8b450ed0547e3d473fdc8612083fd08dd2120d6ac8f73828df9b7d45bb351"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:93f9bf70084f97245ba10ee36575f0c3f1e7d7724d67d8e5b08e61787c320ed7"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:3b9ef65804859d335dc6b31582cad2c5166f0c3e7975f324d9ffaa34ee7e6583"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d4966ef5848d820776f5f562a7d45fdd70c2f330c961d0d745b784034bd9f48d"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:20ef543a89dee4db46a1a6e206cd015360e5a75822f76df533845c3cbaf72670"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ed2937d286e2ad0cc79a7087d3c272832865f779430e0cc2b4f3718d3159b0cb"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:5e00316dabdaea0b2dd82d141cc66889ced0cdcbfa599e8b471cf22c620c329a"},
]
asttokens = [
{file = "asttokens-2.0.7-py2.py3-none-any.whl", hash = "sha256:f5589ef8518f73dd82c15e1c19f795d8a62c133485e557c04443d4a1a730cf9f"},
{file = "asttokens-2.0.7.tar.gz", hash = "sha256:8444353e4e2a99661c8dfb85ec9c02eedded08f0006234bff7db44a06840acc2"},
]
astunparse = [
{file = "astunparse-1.6.3-py2.py3-none-any.whl", hash = "sha256:c2652417f2c8b5bb325c885ae329bdf3f86424075c4fd1a128674bc6fba4b8e8"},
{file = "astunparse-1.6.3.tar.gz", hash = "sha256:5ad93a8456f0d084c3456d059fd9a92cce667963232cbf763eac3bc5b7940872"},
]
atomicwrites = [
{file = "atomicwrites-1.4.1.tar.gz", hash = "sha256:81b2c9071a49367a7f770170e5eec8cb66567cfbbc8c73d20ce5ca4a8d71cf11"},
]
attrs = [
{file = "attrs-22.1.0-py2.py3-none-any.whl", hash = "sha256:86efa402f67bf2df34f51a335487cf46b1ec130d02b8d39fd248abfd30da551c"},
{file = "attrs-22.1.0.tar.gz", hash = "sha256:29adc2665447e5191d0e7c568fde78b21f9672d344281d0c6e1ab085429b22b6"},
]
babel = [
{file = "Babel-2.10.3-py3-none-any.whl", hash = "sha256:ff56f4892c1c4bf0d814575ea23471c230d544203c7748e8c68f0089478d48eb"},
{file = "Babel-2.10.3.tar.gz", hash = "sha256:7614553711ee97490f732126dc077f8d0ae084ebc6a96e23db1482afabdb2c51"},
]
backcall = [
{file = "backcall-0.2.0-py2.py3-none-any.whl", hash = "sha256:fbbce6a29f263178a1f7915c1940bde0ec2b2a967566fe1c65c1dfb7422bd255"},
{file = "backcall-0.2.0.tar.gz", hash = "sha256:5cbdbf27be5e7cfadb448baf0aa95508f91f2bbc6c6437cd9cd06e2a4c215e1e"},
]
"backports.zoneinfo" = [
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-macosx_10_14_x86_64.whl", hash = "sha256:da6013fd84a690242c310d77ddb8441a559e9cb3d3d59ebac9aca1a57b2e18bc"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:89a48c0d158a3cc3f654da4c2de1ceba85263fafb861b98b59040a5086259722"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:1c5742112073a563c81f786e77514969acb58649bcdf6cdf0b4ed31a348d4546"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-win32.whl", hash = "sha256:e8236383a20872c0cdf5a62b554b27538db7fa1bbec52429d8d106effbaeca08"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-win_amd64.whl", hash = "sha256:8439c030a11780786a2002261569bdf362264f605dfa4d65090b64b05c9f79a7"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:f04e857b59d9d1ccc39ce2da1021d196e47234873820cbeaad210724b1ee28ac"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:17746bd546106fa389c51dbea67c8b7c8f0d14b5526a579ca6ccf5ed72c526cf"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:5c144945a7752ca544b4b78c8c41544cdfaf9786f25fe5ffb10e838e19a27570"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-win32.whl", hash = "sha256:e55b384612d93be96506932a786bbcde5a2db7a9e6a4bb4bffe8b733f5b9036b"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-win_amd64.whl", hash = "sha256:a76b38c52400b762e48131494ba26be363491ac4f9a04c1b7e92483d169f6582"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:8961c0f32cd0336fb8e8ead11a1f8cd99ec07145ec2931122faaac1c8f7fd987"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-manylinux1_i686.whl", hash = "sha256:e81b76cace8eda1fca50e345242ba977f9be6ae3945af8d46326d776b4cf78d1"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:7b0a64cda4145548fed9efc10322770f929b944ce5cee6c0dfe0c87bf4c0c8c9"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-win32.whl", hash = "sha256:1b13e654a55cd45672cb54ed12148cd33628f672548f373963b0bff67b217328"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-win_amd64.whl", hash = "sha256:4a0f800587060bf8880f954dbef70de6c11bbe59c673c3d818921f042f9954a6"},
{file = "backports.zoneinfo-0.2.1.tar.gz", hash = "sha256:fadbfe37f74051d024037f223b8e001611eac868b5c5b06144ef4d8b799862f2"},
]
beautifulsoup4 = [
{file = "beautifulsoup4-4.11.1-py3-none-any.whl", hash = "sha256:58d5c3d29f5a36ffeb94f02f0d786cd53014cf9b3b3951d42e0080d8a9498d30"},
{file = "beautifulsoup4-4.11.1.tar.gz", hash = "sha256:ad9aa55b65ef2808eb405f46cf74df7fcb7044d5cbc26487f96eb2ef2e436693"},
]
black = [
{file = "black-22.6.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:f586c26118bc6e714ec58c09df0157fe2d9ee195c764f630eb0d8e7ccce72e69"},
{file = "black-22.6.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:b270a168d69edb8b7ed32c193ef10fd27844e5c60852039599f9184460ce0807"},
{file = "black-22.6.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:6797f58943fceb1c461fb572edbe828d811e719c24e03375fd25170ada53825e"},
{file = "black-22.6.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c85928b9d5f83b23cee7d0efcb310172412fbf7cb9d9ce963bd67fd141781def"},
{file = "black-22.6.0-cp310-cp310-win_amd64.whl", hash = "sha256:f6fe02afde060bbeef044af7996f335fbe90b039ccf3f5eb8f16df8b20f77666"},
{file = "black-22.6.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:cfaf3895a9634e882bf9d2363fed5af8888802d670f58b279b0bece00e9a872d"},
{file = "black-22.6.0-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:94783f636bca89f11eb5d50437e8e17fbc6a929a628d82304c80fa9cd945f256"},
{file = "black-22.6.0-cp36-cp36m-win_amd64.whl", hash = "sha256:2ea29072e954a4d55a2ff58971b83365eba5d3d357352a07a7a4df0d95f51c78"},
{file = "black-22.6.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:e439798f819d49ba1c0bd9664427a05aab79bfba777a6db94fd4e56fae0cb849"},
{file = "black-22.6.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:187d96c5e713f441a5829e77120c269b6514418f4513a390b0499b0987f2ff1c"},
{file = "black-22.6.0-cp37-cp37m-win_amd64.whl", hash = "sha256:074458dc2f6e0d3dab7928d4417bb6957bb834434516f21514138437accdbe90"},
{file = "black-22.6.0-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:a218d7e5856f91d20f04e931b6f16d15356db1c846ee55f01bac297a705ca24f"},
{file = "black-22.6.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:568ac3c465b1c8b34b61cd7a4e349e93f91abf0f9371eda1cf87194663ab684e"},
{file = "black-22.6.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:6c1734ab264b8f7929cef8ae5f900b85d579e6cbfde09d7387da8f04771b51c6"},
{file = "black-22.6.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c9a3ac16efe9ec7d7381ddebcc022119794872abce99475345c5a61aa18c45ad"},
{file = "black-22.6.0-cp38-cp38-win_amd64.whl", hash = "sha256:b9fd45787ba8aa3f5e0a0a98920c1012c884622c6c920dbe98dbd05bc7c70fbf"},
{file = "black-22.6.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:7ba9be198ecca5031cd78745780d65a3f75a34b2ff9be5837045dce55db83d1c"},
{file = "black-22.6.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:a3db5b6409b96d9bd543323b23ef32a1a2b06416d525d27e0f67e74f1446c8f2"},
{file = "black-22.6.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:560558527e52ce8afba936fcce93a7411ab40c7d5fe8c2463e279e843c0328ee"},
{file = "black-22.6.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b154e6bbde1e79ea3260c4b40c0b7b3109ffcdf7bc4ebf8859169a6af72cd70b"},
{file = "black-22.6.0-cp39-cp39-win_amd64.whl", hash = "sha256:4af5bc0e1f96be5ae9bd7aaec219c901a94d6caa2484c21983d043371c733fc4"},
{file = "black-22.6.0-py3-none-any.whl", hash = "sha256:ac609cf8ef5e7115ddd07d85d988d074ed00e10fbc3445aee393e70164a2219c"},
{file = "black-22.6.0.tar.gz", hash = "sha256:6c6d39e28aed379aec40da1c65434c77d75e65bb59a1e1c283de545fb4e7c6c9"},
]
bleach = [
{file = "bleach-5.0.1-py3-none-any.whl", hash = "sha256:085f7f33c15bd408dd9b17a4ad77c577db66d76203e5984b1bd59baeee948b2a"},
{file = "bleach-5.0.1.tar.gz", hash = "sha256:0d03255c47eb9bd2f26aa9bb7f2107732e7e8fe195ca2f64709fcf3b0a4a085c"},
]
cachetools = [
{file = "cachetools-5.2.0-py3-none-any.whl", hash = "sha256:f9f17d2aec496a9aa6b76f53e3b614c965223c061982d434d160f930c698a9db"},
{file = "cachetools-5.2.0.tar.gz", hash = "sha256:6a94c6402995a99c3970cc7e4884bb60b4a8639938157eeed436098bf9831757"},
]
causalml = [
{file = "causalml-0.12.3.tar.gz", hash = "sha256:4d85afa0590e4c60818311c2be9a5be64b8c6964b134c77654b738df551454fa"},
]
certifi = [
{file = "certifi-2022.6.15-py3-none-any.whl", hash = "sha256:fe86415d55e84719d75f8b69414f6438ac3547d2078ab91b67e779ef69378412"},
{file = "certifi-2022.6.15.tar.gz", hash = "sha256:84c85a9078b11105f04f3036a9482ae10e4621616db313fe045dd24743a0820d"},
]
cffi = [
{file = "cffi-1.15.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:a66d3508133af6e8548451b25058d5812812ec3798c886bf38ed24a98216fab2"},
{file = "cffi-1.15.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:470c103ae716238bbe698d67ad020e1db9d9dba34fa5a899b5e21577e6d52ed2"},
{file = "cffi-1.15.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:9ad5db27f9cabae298d151c85cf2bad1d359a1b9c686a275df03385758e2f914"},
{file = "cffi-1.15.1-cp27-cp27m-win32.whl", hash = "sha256:b3bbeb01c2b273cca1e1e0c5df57f12dce9a4dd331b4fa1635b8bec26350bde3"},
{file = "cffi-1.15.1-cp27-cp27m-win_amd64.whl", hash = "sha256:e00b098126fd45523dd056d2efba6c5a63b71ffe9f2bbe1a4fe1716e1d0c331e"},
{file = "cffi-1.15.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:d61f4695e6c866a23a21acab0509af1cdfd2c013cf256bbf5b6b5e2695827162"},
{file = "cffi-1.15.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:ed9cb427ba5504c1dc15ede7d516b84757c3e3d7868ccc85121d9310d27eed0b"},
{file = "cffi-1.15.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:39d39875251ca8f612b6f33e6b1195af86d1b3e60086068be9cc053aa4376e21"},
{file = "cffi-1.15.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:285d29981935eb726a4399badae8f0ffdff4f5050eaa6d0cfc3f64b857b77185"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3eb6971dcff08619f8d91607cfc726518b6fa2a9eba42856be181c6d0d9515fd"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:21157295583fe8943475029ed5abdcf71eb3911894724e360acff1d61c1d54bc"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5635bd9cb9731e6d4a1132a498dd34f764034a8ce60cef4f5319c0541159392f"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2012c72d854c2d03e45d06ae57f40d78e5770d252f195b93f581acf3ba44496e"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dd86c085fae2efd48ac91dd7ccffcfc0571387fe1193d33b6394db7ef31fe2a4"},
{file = "cffi-1.15.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:fa6693661a4c91757f4412306191b6dc88c1703f780c8234035eac011922bc01"},
{file = "cffi-1.15.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:59c0b02d0a6c384d453fece7566d1c7e6b7bae4fc5874ef2ef46d56776d61c9e"},
{file = "cffi-1.15.1-cp310-cp310-win32.whl", hash = "sha256:cba9d6b9a7d64d4bd46167096fc9d2f835e25d7e4c121fb2ddfc6528fb0413b2"},
{file = "cffi-1.15.1-cp310-cp310-win_amd64.whl", hash = "sha256:ce4bcc037df4fc5e3d184794f27bdaab018943698f4ca31630bc7f84a7b69c6d"},
{file = "cffi-1.15.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3d08afd128ddaa624a48cf2b859afef385b720bb4b43df214f85616922e6a5ac"},
{file = "cffi-1.15.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:3799aecf2e17cf585d977b780ce79ff0dc9b78d799fc694221ce814c2c19db83"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a591fe9e525846e4d154205572a029f653ada1a78b93697f3b5a8f1f2bc055b9"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3548db281cd7d2561c9ad9984681c95f7b0e38881201e157833a2342c30d5e8c"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:91fc98adde3d7881af9b59ed0294046f3806221863722ba7d8d120c575314325"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:94411f22c3985acaec6f83c6df553f2dbe17b698cc7f8ae751ff2237d96b9e3c"},
{file = "cffi-1.15.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:03425bdae262c76aad70202debd780501fabeaca237cdfddc008987c0e0f59ef"},
{file = "cffi-1.15.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:cc4d65aeeaa04136a12677d3dd0b1c0c94dc43abac5860ab33cceb42b801c1e8"},
{file = "cffi-1.15.1-cp311-cp311-win32.whl", hash = "sha256:a0f100c8912c114ff53e1202d0078b425bee3649ae34d7b070e9697f93c5d52d"},
{file = "cffi-1.15.1-cp311-cp311-win_amd64.whl", hash = "sha256:04ed324bda3cda42b9b695d51bb7d54b680b9719cfab04227cdd1e04e5de3104"},
{file = "cffi-1.15.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:50a74364d85fd319352182ef59c5c790484a336f6db772c1a9231f1c3ed0cbd7"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e263d77ee3dd201c3a142934a086a4450861778baaeeb45db4591ef65550b0a6"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:cec7d9412a9102bdc577382c3929b337320c4c4c4849f2c5cdd14d7368c5562d"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4289fc34b2f5316fbb762d75362931e351941fa95fa18789191b33fc4cf9504a"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:173379135477dc8cac4bc58f45db08ab45d228b3363adb7af79436135d028405"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:6975a3fac6bc83c4a65c9f9fcab9e47019a11d3d2cf7f3c0d03431bf145a941e"},
{file = "cffi-1.15.1-cp36-cp36m-win32.whl", hash = "sha256:2470043b93ff09bf8fb1d46d1cb756ce6132c54826661a32d4e4d132e1977adf"},
{file = "cffi-1.15.1-cp36-cp36m-win_amd64.whl", hash = "sha256:30d78fbc8ebf9c92c9b7823ee18eb92f2e6ef79b45ac84db507f52fbe3ec4497"},
{file = "cffi-1.15.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:198caafb44239b60e252492445da556afafc7d1e3ab7a1fb3f0584ef6d742375"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5ef34d190326c3b1f822a5b7a45f6c4535e2f47ed06fec77d3d799c450b2651e"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8102eaf27e1e448db915d08afa8b41d6c7ca7a04b7d73af6514df10a3e74bd82"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5df2768244d19ab7f60546d0c7c63ce1581f7af8b5de3eb3004b9b6fc8a9f84b"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a8c4917bd7ad33e8eb21e9a5bbba979b49d9a97acb3a803092cbc1133e20343c"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0e2642fe3142e4cc4af0799748233ad6da94c62a8bec3a6648bf8ee68b1c7426"},
{file = "cffi-1.15.1-cp37-cp37m-win32.whl", hash = "sha256:e229a521186c75c8ad9490854fd8bbdd9a0c9aa3a524326b55be83b54d4e0ad9"},
{file = "cffi-1.15.1-cp37-cp37m-win_amd64.whl", hash = "sha256:a0b71b1b8fbf2b96e41c4d990244165e2c9be83d54962a9a1d118fd8657d2045"},
{file = "cffi-1.15.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:320dab6e7cb2eacdf0e658569d2575c4dad258c0fcc794f46215e1e39f90f2c3"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1e74c6b51a9ed6589199c787bf5f9875612ca4a8a0785fb2d4a84429badaf22a"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a5c84c68147988265e60416b57fc83425a78058853509c1b0629c180094904a5"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3b926aa83d1edb5aa5b427b4053dc420ec295a08e40911296b9eb1b6170f6cca"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:87c450779d0914f2861b8526e035c5e6da0a3199d8f1add1a665e1cbc6fc6d02"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4f2c9f67e9821cad2e5f480bc8d83b8742896f1242dba247911072d4fa94c192"},
{file = "cffi-1.15.1-cp38-cp38-win32.whl", hash = "sha256:8b7ee99e510d7b66cdb6c593f21c043c248537a32e0bedf02e01e9553a172314"},
{file = "cffi-1.15.1-cp38-cp38-win_amd64.whl", hash = "sha256:00a9ed42e88df81ffae7a8ab6d9356b371399b91dbdf0c3cb1e84c03a13aceb5"},
{file = "cffi-1.15.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:54a2db7b78338edd780e7ef7f9f6c442500fb0d41a5a4ea24fff1c929d5af585"},
{file = "cffi-1.15.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:fcd131dd944808b5bdb38e6f5b53013c5aa4f334c5cad0c72742f6eba4b73db0"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7473e861101c9e72452f9bf8acb984947aa1661a7704553a9f6e4baa5ba64415"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6c9a799e985904922a4d207a94eae35c78ebae90e128f0c4e521ce339396be9d"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3bcde07039e586f91b45c88f8583ea7cf7a0770df3a1649627bf598332cb6984"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:33ab79603146aace82c2427da5ca6e58f2b3f2fb5da893ceac0c42218a40be35"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d598b938678ebf3c67377cdd45e09d431369c3b1a5b331058c338e201f12b27"},
{file = "cffi-1.15.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:db0fbb9c62743ce59a9ff687eb5f4afbe77e5e8403d6697f7446e5f609976f76"},
{file = "cffi-1.15.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:98d85c6a2bef81588d9227dde12db8a7f47f639f4a17c9ae08e773aa9c697bf3"},
{file = "cffi-1.15.1-cp39-cp39-win32.whl", hash = "sha256:40f4774f5a9d4f5e344f31a32b5096977b5d48560c5592e2f3d2c4374bd543ee"},
{file = "cffi-1.15.1-cp39-cp39-win_amd64.whl", hash = "sha256:70df4e3b545a17496c9b3f41f5115e69a4f2e77e94e1d2a8e1070bc0c38c8a3c"},
{file = "cffi-1.15.1.tar.gz", hash = "sha256:d400bfb9a37b1351253cb402671cea7e89bdecc294e8016a707f6d1d8ac934f9"},
]
charset-normalizer = [
{file = "charset-normalizer-2.1.0.tar.gz", hash = "sha256:575e708016ff3a5e3681541cb9d79312c416835686d054a23accb873b254f413"},
{file = "charset_normalizer-2.1.0-py3-none-any.whl", hash = "sha256:5189b6f22b01957427f35b6a08d9a0bc45b46d3788ef5a92e978433c7a35f8a5"},
]
click = [
{file = "click-8.1.3-py3-none-any.whl", hash = "sha256:bb4d8133cb15a609f44e8213d9b391b0809795062913b383c62be0ee95b1db48"},
{file = "click-8.1.3.tar.gz", hash = "sha256:7682dc8afb30297001674575ea00d1814d808d6a36af415a82bd481d37ba7b8e"},
]
cloudpickle = [
{file = "cloudpickle-2.1.0-py3-none-any.whl", hash = "sha256:b5c434f75c34624eedad3a14f2be5ac3b5384774d5b0e3caf905c21479e6c4b1"},
{file = "cloudpickle-2.1.0.tar.gz", hash = "sha256:bb233e876a58491d9590a676f93c7a5473a08f747d5ab9df7f9ce564b3e7938e"},
]
colorama = [
{file = "colorama-0.4.5-py2.py3-none-any.whl", hash = "sha256:854bf444933e37f5824ae7bfc1e98d5bce2ebe4160d46b5edf346a89358e99da"},
{file = "colorama-0.4.5.tar.gz", hash = "sha256:e6c6b4334fc50988a639d9b98aa429a0b57da6e17b9a44f0451f930b6967b7a4"},
]
commonmark = [
{file = "commonmark-0.9.1-py2.py3-none-any.whl", hash = "sha256:da2f38c92590f83de410ba1a3cbceafbc74fee9def35f9251ba9a971d6d66fd9"},
{file = "commonmark-0.9.1.tar.gz", hash = "sha256:452f9dc859be7f06631ddcb328b6919c67984aca654e5fefb3914d54691aed60"},
]
cryptography = [
{file = "cryptography-37.0.4-cp36-abi3-macosx_10_10_universal2.whl", hash = "sha256:549153378611c0cca1042f20fd9c5030d37a72f634c9326e225c9f666d472884"},
{file = "cryptography-37.0.4-cp36-abi3-macosx_10_10_x86_64.whl", hash = "sha256:a958c52505c8adf0d3822703078580d2c0456dd1d27fabfb6f76fe63d2971cd6"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:f721d1885ecae9078c3f6bbe8a88bc0786b6e749bf32ccec1ef2b18929a05046"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:3d41b965b3380f10e4611dbae366f6dc3cefc7c9ac4e8842a806b9672ae9add5"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:80f49023dd13ba35f7c34072fa17f604d2f19bf0989f292cedf7ab5770b87a0b"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f2dcb0b3b63afb6df7fd94ec6fbddac81b5492513f7b0436210d390c14d46ee8"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_24_x86_64.whl", hash = "sha256:b7f8dd0d4c1f21759695c05a5ec8536c12f31611541f8904083f3dc582604280"},
{file = "cryptography-37.0.4-cp36-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:30788e070800fec9bbcf9faa71ea6d8068f5136f60029759fd8c3efec3c9dcb3"},
{file = "cryptography-37.0.4-cp36-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:190f82f3e87033821828f60787cfa42bff98404483577b591429ed99bed39d59"},
{file = "cryptography-37.0.4-cp36-abi3-win32.whl", hash = "sha256:b62439d7cd1222f3da897e9a9fe53bbf5c104fff4d60893ad1355d4c14a24157"},
{file = "cryptography-37.0.4-cp36-abi3-win_amd64.whl", hash = "sha256:f7a6de3e98771e183645181b3627e2563dcde3ce94a9e42a3f427d2255190327"},
{file = "cryptography-37.0.4-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6bc95ed67b6741b2607298f9ea4932ff157e570ef456ef7ff0ef4884a134cc4b"},
{file = "cryptography-37.0.4-pp37-pypy37_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:f8c0a6e9e1dd3eb0414ba320f85da6b0dcbd543126e30fcc546e7372a7fbf3b9"},
{file = "cryptography-37.0.4-pp38-pypy38_pp73-macosx_10_10_x86_64.whl", hash = "sha256:e007f052ed10cc316df59bc90fbb7ff7950d7e2919c9757fd42a2b8ecf8a5f67"},
{file = "cryptography-37.0.4-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7bc997818309f56c0038a33b8da5c0bfbb3f1f067f315f9abd6fc07ad359398d"},
{file = "cryptography-37.0.4-pp38-pypy38_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:d204833f3c8a33bbe11eda63a54b1aad7aa7456ed769a982f21ec599ba5fa282"},
{file = "cryptography-37.0.4-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:75976c217f10d48a8b5a8de3d70c454c249e4b91851f6838a4e48b8f41eb71aa"},
{file = "cryptography-37.0.4-pp39-pypy39_pp73-macosx_10_10_x86_64.whl", hash = "sha256:7099a8d55cd49b737ffc99c17de504f2257e3787e02abe6d1a6d136574873441"},
{file = "cryptography-37.0.4-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2be53f9f5505673eeda5f2736bea736c40f051a739bfae2f92d18aed1eb54596"},
{file = "cryptography-37.0.4-pp39-pypy39_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:91ce48d35f4e3d3f1d83e29ef4a9267246e6a3be51864a5b7d2247d5086fa99a"},
{file = "cryptography-37.0.4-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:4c590ec31550a724ef893c50f9a97a0c14e9c851c85621c5650d699a7b88f7ab"},
{file = "cryptography-37.0.4.tar.gz", hash = "sha256:63f9c17c0e2474ccbebc9302ce2f07b55b3b3fcb211ded18a42d5764f5c10a82"},
]
cycler = [
{file = "cycler-0.11.0-py3-none-any.whl", hash = "sha256:3a27e95f763a428a739d2add979fa7494c912a32c17c4c38c4d5f082cad165a3"},
{file = "cycler-0.11.0.tar.gz", hash = "sha256:9c87405839a19696e837b3b818fed3f5f69f16f1eec1a1ad77e043dcea9c772f"},
]
cython = [
{file = "Cython-0.29.32-cp27-cp27m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:39afb4679b8c6bf7ccb15b24025568f4f9b4d7f9bf3cbd981021f542acecd75b"},
{file = "Cython-0.29.32-cp27-cp27m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:dbee03b8d42dca924e6aa057b836a064c769ddfd2a4c2919e65da2c8a362d528"},
{file = "Cython-0.29.32-cp27-cp27mu-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:5ba622326f2862f9c1f99ca8d47ade49871241920a352c917e16861e25b0e5c3"},
{file = "Cython-0.29.32-cp27-cp27mu-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:e6ffa08aa1c111a1ebcbd1cf4afaaec120bc0bbdec3f2545f8bb7d3e8e77a1cd"},
{file = "Cython-0.29.32-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:97335b2cd4acebf30d14e2855d882de83ad838491a09be2011745579ac975833"},
{file = "Cython-0.29.32-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:06be83490c906b6429b4389e13487a26254ccaad2eef6f3d4ee21d8d3a4aaa2b"},
{file = "Cython-0.29.32-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:eefd2b9a5f38ded8d859fe96cc28d7d06e098dc3f677e7adbafda4dcdd4a461c"},
{file = "Cython-0.29.32-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:5514f3b4122cb22317122a48e175a7194e18e1803ca555c4c959d7dfe68eaf98"},
{file = "Cython-0.29.32-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:656dc5ff1d269de4d11ee8542f2ffd15ab466c447c1f10e5b8aba6f561967276"},
{file = "Cython-0.29.32-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:cdf10af3e2e3279dc09fdc5f95deaa624850a53913f30350ceee824dc14fc1a6"},
{file = "Cython-0.29.32-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:3875c2b2ea752816a4d7ae59d45bb546e7c4c79093c83e3ba7f4d9051dd02928"},
{file = "Cython-0.29.32-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:79e3bab19cf1b021b613567c22eb18b76c0c547b9bc3903881a07bfd9e7e64cf"},
{file = "Cython-0.29.32-cp35-cp35m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b0595aee62809ba353cebc5c7978e0e443760c3e882e2c7672c73ffe46383673"},
{file = "Cython-0.29.32-cp35-cp35m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:0ea8267fc373a2c5064ad77d8ff7bf0ea8b88f7407098ff51829381f8ec1d5d9"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:c8e8025f496b5acb6ba95da2fb3e9dacffc97d9a92711aacfdd42f9c5927e094"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:afbce249133a830f121b917f8c9404a44f2950e0e4f5d1e68f043da4c2e9f457"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:513e9707407608ac0d306c8b09d55a28be23ea4152cbd356ceaec0f32ef08d65"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e83228e0994497900af954adcac27f64c9a57cd70a9ec768ab0cb2c01fd15cf1"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:ea1dcc07bfb37367b639415333cfbfe4a93c3be340edf1db10964bc27d42ed64"},
{file = "Cython-0.29.32-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:8669cadeb26d9a58a5e6b8ce34d2c8986cc3b5c0bfa77eda6ceb471596cb2ec3"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:ed087eeb88a8cf96c60fb76c5c3b5fb87188adee5e179f89ec9ad9a43c0c54b3"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:3f85eb2343d20d91a4ea9cf14e5748092b376a64b7e07fc224e85b2753e9070b"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:63b79d9e1f7c4d1f498ab1322156a0d7dc1b6004bf981a8abda3f66800e140cd"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e1958e0227a4a6a2c06fd6e35b7469de50adf174102454db397cec6e1403cce3"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:856d2fec682b3f31583719cb6925c6cdbb9aa30f03122bcc45c65c8b6f515754"},
{file = "Cython-0.29.32-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:479690d2892ca56d34812fe6ab8f58e4b2e0129140f3d94518f15993c40553da"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:67fdd2f652f8d4840042e2d2d91e15636ba2bcdcd92e7e5ffbc68e6ef633a754"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:4a4b03ab483271f69221c3210f7cde0dcc456749ecf8243b95bc7a701e5677e0"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:40eff7aa26e91cf108fd740ffd4daf49f39b2fdffadabc7292b4b7dc5df879f0"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:0bbc27abdf6aebfa1bce34cd92bd403070356f28b0ecb3198ff8a182791d58b9"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:cddc47ec746a08603037731f5d10aebf770ced08666100bd2cdcaf06a85d4d1b"},
{file = "Cython-0.29.32-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:eca3065a1279456e81c615211d025ea11bfe4e19f0c5650b859868ca04b3fcbd"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:d968ffc403d92addf20b68924d95428d523436adfd25cf505d427ed7ba3bee8b"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:f3fd44cc362eee8ae569025f070d56208908916794b6ab21e139cea56470a2b3"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:b6da3063c5c476f5311fd76854abae6c315f1513ef7d7904deed2e774623bbb9"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:061e25151c38f2361bc790d3bcf7f9d9828a0b6a4d5afa56fbed3bd33fb2373a"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:f9944013588a3543fca795fffb0a070a31a243aa4f2d212f118aa95e69485831"},
{file = "Cython-0.29.32-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:07d173d3289415bb496e72cb0ddd609961be08fe2968c39094d5712ffb78672b"},
{file = "Cython-0.29.32-py2.py3-none-any.whl", hash = "sha256:eeb475eb6f0ccf6c039035eb4f0f928eb53ead88777e0a760eccb140ad90930b"},
{file = "Cython-0.29.32.tar.gz", hash = "sha256:8733cf4758b79304f2a4e39ebfac5e92341bce47bcceb26c1254398b2f8c1af7"},
]
debugpy = [
{file = "debugpy-1.6.2-cp310-cp310-macosx_10_15_universal2.whl", hash = "sha256:77a47d596ce8c69673d5f0c9876a80cb5a6cbc964f3b31b2d44683c7c01b6634"},
{file = "debugpy-1.6.2-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:726e5cc0ed5bc63e821dc371d88ddae5cba85e2ad207bf5fefc808b29421cb4c"},
{file = "debugpy-1.6.2-cp310-cp310-win32.whl", hash = "sha256:9809bd1cdc0026fab711e280e0cb5d8f89ae5f4f74701aba5bda9a20a6afb567"},
{file = "debugpy-1.6.2-cp310-cp310-win_amd64.whl", hash = "sha256:40741d4bbf59baca1e97a5123514afcc036423caae5f24db23a865c0b4167c34"},
{file = "debugpy-1.6.2-cp37-cp37m-macosx_10_15_x86_64.whl", hash = "sha256:67749e972213c395647a8798cc8377646e581e1fe97d0b1b7607e6b112ae4511"},
{file = "debugpy-1.6.2-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:4e3c43d650a1e5fa7110af380fb59061bcba1e7348c00237e7473c55ae499b96"},
{file = "debugpy-1.6.2-cp37-cp37m-win32.whl", hash = "sha256:9e572c2ac3dd93f3f1a038a9226e7cc0d7326b8d345c9b9ce6fbf9cb9822e314"},
{file = "debugpy-1.6.2-cp37-cp37m-win_amd64.whl", hash = "sha256:ac5d9e625d291a041ff3eaf65bdb816eb79a5b204cf9f1ffaf9617c0eadf96fa"},
{file = "debugpy-1.6.2-cp38-cp38-macosx_10_15_x86_64.whl", hash = "sha256:9f72435bc9a2026a35a41221beff853dd4b6b17567ba9b9d349ee9512eb71ce6"},
{file = "debugpy-1.6.2-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:aaf579de5ecd02634d601d7cf5b6baae5f5bab89a55ef78e0904d766ef477729"},
{file = "debugpy-1.6.2-cp38-cp38-win32.whl", hash = "sha256:0984086a670f46c75b5046b39a55f34e4120bee78928ac4c3c7f1c7b8be1d8be"},
{file = "debugpy-1.6.2-cp38-cp38-win_amd64.whl", hash = "sha256:19337bb8ff87da2535ac00ea3877ceaf40ff3c681421d1a96ab4d67dad031a16"},
{file = "debugpy-1.6.2-cp39-cp39-macosx_10_15_x86_64.whl", hash = "sha256:163f282287ce68b00a51e9dcd7ad461ef288d740dcb3a2f22c01c62f31b62696"},
{file = "debugpy-1.6.2-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:4909bb2f8e5c8fe33d6ec5b7764100b494289252ebe94ec7838b30467435f1cb"},
{file = "debugpy-1.6.2-cp39-cp39-win32.whl", hash = "sha256:3b4657d3cd20aa454b62a70040524d3e785efc9a8488d16cd0e6caeb7b2a3f07"},
{file = "debugpy-1.6.2-cp39-cp39-win_amd64.whl", hash = "sha256:79d9ac34542b830a7954ab111ad8a4c790f1f836b895d03223aea4216b739208"},
{file = "debugpy-1.6.2-py2.py3-none-any.whl", hash = "sha256:0bfdcf261f97a603d7ef7ab6972cdf7136201fde93d19bf3f917d0d2e43a5694"},
{file = "debugpy-1.6.2.zip", hash = "sha256:e6047272e97a11aa6898138c1c88c8cf61838deeb2a4f0a74e63bb567f8dafc6"},
]
decorator = [
{file = "decorator-5.1.1-py3-none-any.whl", hash = "sha256:b8c3f85900b9dc423225913c5aace94729fe1fa9763b38939a95226f02d37186"},
{file = "decorator-5.1.1.tar.gz", hash = "sha256:637996211036b6385ef91435e4fae22989472f9d571faba8927ba8253acbc330"},
]
defusedxml = [
{file = "defusedxml-0.7.1-py2.py3-none-any.whl", hash = "sha256:a352e7e428770286cc899e2542b6cdaedb2b4953ff269a210103ec58f6198a61"},
{file = "defusedxml-0.7.1.tar.gz", hash = "sha256:1bb3032db185915b62d7c6209c5a8792be6a32ab2fedacc84e01b52c51aa3e69"},
]
dill = [
{file = "dill-0.3.5.1-py2.py3-none-any.whl", hash = "sha256:33501d03270bbe410c72639b350e941882a8b0fd55357580fbc873fba0c59302"},
{file = "dill-0.3.5.1.tar.gz", hash = "sha256:d75e41f3eff1eee599d738e76ba8f4ad98ea229db8b085318aa2b3333a208c86"},
]
docutils = [
{file = "docutils-0.17.1-py2.py3-none-any.whl", hash = "sha256:cf316c8370a737a022b72b56874f6602acf974a37a9fba42ec2876387549fc61"},
{file = "docutils-0.17.1.tar.gz", hash = "sha256:686577d2e4c32380bb50cbb22f575ed742d58168cee37e99117a854bcd88f125"},
]
econml = [
{file = "econml-0.13.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:53f85030480858a5d325e5b7ab638775faad281a16fba639b337aeaa49629a95"},
{file = "econml-0.13.1-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8567287c7349ba671d94d8a37c271095a9109c90a1c6e94fa03fbcda0c0d3554"},
{file = "econml-0.13.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:37816ffa16154678dce09a9a1d40b24ac85d689d496fbe122a9274645516821f"},
{file = "econml-0.13.1-cp36-cp36m-win32.whl", hash = "sha256:075ad0e5e5db7ffc504263f0c8853fff6cd95973f9cfb01ef674aaca8cdcba68"},
{file = "econml-0.13.1-cp36-cp36m-win_amd64.whl", hash = "sha256:022682d1d10e0fc4b33eed52c5149397cf49a2325c03482dae1eff4494767870"},
{file = "econml-0.13.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:dcaf25cb1fd515a4ab26c1820240604a0d01f7fc3e40cbf325077c0351252292"},
{file = "econml-0.13.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f2f173b95d1c92d69f2fbe69f23de436deae3cb6462e34ad84bb7746bdcd90e0"},
{file = "econml-0.13.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:eb1d9f313c909e5cf3da7861dbc62dccf2be80128a2fb81ce4966dc01bf41946"},
{file = "econml-0.13.1-cp37-cp37m-win32.whl", hash = "sha256:3d632e65e70f14364acadfc6882a8cf0ecc2227cf5a8e6e007aee5961bfff7a7"},
{file = "econml-0.13.1-cp37-cp37m-win_amd64.whl", hash = "sha256:e154b07c3b34aa2ffee35caa6ab79f5a57f762ee4ce2d496b294391c4304c245"},
{file = "econml-0.13.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:16d29c53eb6591b3eabb4603d7e72ab25f4bd4274b0fb78916327742bae81081"},
{file = "econml-0.13.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a4abaecd178bbfd3db1ed0820c14b1c4cb5053bdc3382c23a2d194d059f29412"},
{file = "econml-0.13.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0cd016c2d8cd2e77440efbc27f49d3a42aa3e1795bdf7db80909a5b4c65497a7"},
{file = "econml-0.13.1-cp38-cp38-win32.whl", hash = "sha256:83b3d59a03be978d35f9f82d92de2d62773877298f414e72ab435e4dbb5d939a"},
{file = "econml-0.13.1-cp38-cp38-win_amd64.whl", hash = "sha256:03d7a1db756c3ec9a3913f18575401660d433bf415af8107c1a160d859e216bd"},
{file = "econml-0.13.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:ac367fa415d94496b643d003fffc5aa079eebbea566020d88f85fcae23b0234f"},
{file = "econml-0.13.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f3aa8d8cc8dadbce7dc6fba4d8d17cc46cd6cdd2da8ade7c9f0ebfab491ee9dd"},
{file = "econml-0.13.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b01ba564050e5973ba309f0127289a1cf06274d2f294df80245efb95c55d620e"},
{file = "econml-0.13.1-cp39-cp39-win32.whl", hash = "sha256:cb0cb22ecbfbdd75edfab1a8539173b69a322a270c8c53e574fd50ec68784b0f"},
{file = "econml-0.13.1-cp39-cp39-win_amd64.whl", hash = "sha256:38a09d3bfde8c450212b18a4636af2a64685d1a0f8d76c8cfe0830437a289eb6"},
{file = "econml-0.13.1.tar.gz", hash = "sha256:9060e54f46657a62c67e26a6755feb0531106b24e7444fa4c86b8139c89cf9b9"},
]
entrypoints = [
{file = "entrypoints-0.4-py3-none-any.whl", hash = "sha256:f174b5ff827504fd3cd97cc3f8649f3693f51538c7e4bdf3ef002c8429d42f9f"},
{file = "entrypoints-0.4.tar.gz", hash = "sha256:b706eddaa9218a19ebcd67b56818f05bb27589b1ca9e8d797b74affad4ccacd4"},
]
executing = [
{file = "executing-0.9.1-py2.py3-none-any.whl", hash = "sha256:4ce4d6082d99361c0231fc31ac1a0f56979363cc6819de0b1410784f99e49105"},
{file = "executing-0.9.1.tar.gz", hash = "sha256:ea278e2cf90cbbacd24f1080dd1f0ac25b71b2e21f50ab439b7ba45dd3195587"},
]
fastjsonschema = [
{file = "fastjsonschema-2.16.1-py3-none-any.whl", hash = "sha256:2f7158c4de792555753d6c2277d6a2af2d406dfd97aeca21d17173561ede4fe6"},
{file = "fastjsonschema-2.16.1.tar.gz", hash = "sha256:d6fa3ffbe719768d70e298b9fb847484e2bdfdb7241ed052b8d57a9294a8c334"},
]
flake8 = [
{file = "flake8-4.0.1-py2.py3-none-any.whl", hash = "sha256:479b1304f72536a55948cb40a32dce8bb0ffe3501e26eaf292c7e60eb5e0428d"},
{file = "flake8-4.0.1.tar.gz", hash = "sha256:806e034dda44114815e23c16ef92f95c91e4c71100ff52813adf7132a6ad870d"},
]
flaky = [
{file = "flaky-3.7.0-py2.py3-none-any.whl", hash = "sha256:d6eda73cab5ae7364504b7c44670f70abed9e75f77dd116352f662817592ec9c"},
{file = "flaky-3.7.0.tar.gz", hash = "sha256:3ad100780721a1911f57a165809b7ea265a7863305acb66708220820caf8aa0d"},
]
flatbuffers = [
{file = "flatbuffers-1.12-py2.py3-none-any.whl", hash = "sha256:9e9ef47fa92625c4721036e7c4124182668dc6021d9e7c73704edd395648deb9"},
{file = "flatbuffers-1.12.tar.gz", hash = "sha256:63bb9a722d5e373701913e226135b28a6f6ac200d5cc7b4d919fa38d73b44610"},
]
fonttools = [
{file = "fonttools-4.34.4-py3-none-any.whl", hash = "sha256:d73f25b283cd8033367451122aa868a23de0734757a01984e4b30b18b9050c72"},
{file = "fonttools-4.34.4.zip", hash = "sha256:9a1c52488045cd6c6491fd07711a380f932466e317cb8e016fc4e99dc7eac2f0"},
]
future = [
{file = "future-0.18.2.tar.gz", hash = "sha256:b1bead90b70cf6ec3f0710ae53a525360fa360d306a86583adc6bf83a4db537d"},
]
gast = [
{file = "gast-0.4.0-py3-none-any.whl", hash = "sha256:b7adcdd5adbebf1adf17378da5ba3f543684dbec47b1cda1f3997e573cd542c4"},
{file = "gast-0.4.0.tar.gz", hash = "sha256:40feb7b8b8434785585ab224d1568b857edb18297e5a3047f1ba012bc83b42c1"},
]
google-auth = [
{file = "google-auth-2.10.0.tar.gz", hash = "sha256:7904dbd44b745c7323fef29565adee2fe7ff48473e2d94443aced40b0404a395"},
{file = "google_auth-2.10.0-py2.py3-none-any.whl", hash = "sha256:1deba4a54f95ef67b4139eaf5c20eaa7047215eec9f6a2344599b8596db8863b"},
]
google-auth-oauthlib = [
{file = "google-auth-oauthlib-0.4.6.tar.gz", hash = "sha256:a90a072f6993f2c327067bf65270046384cda5a8ecb20b94ea9a687f1f233a7a"},
{file = "google_auth_oauthlib-0.4.6-py2.py3-none-any.whl", hash = "sha256:3f2a6e802eebbb6fb736a370fbf3b055edcb6b52878bf2f26330b5e041316c73"},
]
google-pasta = [
{file = "google-pasta-0.2.0.tar.gz", hash = "sha256:c9f2c8dfc8f96d0d5808299920721be30c9eec37f2389f28904f454565c8a16e"},
{file = "google_pasta-0.2.0-py2-none-any.whl", hash = "sha256:4612951da876b1a10fe3960d7226f0c7682cf901e16ac06e473b267a5afa8954"},
{file = "google_pasta-0.2.0-py3-none-any.whl", hash = "sha256:b32482794a366b5366a32c92a9a9201b107821889935a02b3e51f6b432ea84ed"},
]
graphviz = [
{file = "graphviz-0.20.1-py3-none-any.whl", hash = "sha256:587c58a223b51611c0cf461132da386edd896a029524ca61a1462b880bf97977"},
{file = "graphviz-0.20.1.zip", hash = "sha256:8c58f14adaa3b947daf26c19bc1e98c4e0702cdc31cf99153e6f06904d492bf8"},
]
grpcio = [
{file = "grpcio-1.48.0-cp310-cp310-linux_armv7l.whl", hash = "sha256:4a049a032144641ed5d073535c0dc69eb6029187cc729a66946c86dcc8eec3a1"},
{file = "grpcio-1.48.0-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:f8bc76f5cd95f5476e5285fe5d3704a9332586a569fbbccef551b0b6f7a270f9"},
{file = "grpcio-1.48.0-cp310-cp310-manylinux_2_17_aarch64.whl", hash = "sha256:448d397fe88e9fef8170f019b86abdc4d554ae311aaf4dbff1532fde227d3308"},
{file = "grpcio-1.48.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8f9b6b6f7c83869d2316c5d13f953381881a16741275a34ec5ed5762f11b206e"},
{file = "grpcio-1.48.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5bd8541c4b6b43c9024496d30b4a12346325d3a17a1f3c80ad8924caed1e35c3"},
{file = "grpcio-1.48.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:877d33aeba05ae0b9e81761a694914ed33613f655c35f6bbcf4ebbcb984e0167"},
{file = "grpcio-1.48.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:cd01a8201fd8ab2ce496f7e65975da1f1e629eac8eea84ead0fd77e32e4350cd"},
{file = "grpcio-1.48.0-cp310-cp310-win32.whl", hash = "sha256:0388da923dff58ba7f711233e41c2b749b5817b8e0f137a107672d9c15a1009c"},
{file = "grpcio-1.48.0-cp310-cp310-win_amd64.whl", hash = "sha256:8dcffdb8921fd88857ae350fd579277a5f9315351e89ed9094ef28927a46d40d"},
{file = "grpcio-1.48.0-cp36-cp36m-linux_armv7l.whl", hash = "sha256:2138c50331232f56178c2b36dcfa6ad67aad705fe410955f3b2a53d722191b89"},
{file = "grpcio-1.48.0-cp36-cp36m-macosx_10_10_x86_64.whl", hash = "sha256:af2d80f142da2a6af45204a5ca2374e2747af07a99de54a1164111e169a761ff"},
{file = "grpcio-1.48.0-cp36-cp36m-manylinux_2_17_aarch64.whl", hash = "sha256:59284bd4cdf47c147c26d91aca693765318d524328f6ece2a1a0b85a12a362af"},
{file = "grpcio-1.48.0-cp36-cp36m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cc3ebfe356c0c6750379cd194bf2b7e5d1d2f29db1832358f05a73e9290db98c"},
{file = "grpcio-1.48.0-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dc2619a31339e1c53731f54761f1a2cb865d3421f690e00ef3e92f90d2a0c5ae"},
{file = "grpcio-1.48.0-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:7df637405de328a54c1c8c08a3206f974c7a577730f90644af4c3400b7bfde2d"},
{file = "grpcio-1.48.0-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:9e73b95969a579798bfbeb85d376695cce5172357fb52e450467ceb8e7365152"},
{file = "grpcio-1.48.0-cp36-cp36m-win32.whl", hash = "sha256:059e9d58b5aba7fb9eabe3a4d2ac49e1dcbc2b54b0f166f6475e40b7f4435343"},
{file = "grpcio-1.48.0-cp36-cp36m-win_amd64.whl", hash = "sha256:7cebcf645170f0c82ef71769544f9ac4515993a4d367f5900aba2eb4ecd2a32f"},
{file = "grpcio-1.48.0-cp37-cp37m-linux_armv7l.whl", hash = "sha256:8af3a8845df35b838104d6fb1ae7f4969d248cf037fa2794916d31e917346f72"},
{file = "grpcio-1.48.0-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:a1ef40975ec9ced6c17ce7fbec9825823da782fa606f0b92392646ff3886f198"},
{file = "grpcio-1.48.0-cp37-cp37m-manylinux_2_17_aarch64.whl", hash = "sha256:7cccbf6db31f2a78e1909047ff69620f94a4e6e53251858e9502fbbff5714b48"},
{file = "grpcio-1.48.0-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1f3f142579f58def64c0850f0bb0eb1b425ae885f5669dda5b73ade64ad2b753"},
{file = "grpcio-1.48.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:656c6f6f7b815bca3054780b8cdfa1e4e37cd36c887a48558d00c2cf85f31697"},
{file = "grpcio-1.48.0-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:cba4538e8a2ef123ea570e7b1d62162e158963c2471e35d79eb9690c971a10c0"},
{file = "grpcio-1.48.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:9daa67820fafceec6194ed1686c1783816e62d6756ff301ba93e682948836846"},
{file = "grpcio-1.48.0-cp37-cp37m-win32.whl", hash = "sha256:7ec264a7fb413e0c804a7a48a6f7d7212742955a60724c44d793da35a8f30873"},
{file = "grpcio-1.48.0-cp37-cp37m-win_amd64.whl", hash = "sha256:a2b1b33b92359388b8164807313dcbb3317101b038a5d54342982560329d958f"},
{file = "grpcio-1.48.0-cp38-cp38-linux_armv7l.whl", hash = "sha256:7b820696a5ce7b98f459f234698cb323f89b355373789188efa126d7f47a2a92"},
{file = "grpcio-1.48.0-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:e4dfae66ebc165c46c5b7048eb554472ee72fbaab2c2c2da7f9b1621c81e077c"},
{file = "grpcio-1.48.0-cp38-cp38-manylinux_2_17_aarch64.whl", hash = "sha256:f7115038edce33b494e0138b0bd31a2eb6595d45e2eed23be46bc32886feb741"},
{file = "grpcio-1.48.0-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b4e996282238943ca114628255be61980e38b25f73a08ae2ffd02b63eaf70d3a"},
{file = "grpcio-1.48.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:13dad31f5155fa555d393511cc8108c41b1b5b54dc4c24c27d4694ddd7a78fad"},
{file = "grpcio-1.48.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:c84b9d90b2641963de98b35bb7a2a51f78119fe5bd00ef27246ba9f4f0835e36"},
{file = "grpcio-1.48.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:41b65166779d7dafac4c98380ac19f690f1c5fe18083a71d370df87b24dd30ff"},
{file = "grpcio-1.48.0-cp38-cp38-win32.whl", hash = "sha256:b890e5f5fbc21cb994894f73ecb2faaa66697d8debcb228a5adb0622b9bec3b2"},
{file = "grpcio-1.48.0-cp38-cp38-win_amd64.whl", hash = "sha256:5fe3af539d2f50891ed93aed3064ffbcc38bf848aa3f7ed1fbedcce139c57302"},
{file = "grpcio-1.48.0-cp39-cp39-linux_armv7l.whl", hash = "sha256:a4ed57f4e3d91259551e6765782b22d9e8b8178fec43ebf8e1b2c392c4ced37b"},
{file = "grpcio-1.48.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:60843d8184e171886dd7a93d6672e2ef0b08dfd4f88da7421c10b46b6e031ac4"},
{file = "grpcio-1.48.0-cp39-cp39-manylinux_2_17_aarch64.whl", hash = "sha256:0ecba22f25ccde2442be7e7dd7fa746905d628f03312b4a0c9961f0d99771f53"},
{file = "grpcio-1.48.0-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:34f5917f0c49a04633dc12d483c8aee6f6d9f69133b700214d3703f72a72f501"},
{file = "grpcio-1.48.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f4c4ad8ad7e2cf3a272cbc96734d56635e6543939022f17e0c4487f7d2a45bf9"},
{file = "grpcio-1.48.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:111fb2f5f4a069f331ae23106145fd16dd4e1112ca223858a922068614dac6d2"},
{file = "grpcio-1.48.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:beb0573daa49889efcfea0a6e995b4f39d481aa1b94e1257617406ef417b56a6"},
{file = "grpcio-1.48.0-cp39-cp39-win32.whl", hash = "sha256:ce70254a082cb767217b2fdee374cc79199d338d46140753438cd6d67c609b2f"},
{file = "grpcio-1.48.0-cp39-cp39-win_amd64.whl", hash = "sha256:ae3fd135666448058fe277d93c10e0f18345fbcbb015c4642de2fa3db6f0c205"},
{file = "grpcio-1.48.0.tar.gz", hash = "sha256:eaf4bb73819863440727195411ab3b5c304f6663625e66f348e91ebe0a039306"},
]
h5py = [
{file = "h5py-3.7.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d77af42cb751ad6cc44f11bae73075a07429a5cf2094dfde2b1e716e059b3911"},
{file = "h5py-3.7.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:63beb8b7b47d0896c50de6efb9a1eaa81dbe211f3767e7dd7db159cea51ba37a"},
{file = "h5py-3.7.0-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:04e2e1e2fc51b8873e972a08d2f89625ef999b1f2d276199011af57bb9fc7851"},
{file = "h5py-3.7.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f73307c876af49aa869ec5df1818e9bb0bdcfcf8a5ba773cc45a4fba5a286a5c"},
{file = "h5py-3.7.0-cp310-cp310-win_amd64.whl", hash = "sha256:f514b24cacdd983e61f8d371edac8c1b780c279d0acb8485639e97339c866073"},
{file = "h5py-3.7.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:43fed4d13743cf02798a9a03a360a88e589d81285e72b83f47d37bb64ed44881"},
{file = "h5py-3.7.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:c038399ce09a58ff8d89ec3e62f00aa7cb82d14f34e24735b920e2a811a3a426"},
{file = "h5py-3.7.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:03d64fb86bb86b978928bad923b64419a23e836499ec6363e305ad28afd9d287"},
{file = "h5py-3.7.0-cp37-cp37m-win_amd64.whl", hash = "sha256:e5b7820b75f9519499d76cc708e27242ccfdd9dfb511d6deb98701961d0445aa"},
{file = "h5py-3.7.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:a9351d729ea754db36d175098361b920573fdad334125f86ac1dd3a083355e20"},
{file = "h5py-3.7.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:6776d896fb90c5938de8acb925e057e2f9f28755f67ec3edcbc8344832616c38"},
{file = "h5py-3.7.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:0a047fddbe6951bce40e9cde63373c838a978c5e05a011a682db9ba6334b8e85"},
{file = "h5py-3.7.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0798a9c0ff45f17d0192e4d7114d734cac9f8b2b2c76dd1d923c4d0923f27bb6"},
{file = "h5py-3.7.0-cp38-cp38-win_amd64.whl", hash = "sha256:0d8de8cb619fc597da7cf8cdcbf3b7ff8c5f6db836568afc7dc16d21f59b2b49"},
{file = "h5py-3.7.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f084bbe816907dfe59006756f8f2d16d352faff2d107f4ffeb1d8de126fc5dc7"},
{file = "h5py-3.7.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:1fcb11a2dc8eb7ddcae08afd8fae02ba10467753a857fa07a404d700a93f3d53"},
{file = "h5py-3.7.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:ed43e2cc4f511756fd664fb45d6b66c3cbed4e3bd0f70e29c37809b2ae013c44"},
{file = "h5py-3.7.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9e7535df5ee3dc3e5d1f408fdfc0b33b46bc9b34db82743c82cd674d8239b9ad"},
{file = "h5py-3.7.0-cp39-cp39-win_amd64.whl", hash = "sha256:9e2ad2aa000f5b1e73b5dfe22f358ca46bf1a2b6ca394d9659874d7fc251731a"},
{file = "h5py-3.7.0.tar.gz", hash = "sha256:3fcf37884383c5da64846ab510190720027dca0768def34dd8dcb659dbe5cbf3"},
]
idna = [
{file = "idna-3.3-py3-none-any.whl", hash = "sha256:84d9dd047ffa80596e0f246e2eab0b391788b0503584e8945f2368256d2735ff"},
{file = "idna-3.3.tar.gz", hash = "sha256:9d643ff0a55b762d5cdb124b8eaa99c66322e2157b69160bc32796e824360e6d"},
]
imagesize = [
{file = "imagesize-1.4.1-py2.py3-none-any.whl", hash = "sha256:0d8d18d08f840c19d0ee7ca1fd82490fdc3729b7ac93f49870406ddde8ef8d8b"},
{file = "imagesize-1.4.1.tar.gz", hash = "sha256:69150444affb9cb0d5cc5a92b3676f0b2fb7cd9ae39e947a5e11a36b4497cd4a"},
]
importlib-metadata = [
{file = "importlib_metadata-4.12.0-py3-none-any.whl", hash = "sha256:7401a975809ea1fdc658c3aa4f78cc2195a0e019c5cbc4c06122884e9ae80c23"},
{file = "importlib_metadata-4.12.0.tar.gz", hash = "sha256:637245b8bab2b6502fcbc752cc4b7a6f6243bb02b31c5c26156ad103d3d45670"},
]
importlib-resources = [
{file = "importlib_resources-5.9.0-py3-none-any.whl", hash = "sha256:f78a8df21a79bcc30cfd400bdc38f314333de7c0fb619763f6b9dabab8268bb7"},
{file = "importlib_resources-5.9.0.tar.gz", hash = "sha256:5481e97fb45af8dcf2f798952625591c58fe599d0735d86b10f54de086a61681"},
]
iniconfig = [
{file = "iniconfig-1.1.1-py2.py3-none-any.whl", hash = "sha256:011e24c64b7f47f6ebd835bb12a743f2fbe9a26d4cecaa7f53bc4f35ee9da8b3"},
{file = "iniconfig-1.1.1.tar.gz", hash = "sha256:bc3af051d7d14b2ee5ef9969666def0cd1a000e121eaea580d4a313df4b37f32"},
]
ipykernel = [
{file = "ipykernel-6.15.1-py3-none-any.whl", hash = "sha256:d8969c5b23b0e453a23166da5a669c954db399789293fcb03fec5cb25367e43c"},
{file = "ipykernel-6.15.1.tar.gz", hash = "sha256:37acc3254caa8a0dafcddddc8dc863a60ad1b46487b68aee361d9a15bda98112"},
]
ipython = [
{file = "ipython-8.4.0-py3-none-any.whl", hash = "sha256:7ca74052a38fa25fe9bedf52da0be7d3fdd2fb027c3b778ea78dfe8c212937d1"},
{file = "ipython-8.4.0.tar.gz", hash = "sha256:f2db3a10254241d9b447232cec8b424847f338d9d36f9a577a6192c332a46abd"},
]
ipython-genutils = [
{file = "ipython_genutils-0.2.0-py2.py3-none-any.whl", hash = "sha256:72dd37233799e619666c9f639a9da83c34013a73e8bbc79a7a6348d93c61fab8"},
{file = "ipython_genutils-0.2.0.tar.gz", hash = "sha256:eb2e116e75ecef9d4d228fdc66af54269afa26ab4463042e33785b887c628ba8"},
]
ipywidgets = [
{file = "ipywidgets-7.7.1-py2.py3-none-any.whl", hash = "sha256:aa1076ab7102b2486ae2607c43c243200a07c17d6093676c419d4b6762489a50"},
{file = "ipywidgets-7.7.1.tar.gz", hash = "sha256:5f2fa1b7afae1af32c88088c9828ad978de93ddda393d7ed414e553fee93dcab"},
]
isort = [
{file = "isort-5.10.1-py3-none-any.whl", hash = "sha256:6f62d78e2f89b4500b080fe3a81690850cd254227f27f75c3a0c491a1f351ba7"},
{file = "isort-5.10.1.tar.gz", hash = "sha256:e8443a5e7a020e9d7f97f1d7d9cd17c88bcb3bc7e218bf9cf5095fe550be2951"},
]
jedi = [
{file = "jedi-0.18.1-py2.py3-none-any.whl", hash = "sha256:637c9635fcf47945ceb91cd7f320234a7be540ded6f3e99a50cb6febdfd1ba8d"},
{file = "jedi-0.18.1.tar.gz", hash = "sha256:74137626a64a99c8eb6ae5832d99b3bdd7d29a3850fe2aa80a4126b2a7d949ab"},
]
jeepney = [
{file = "jeepney-0.8.0-py3-none-any.whl", hash = "sha256:c0a454ad016ca575060802ee4d590dd912e35c122fa04e70306de3d076cce755"},
{file = "jeepney-0.8.0.tar.gz", hash = "sha256:5efe48d255973902f6badc3ce55e2aa6c5c3b3bc642059ef3a91247bcfcc5806"},
]
jinja2 = [
{file = "Jinja2-3.1.2-py3-none-any.whl", hash = "sha256:6088930bfe239f0e6710546ab9c19c9ef35e29792895fed6e6e31a023a182a61"},
{file = "Jinja2-3.1.2.tar.gz", hash = "sha256:31351a702a408a9e7595a8fc6150fc3f43bb6bf7e319770cbc0db9df9437e852"},
]
joblib = [
{file = "joblib-1.1.0-py2.py3-none-any.whl", hash = "sha256:f21f109b3c7ff9d95f8387f752d0d9c34a02aa2f7060c2135f465da0e5160ff6"},
{file = "joblib-1.1.0.tar.gz", hash = "sha256:4158fcecd13733f8be669be0683b96ebdbbd38d23559f54dca7205aea1bf1e35"},
]
jsonschema = [
{file = "jsonschema-4.9.1-py3-none-any.whl", hash = "sha256:8ebad55894c002585271af2d327d99339ef566fb085d9129b69e2623867c4106"},
{file = "jsonschema-4.9.1.tar.gz", hash = "sha256:408c4c8ed0dede3b268f7a441784f74206380b04f93eb2d537c7befb3df3099f"},
]
jupyter = [
{file = "jupyter-1.0.0-py2.py3-none-any.whl", hash = "sha256:5b290f93b98ffbc21c0c7e749f054b3267782166d72fa5e3ed1ed4eaf34a2b78"},
{file = "jupyter-1.0.0.tar.gz", hash = "sha256:d9dc4b3318f310e34c82951ea5d6683f67bed7def4b259fafbfe4f1beb1d8e5f"},
{file = "jupyter-1.0.0.zip", hash = "sha256:3e1f86076bbb7c8c207829390305a2b1fe836d471ed54be66a3b8c41e7f46cc7"},
]
jupyter-client = [
{file = "jupyter_client-7.3.4-py3-none-any.whl", hash = "sha256:17d74b0d0a7b24f1c8c527b24fcf4607c56bee542ffe8e3418e50b21e514b621"},
{file = "jupyter_client-7.3.4.tar.gz", hash = "sha256:aa9a6c32054b290374f95f73bb0cae91455c58dfb84f65c8591912b8f65e6d56"},
]
jupyter-console = [
{file = "jupyter_console-6.4.4-py3-none-any.whl", hash = "sha256:756df7f4f60c986e7bc0172e4493d3830a7e6e75c08750bbe59c0a5403ad6dee"},
{file = "jupyter_console-6.4.4.tar.gz", hash = "sha256:172f5335e31d600df61613a97b7f0352f2c8250bbd1092ef2d658f77249f89fb"},
]
jupyter-core = [
{file = "jupyter_core-4.11.1-py3-none-any.whl", hash = "sha256:715e22bb6cc7db3718fddfac1f69f1c7e899ca00e42bdfd4bf3705452b9fd84a"},
{file = "jupyter_core-4.11.1.tar.gz", hash = "sha256:2e5f244d44894c4154d06aeae3419dd7f1b0ef4494dc5584929b398c61cfd314"},
]
jupyterlab-pygments = [
{file = "jupyterlab_pygments-0.2.2-py2.py3-none-any.whl", hash = "sha256:2405800db07c9f770863bcf8049a529c3dd4d3e28536638bd7c1c01d2748309f"},
{file = "jupyterlab_pygments-0.2.2.tar.gz", hash = "sha256:7405d7fde60819d905a9fa8ce89e4cd830e318cdad22a0030f7a901da705585d"},
]
jupyterlab-widgets = [
{file = "jupyterlab_widgets-1.1.1-py3-none-any.whl", hash = "sha256:90ab47d99da03a3697074acb23b2975ead1d6171aa41cb2812041a7f2a08177a"},
{file = "jupyterlab_widgets-1.1.1.tar.gz", hash = "sha256:67d0ef1e407e0c42c8ab60b9d901cd7a4c68923650763f75bf17fb06c1943b79"},
]
keras = [
{file = "keras-2.9.0-py2.py3-none-any.whl", hash = "sha256:55911256f89cfc9343c9fbe4b61ec45a2d33d89729cbe1ab9dcacf8b07b8b6ab"},
]
keras-preprocessing = [
{file = "Keras_Preprocessing-1.1.2-py2.py3-none-any.whl", hash = "sha256:7b82029b130ff61cc99b55f3bd27427df4838576838c5b2f65940e4fcec99a7b"},
{file = "Keras_Preprocessing-1.1.2.tar.gz", hash = "sha256:add82567c50c8bc648c14195bf544a5ce7c1f76761536956c3d2978970179ef3"},
]
keyring = [
{file = "keyring-23.8.2-py3-none-any.whl", hash = "sha256:10d2a8639663fe2090705a00b8c47c687cacdf97598ea9c11456679fa974473a"},
{file = "keyring-23.8.2.tar.gz", hash = "sha256:0d9973f8891850f1ade5f26aafd06bb16865fbbae3fc56b0defb6a14a2624003"},
]
kiwisolver = [
{file = "kiwisolver-1.4.4-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:2f5e60fabb7343a836360c4f0919b8cd0d6dbf08ad2ca6b9cf90bf0c76a3c4f6"},
{file = "kiwisolver-1.4.4-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:10ee06759482c78bdb864f4109886dff7b8a56529bc1609d4f1112b93fe6423c"},
{file = "kiwisolver-1.4.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c79ebe8f3676a4c6630fd3f777f3cfecf9289666c84e775a67d1d358578dc2e3"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:abbe9fa13da955feb8202e215c4018f4bb57469b1b78c7a4c5c7b93001699938"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:7577c1987baa3adc4b3c62c33bd1118c3ef5c8ddef36f0f2c950ae0b199e100d"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f8ad8285b01b0d4695102546b342b493b3ccc6781fc28c8c6a1bb63e95d22f09"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:8ed58b8acf29798b036d347791141767ccf65eee7f26bde03a71c944449e53de"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a68b62a02953b9841730db7797422f983935aeefceb1679f0fc85cbfbd311c32"},
{file = "kiwisolver-1.4.4-cp310-cp310-win32.whl", hash = "sha256:e92a513161077b53447160b9bd8f522edfbed4bd9759e4c18ab05d7ef7e49408"},
{file = "kiwisolver-1.4.4-cp310-cp310-win_amd64.whl", hash = "sha256:3fe20f63c9ecee44560d0e7f116b3a747a5d7203376abeea292ab3152334d004"},
{file = "kiwisolver-1.4.4-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:62ac9cc684da4cf1778d07a89bf5f81b35834cb96ca523d3a7fb32509380cbf6"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:41dae968a94b1ef1897cb322b39360a0812661dba7c682aa45098eb8e193dbdf"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:02f79693ec433cb4b5f51694e8477ae83b3205768a6fb48ffba60549080e295b"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d0611a0a2a518464c05ddd5a3a1a0e856ccc10e67079bb17f265ad19ab3c7597"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:db5283d90da4174865d520e7366801a93777201e91e79bacbac6e6927cbceede"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:1041feb4cda8708ce73bb4dcb9ce1ccf49d553bf87c3954bdfa46f0c3f77252c"},
{file = "kiwisolver-1.4.4-cp37-cp37m-win32.whl", hash = "sha256:a553dadda40fef6bfa1456dc4be49b113aa92c2a9a9e8711e955618cd69622e3"},
{file = "kiwisolver-1.4.4-cp37-cp37m-win_amd64.whl", hash = "sha256:03baab2d6b4a54ddbb43bba1a3a2d1627e82d205c5cf8f4c924dc49284b87166"},
{file = "kiwisolver-1.4.4-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:841293b17ad704d70c578f1f0013c890e219952169ce8a24ebc063eecf775454"},
{file = "kiwisolver-1.4.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:f4f270de01dd3e129a72efad823da90cc4d6aafb64c410c9033aba70db9f1ff0"},
{file = "kiwisolver-1.4.4-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:f9f39e2f049db33a908319cf46624a569b36983c7c78318e9726a4cb8923b26c"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c97528e64cb9ebeff9701e7938653a9951922f2a38bd847787d4a8e498cc83ae"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1d1573129aa0fd901076e2bfb4275a35f5b7aa60fbfb984499d661ec950320b0"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ad881edc7ccb9d65b0224f4e4d05a1e85cf62d73aab798943df6d48ab0cd79a1"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b428ef021242344340460fa4c9185d0b1f66fbdbfecc6c63eff4b7c29fad429d"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:2e407cb4bd5a13984a6c2c0fe1845e4e41e96f183e5e5cd4d77a857d9693494c"},
{file = "kiwisolver-1.4.4-cp38-cp38-win32.whl", hash = "sha256:75facbe9606748f43428fc91a43edb46c7ff68889b91fa31f53b58894503a191"},
{file = "kiwisolver-1.4.4-cp38-cp38-win_amd64.whl", hash = "sha256:5bce61af018b0cb2055e0e72e7d65290d822d3feee430b7b8203d8a855e78766"},
{file = "kiwisolver-1.4.4-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:8c808594c88a025d4e322d5bb549282c93c8e1ba71b790f539567932722d7bd8"},
{file = "kiwisolver-1.4.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f0a71d85ecdd570ded8ac3d1c0f480842f49a40beb423bb8014539a9f32a5897"},
{file = "kiwisolver-1.4.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:b533558eae785e33e8c148a8d9921692a9fe5aa516efbdff8606e7d87b9d5824"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:efda5fc8cc1c61e4f639b8067d118e742b812c930f708e6667a5ce0d13499e29"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:7c43e1e1206cd421cd92e6b3280d4385d41d7166b3ed577ac20444b6995a445f"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bc8d3bd6c72b2dd9decf16ce70e20abcb3274ba01b4e1c96031e0c4067d1e7cd"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4ea39b0ccc4f5d803e3337dd46bcce60b702be4d86fd0b3d7531ef10fd99a1ac"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:968f44fdbf6dd757d12920d63b566eeb4d5b395fd2d00d29d7ef00a00582aac9"},
{file = "kiwisolver-1.4.4-cp39-cp39-win32.whl", hash = "sha256:da7e547706e69e45d95e116e6939488d62174e033b763ab1496b4c29b76fabea"},
{file = "kiwisolver-1.4.4-cp39-cp39-win_amd64.whl", hash = "sha256:ba59c92039ec0a66103b1d5fe588fa546373587a7d68f5c96f743c3396afc04b"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:91672bacaa030f92fc2f43b620d7b337fd9a5af28b0d6ed3f77afc43c4a64b5a"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:787518a6789009c159453da4d6b683f468ef7a65bbde796bcea803ccf191058d"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:da152d8cdcab0e56e4f45eb08b9aea6455845ec83172092f09b0e077ece2cf7a"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:ecb1fa0db7bf4cff9dac752abb19505a233c7f16684c5826d1f11ebd9472b871"},
{file = "kiwisolver-1.4.4.tar.gz", hash = "sha256:d41997519fcba4a1e46eb4a2fe31bc12f0ff957b2b81bac28db24744f333e955"},
]
libclang = [
{file = "libclang-14.0.6-py2.py3-none-macosx_10_9_x86_64.whl", hash = "sha256:8791cf3c3b087c373a6d61e9199da7a541da922c9ddcfed1122090586b996d6e"},
{file = "libclang-14.0.6-py2.py3-none-macosx_11_0_arm64.whl", hash = "sha256:7b06fc76bd1e67c8b04b5719bf2ac5d6a323b289b245dfa9e468561d99538188"},
{file = "libclang-14.0.6-py2.py3-none-manylinux1_x86_64.whl", hash = "sha256:e429853939423f276a25140b0b702442d7da9a09e001c05e48df888336947614"},
{file = "libclang-14.0.6-py2.py3-none-manylinux2010_x86_64.whl", hash = "sha256:206d2789e4450a37d054e63b70451a6fc1873466397443fa13de2b3d4adb2796"},
{file = "libclang-14.0.6-py2.py3-none-manylinux2014_aarch64.whl", hash = "sha256:e2add1703129b2abe066fb1890afa880870a89fd6ab4ec5d2a7a8dc8d271677e"},
{file = "libclang-14.0.6-py2.py3-none-manylinux2014_armv7l.whl", hash = "sha256:5dd3c6fca1b007d308a4114afa8e4e9d32f32b2572520701d45fcc626ac5cd6c"},
{file = "libclang-14.0.6-py2.py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:cfb0e892ebb5dff6bd498ab5778adb8581f26a00fd8347b3c76c989fe2fd04f7"},
{file = "libclang-14.0.6-py2.py3-none-win_amd64.whl", hash = "sha256:ea03c12675151837660cdd5dce65bd89320896ac3421efef43a36678f113ce95"},
{file = "libclang-14.0.6-py2.py3-none-win_arm64.whl", hash = "sha256:2e4303e04517fcd11173cb2e51a7070eed71e16ef45d4e26a82c5e881cac3d27"},
{file = "libclang-14.0.6.tar.gz", hash = "sha256:9052a8284d8846984f6fa826b1d7460a66d3b23a486d782633b42b6e3b418789"},
]
lightgbm = [
{file = "lightgbm-3.3.2-py3-none-macosx_10_14_x86_64.macosx_10_15_x86_64.macosx_11_0_x86_64.whl", hash = "sha256:2e94bd1b3ab29d173102c9c1d80db2e27ad7e43b8ff5a74c5cb7984b37d19f45"},
{file = "lightgbm-3.3.2-py3-none-manylinux1_x86_64.whl", hash = "sha256:f4cba3b4f29336ad7e801cb32d9b948ea4cc5300dda650b78bcdfe36b3e2c4b2"},
{file = "lightgbm-3.3.2-py3-none-manylinux2014_aarch64.whl", hash = "sha256:8e788c56853316fc5d35db726d81bd002c721038c856853952287f68082e0158"},
{file = "lightgbm-3.3.2-py3-none-win_amd64.whl", hash = "sha256:e4f1529cad416066964f9af0efad208787861e9f2181b7f9ee7fc9bacc082d4f"},
{file = "lightgbm-3.3.2.tar.gz", hash = "sha256:5d25d16e77c844c297ece2044df57651139bc3c8ad8c4108916374267ac68b64"},
]
llvmlite = [
{file = "llvmlite-0.36.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:cc0f9b9644b4ab0e4a5edb17f1531d791630c88858220d3cc688d6edf10da100"},
{file = "llvmlite-0.36.0-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:f7918dbac02b1ebbfd7302ad8e8307d7877ab57d782d5f04b70ff9696b53c21b"},
{file = "llvmlite-0.36.0-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:7768658646c418b9b3beccb7044277a608bc8c62b82a85e73c7e5c065e4157c2"},
{file = "llvmlite-0.36.0-cp36-cp36m-win32.whl", hash = "sha256:05f807209a360d39526d98141b6f281b9c7c771c77a4d1fc22002440642c8de2"},
{file = "llvmlite-0.36.0-cp36-cp36m-win_amd64.whl", hash = "sha256:d1fdd63c371626c25ad834e1c6297eb76cf2f093a40dbb401a87b6476ab4e34e"},
{file = "llvmlite-0.36.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:7c4e7066447305d5095d0b0a9cae7b835d2f0fde143456b3124110eab0856426"},
{file = "llvmlite-0.36.0-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:9dad7e4bb042492914292aea3f4172eca84db731f9478250240955aedba95e08"},
{file = "llvmlite-0.36.0-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:1ce5bc0a638d874a08d4222be0a7e48e5df305d094c2ff8dec525ef32b581551"},
{file = "llvmlite-0.36.0-cp37-cp37m-win32.whl", hash = "sha256:dbedff0f6d417b374253a6bab39aa4b5364f1caab30c06ba8726904776fcf1cb"},
{file = "llvmlite-0.36.0-cp37-cp37m-win_amd64.whl", hash = "sha256:3b17fc4b0dd17bd29d7297d054e2915fad535889907c3f65232ee21f483447c5"},
{file = "llvmlite-0.36.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:b3a77e46e6053e2a86e607e87b97651dda81e619febb914824a927bff4e88737"},
{file = "llvmlite-0.36.0-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:048a7c117641c9be87b90005684e64a6f33ea0897ebab1df8a01214a10d6e79a"},
{file = "llvmlite-0.36.0-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:7db4b0eef93125af1c4092c64a3c73c7dc904101117ef53f8d78a1a499b8d5f4"},
{file = "llvmlite-0.36.0-cp38-cp38-win32.whl", hash = "sha256:50b1828bde514b31431b2bba1aa20b387f5625b81ad6e12fede430a04645e47a"},
{file = "llvmlite-0.36.0-cp38-cp38-win_amd64.whl", hash = "sha256:f608bae781b2d343e15e080c546468c5a6f35f57f0446923ea198dd21f23757e"},
{file = "llvmlite-0.36.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6a3abc8a8889aeb06bf9c4a7e5df5bc7bb1aa0aedd91a599813809abeec80b5a"},
{file = "llvmlite-0.36.0-cp39-cp39-manylinux2010_i686.whl", hash = "sha256:705f0323d931684428bb3451549603299bb5e17dd60fb979d67c3807de0debc1"},
{file = "llvmlite-0.36.0-cp39-cp39-manylinux2010_x86_64.whl", hash = "sha256:5a6548b4899facb182145147185e9166c69826fb424895f227e6b7cf924a8da1"},
{file = "llvmlite-0.36.0-cp39-cp39-win32.whl", hash = "sha256:ff52fb9c2be66b95b0e67d56fce11038397e5be1ea410ee53f5f1175fdbb107a"},
{file = "llvmlite-0.36.0-cp39-cp39-win_amd64.whl", hash = "sha256:1dee416ea49fd338c74ec15c0c013e5273b0961528169af06ff90772614f7f6c"},
{file = "llvmlite-0.36.0.tar.gz", hash = "sha256:765128fdf5f149ed0b889ffbe2b05eb1717f8e20a5c87fa2b4018fbcce0fcfc9"},
]
lxml = [
{file = "lxml-4.9.1-cp27-cp27m-macosx_10_15_x86_64.whl", hash = "sha256:98cafc618614d72b02185ac583c6f7796202062c41d2eeecdf07820bad3295ed"},
{file = "lxml-4.9.1-cp27-cp27m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:c62e8dd9754b7debda0c5ba59d34509c4688f853588d75b53c3791983faa96fc"},
{file = "lxml-4.9.1-cp27-cp27m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:21fb3d24ab430fc538a96e9fbb9b150029914805d551deeac7d7822f64631dfc"},
{file = "lxml-4.9.1-cp27-cp27m-win32.whl", hash = "sha256:86e92728ef3fc842c50a5cb1d5ba2bc66db7da08a7af53fb3da79e202d1b2cd3"},
{file = "lxml-4.9.1-cp27-cp27m-win_amd64.whl", hash = "sha256:4cfbe42c686f33944e12f45a27d25a492cc0e43e1dc1da5d6a87cbcaf2e95627"},
{file = "lxml-4.9.1-cp27-cp27mu-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:dad7b164905d3e534883281c050180afcf1e230c3d4a54e8038aa5cfcf312b84"},
{file = "lxml-4.9.1-cp27-cp27mu-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:a614e4afed58c14254e67862456d212c4dcceebab2eaa44d627c2ca04bf86837"},
{file = "lxml-4.9.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:f9ced82717c7ec65a67667bb05865ffe38af0e835cdd78728f1209c8fffe0cad"},
{file = "lxml-4.9.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:d9fc0bf3ff86c17348dfc5d322f627d78273eba545db865c3cd14b3f19e57fa5"},
{file = "lxml-4.9.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:e5f66bdf0976ec667fc4594d2812a00b07ed14d1b44259d19a41ae3fff99f2b8"},
{file = "lxml-4.9.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:fe17d10b97fdf58155f858606bddb4e037b805a60ae023c009f760d8361a4eb8"},
{file = "lxml-4.9.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:8caf4d16b31961e964c62194ea3e26a0e9561cdf72eecb1781458b67ec83423d"},
{file = "lxml-4.9.1-cp310-cp310-win32.whl", hash = "sha256:4780677767dd52b99f0af1f123bc2c22873d30b474aa0e2fc3fe5e02217687c7"},
{file = "lxml-4.9.1-cp310-cp310-win_amd64.whl", hash = "sha256:b122a188cd292c4d2fcd78d04f863b789ef43aa129b233d7c9004de08693728b"},
{file = "lxml-4.9.1-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:be9eb06489bc975c38706902cbc6888f39e946b81383abc2838d186f0e8b6a9d"},
{file = "lxml-4.9.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:f1be258c4d3dc609e654a1dc59d37b17d7fef05df912c01fc2e15eb43a9735f3"},
{file = "lxml-4.9.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:927a9dd016d6033bc12e0bf5dee1dde140235fc8d0d51099353c76081c03dc29"},
{file = "lxml-4.9.1-cp35-cp35m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:9232b09f5efee6a495a99ae6824881940d6447debe272ea400c02e3b68aad85d"},
{file = "lxml-4.9.1-cp35-cp35m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:04da965dfebb5dac2619cb90fcf93efdb35b3c6994fea58a157a834f2f94b318"},
{file = "lxml-4.9.1-cp35-cp35m-win32.whl", hash = "sha256:4d5bae0a37af799207140652a700f21a85946f107a199bcb06720b13a4f1f0b7"},
{file = "lxml-4.9.1-cp35-cp35m-win_amd64.whl", hash = "sha256:4878e667ebabe9b65e785ac8da4d48886fe81193a84bbe49f12acff8f7a383a4"},
{file = "lxml-4.9.1-cp36-cp36m-macosx_10_15_x86_64.whl", hash = "sha256:1355755b62c28950f9ce123c7a41460ed9743c699905cbe664a5bcc5c9c7c7fb"},
{file = "lxml-4.9.1-cp36-cp36m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:bcaa1c495ce623966d9fc8a187da80082334236a2a1c7e141763ffaf7a405067"},
{file = "lxml-4.9.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6eafc048ea3f1b3c136c71a86db393be36b5b3d9c87b1c25204e7d397cee9536"},
{file = "lxml-4.9.1-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:13c90064b224e10c14dcdf8086688d3f0e612db53766e7478d7754703295c7c8"},
{file = "lxml-4.9.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:206a51077773c6c5d2ce1991327cda719063a47adc02bd703c56a662cdb6c58b"},
{file = "lxml-4.9.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:e8f0c9d65da595cfe91713bc1222af9ecabd37971762cb830dea2fc3b3bb2acf"},
{file = "lxml-4.9.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:8f0a4d179c9a941eb80c3a63cdb495e539e064f8054230844dcf2fcb812b71d3"},
{file = "lxml-4.9.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:830c88747dce8a3e7525defa68afd742b4580df6aa2fdd6f0855481e3994d391"},
{file = "lxml-4.9.1-cp36-cp36m-win32.whl", hash = "sha256:1e1cf47774373777936c5aabad489fef7b1c087dcd1f426b621fda9dcc12994e"},
{file = "lxml-4.9.1-cp36-cp36m-win_amd64.whl", hash = "sha256:5974895115737a74a00b321e339b9c3f45c20275d226398ae79ac008d908bff7"},
{file = "lxml-4.9.1-cp37-cp37m-macosx_10_15_x86_64.whl", hash = "sha256:1423631e3d51008871299525b541413c9b6c6423593e89f9c4cfbe8460afc0a2"},
{file = "lxml-4.9.1-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:2aaf6a0a6465d39b5ca69688fce82d20088c1838534982996ec46633dc7ad6cc"},
{file = "lxml-4.9.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:9f36de4cd0c262dd9927886cc2305aa3f2210db437aa4fed3fb4940b8bf4592c"},
{file = "lxml-4.9.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:ae06c1e4bc60ee076292e582a7512f304abdf6c70db59b56745cca1684f875a4"},
{file = "lxml-4.9.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:57e4d637258703d14171b54203fd6822fda218c6c2658a7d30816b10995f29f3"},
{file = "lxml-4.9.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:6d279033bf614953c3fc4a0aa9ac33a21e8044ca72d4fa8b9273fe75359d5cca"},
{file = "lxml-4.9.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:a60f90bba4c37962cbf210f0188ecca87daafdf60271f4c6948606e4dabf8785"},
{file = "lxml-4.9.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:6ca2264f341dd81e41f3fffecec6e446aa2121e0b8d026fb5130e02de1402785"},
{file = "lxml-4.9.1-cp37-cp37m-win32.whl", hash = "sha256:27e590352c76156f50f538dbcebd1925317a0f70540f7dc8c97d2931c595783a"},
{file = "lxml-4.9.1-cp37-cp37m-win_amd64.whl", hash = "sha256:eea5d6443b093e1545ad0210e6cf27f920482bfcf5c77cdc8596aec73523bb7e"},
{file = "lxml-4.9.1-cp38-cp38-macosx_10_15_x86_64.whl", hash = "sha256:f05251bbc2145349b8d0b77c0d4e5f3b228418807b1ee27cefb11f69ed3d233b"},
{file = "lxml-4.9.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:487c8e61d7acc50b8be82bda8c8d21d20e133c3cbf41bd8ad7eb1aaeb3f07c97"},
{file = "lxml-4.9.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:8d1a92d8e90b286d491e5626af53afef2ba04da33e82e30744795c71880eaa21"},
{file = "lxml-4.9.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:b570da8cd0012f4af9fa76a5635cd31f707473e65a5a335b186069d5c7121ff2"},
{file = "lxml-4.9.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:5ef87fca280fb15342726bd5f980f6faf8b84a5287fcc2d4962ea8af88b35130"},
{file = "lxml-4.9.1-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:93e414e3206779ef41e5ff2448067213febf260ba747fc65389a3ddaa3fb8715"},
{file = "lxml-4.9.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:6653071f4f9bac46fbc30f3c7838b0e9063ee335908c5d61fb7a4a86c8fd2036"},
{file = "lxml-4.9.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:32a73c53783becdb7eaf75a2a1525ea8e49379fb7248c3eeefb9412123536387"},
{file = "lxml-4.9.1-cp38-cp38-win32.whl", hash = "sha256:1a7c59c6ffd6ef5db362b798f350e24ab2cfa5700d53ac6681918f314a4d3b94"},
{file = "lxml-4.9.1-cp38-cp38-win_amd64.whl", hash = "sha256:1436cf0063bba7888e43f1ba8d58824f085410ea2025befe81150aceb123e345"},
{file = "lxml-4.9.1-cp39-cp39-macosx_10_15_x86_64.whl", hash = "sha256:4beea0f31491bc086991b97517b9683e5cfb369205dac0148ef685ac12a20a67"},
{file = "lxml-4.9.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:41fb58868b816c202e8881fd0f179a4644ce6e7cbbb248ef0283a34b73ec73bb"},
{file = "lxml-4.9.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:bd34f6d1810d9354dc7e35158aa6cc33456be7706df4420819af6ed966e85448"},
{file = "lxml-4.9.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:edffbe3c510d8f4bf8640e02ca019e48a9b72357318383ca60e3330c23aaffc7"},
{file = "lxml-4.9.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:6d949f53ad4fc7cf02c44d6678e7ff05ec5f5552b235b9e136bd52e9bf730b91"},
{file = "lxml-4.9.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:079b68f197c796e42aa80b1f739f058dcee796dc725cc9a1be0cdb08fc45b000"},
{file = "lxml-4.9.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:9c3a88d20e4fe4a2a4a84bf439a5ac9c9aba400b85244c63a1ab7088f85d9d25"},
{file = "lxml-4.9.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:4e285b5f2bf321fc0857b491b5028c5f276ec0c873b985d58d7748ece1d770dd"},
{file = "lxml-4.9.1-cp39-cp39-win32.whl", hash = "sha256:ef72013e20dd5ba86a8ae1aed7f56f31d3374189aa8b433e7b12ad182c0d2dfb"},
{file = "lxml-4.9.1-cp39-cp39-win_amd64.whl", hash = "sha256:10d2017f9150248563bb579cd0d07c61c58da85c922b780060dcc9a3aa9f432d"},
{file = "lxml-4.9.1-pp37-pypy37_pp73-macosx_10_15_x86_64.whl", hash = "sha256:0538747a9d7827ce3e16a8fdd201a99e661c7dee3c96c885d8ecba3c35d1032c"},
{file = "lxml-4.9.1-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:0645e934e940107e2fdbe7c5b6fb8ec6232444260752598bc4d09511bd056c0b"},
{file = "lxml-4.9.1-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:6daa662aba22ef3258934105be2dd9afa5bb45748f4f702a3b39a5bf53a1f4dc"},
{file = "lxml-4.9.1-pp38-pypy38_pp73-macosx_10_15_x86_64.whl", hash = "sha256:603a464c2e67d8a546ddaa206d98e3246e5db05594b97db844c2f0a1af37cf5b"},
{file = "lxml-4.9.1-pp38-pypy38_pp73-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:c4b2e0559b68455c085fb0f6178e9752c4be3bba104d6e881eb5573b399d1eb2"},
{file = "lxml-4.9.1-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:0f3f0059891d3254c7b5fb935330d6db38d6519ecd238ca4fce93c234b4a0f73"},
{file = "lxml-4.9.1-pp39-pypy39_pp73-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:c852b1530083a620cb0de5f3cd6826f19862bafeaf77586f1aef326e49d95f0c"},
{file = "lxml-4.9.1-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:287605bede6bd36e930577c5925fcea17cb30453d96a7b4c63c14a257118dbb9"},
{file = "lxml-4.9.1.tar.gz", hash = "sha256:fe749b052bb7233fe5d072fcb549221a8cb1a16725c47c37e42b0b9cb3ff2c3f"},
]
markdown = [
{file = "Markdown-3.4.1-py3-none-any.whl", hash = "sha256:08fb8465cffd03d10b9dd34a5c3fea908e20391a2a90b88d66362cb05beed186"},
{file = "Markdown-3.4.1.tar.gz", hash = "sha256:3b809086bb6efad416156e00a0da66fe47618a5d6918dd688f53f40c8e4cfeff"},
]
markupsafe = [
{file = "MarkupSafe-2.1.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:86b1f75c4e7c2ac2ccdaec2b9022845dbb81880ca318bb7a0a01fbf7813e3812"},
{file = "MarkupSafe-2.1.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:f121a1420d4e173a5d96e47e9a0c0dcff965afdf1626d28de1460815f7c4ee7a"},
{file = "MarkupSafe-2.1.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a49907dd8420c5685cfa064a1335b6754b74541bbb3706c259c02ed65b644b3e"},
{file = "MarkupSafe-2.1.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:10c1bfff05d95783da83491be968e8fe789263689c02724e0c691933c52994f5"},
{file = "MarkupSafe-2.1.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b7bd98b796e2b6553da7225aeb61f447f80a1ca64f41d83612e6139ca5213aa4"},
{file = "MarkupSafe-2.1.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:b09bf97215625a311f669476f44b8b318b075847b49316d3e28c08e41a7a573f"},
{file = "MarkupSafe-2.1.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:694deca8d702d5db21ec83983ce0bb4b26a578e71fbdbd4fdcd387daa90e4d5e"},
{file = "MarkupSafe-2.1.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:efc1913fd2ca4f334418481c7e595c00aad186563bbc1ec76067848c7ca0a933"},
{file = "MarkupSafe-2.1.1-cp310-cp310-win32.whl", hash = "sha256:4a33dea2b688b3190ee12bd7cfa29d39c9ed176bda40bfa11099a3ce5d3a7ac6"},
{file = "MarkupSafe-2.1.1-cp310-cp310-win_amd64.whl", hash = "sha256:dda30ba7e87fbbb7eab1ec9f58678558fd9a6b8b853530e176eabd064da81417"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:671cd1187ed5e62818414afe79ed29da836dde67166a9fac6d435873c44fdd02"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3799351e2336dc91ea70b034983ee71cf2f9533cdff7c14c90ea126bfd95d65a"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e72591e9ecd94d7feb70c1cbd7be7b3ebea3f548870aa91e2732960fa4d57a37"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6fbf47b5d3728c6aea2abb0589b5d30459e369baa772e0f37a0320185e87c980"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:d5ee4f386140395a2c818d149221149c54849dfcfcb9f1debfe07a8b8bd63f9a"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:bcb3ed405ed3222f9904899563d6fc492ff75cce56cba05e32eff40e6acbeaa3"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:e1c0b87e09fa55a220f058d1d49d3fb8df88fbfab58558f1198e08c1e1de842a"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-win32.whl", hash = "sha256:8dc1c72a69aa7e082593c4a203dcf94ddb74bb5c8a731e4e1eb68d031e8498ff"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-win_amd64.whl", hash = "sha256:97a68e6ada378df82bc9f16b800ab77cbf4b2fada0081794318520138c088e4a"},
{file = "MarkupSafe-2.1.1-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:e8c843bbcda3a2f1e3c2ab25913c80a3c5376cd00c6e8c4a86a89a28c8dc5452"},
{file = "MarkupSafe-2.1.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:0212a68688482dc52b2d45013df70d169f542b7394fc744c02a57374a4207003"},
{file = "MarkupSafe-2.1.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8e576a51ad59e4bfaac456023a78f6b5e6e7651dcd383bcc3e18d06f9b55d6d1"},
{file = "MarkupSafe-2.1.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4b9fe39a2ccc108a4accc2676e77da025ce383c108593d65cc909add5c3bd601"},
{file = "MarkupSafe-2.1.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:96e37a3dc86e80bf81758c152fe66dbf60ed5eca3d26305edf01892257049925"},
{file = "MarkupSafe-2.1.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:6d0072fea50feec76a4c418096652f2c3238eaa014b2f94aeb1d56a66b41403f"},
{file = "MarkupSafe-2.1.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:089cf3dbf0cd6c100f02945abeb18484bd1ee57a079aefd52cffd17fba910b88"},
{file = "MarkupSafe-2.1.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:6a074d34ee7a5ce3effbc526b7083ec9731bb3cbf921bbe1d3005d4d2bdb3a63"},
{file = "MarkupSafe-2.1.1-cp38-cp38-win32.whl", hash = "sha256:421be9fbf0ffe9ffd7a378aafebbf6f4602d564d34be190fc19a193232fd12b1"},
{file = "MarkupSafe-2.1.1-cp38-cp38-win_amd64.whl", hash = "sha256:fc7b548b17d238737688817ab67deebb30e8073c95749d55538ed473130ec0c7"},
{file = "MarkupSafe-2.1.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:e04e26803c9c3851c931eac40c695602c6295b8d432cbe78609649ad9bd2da8a"},
{file = "MarkupSafe-2.1.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b87db4360013327109564f0e591bd2a3b318547bcef31b468a92ee504d07ae4f"},
{file = "MarkupSafe-2.1.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:99a2a507ed3ac881b975a2976d59f38c19386d128e7a9a18b7df6fff1fd4c1d6"},
{file = "MarkupSafe-2.1.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:56442863ed2b06d19c37f94d999035e15ee982988920e12a5b4ba29b62ad1f77"},
{file = "MarkupSafe-2.1.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3ce11ee3f23f79dbd06fb3d63e2f6af7b12db1d46932fe7bd8afa259a5996603"},
{file = "MarkupSafe-2.1.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:33b74d289bd2f5e527beadcaa3f401e0df0a89927c1559c8566c066fa4248ab7"},
{file = "MarkupSafe-2.1.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:43093fb83d8343aac0b1baa75516da6092f58f41200907ef92448ecab8825135"},
{file = "MarkupSafe-2.1.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:8e3dcf21f367459434c18e71b2a9532d96547aef8a871872a5bd69a715c15f96"},
{file = "MarkupSafe-2.1.1-cp39-cp39-win32.whl", hash = "sha256:d4306c36ca495956b6d568d276ac11fdd9c30a36f1b6eb928070dc5360b22e1c"},
{file = "MarkupSafe-2.1.1-cp39-cp39-win_amd64.whl", hash = "sha256:46d00d6cfecdde84d40e572d63735ef81423ad31184100411e6e3388d405e247"},
{file = "MarkupSafe-2.1.1.tar.gz", hash = "sha256:7f91197cc9e48f989d12e4e6fbc46495c446636dfc81b9ccf50bb0ec74b91d4b"},
]
matplotlib = [
{file = "matplotlib-3.5.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:a206a1b762b39398efea838f528b3a6d60cdb26fe9d58b48265787e29cd1d693"},
{file = "matplotlib-3.5.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:cd45a6f3e93a780185f70f05cf2a383daed13c3489233faad83e81720f7ede24"},
{file = "matplotlib-3.5.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:d62880e1f60e5a30a2a8484432bcb3a5056969dc97258d7326ad465feb7ae069"},
{file = "matplotlib-3.5.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9ab29589cef03bc88acfa3a1490359000c18186fc30374d8aa77d33cc4a51a4a"},
{file = "matplotlib-3.5.3-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2886cc009f40e2984c083687251821f305d811d38e3df8ded414265e4583f0c5"},
{file = "matplotlib-3.5.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c995f7d9568f18b5db131ab124c64e51b6820a92d10246d4f2b3f3a66698a15b"},
{file = "matplotlib-3.5.3-cp310-cp310-win32.whl", hash = "sha256:6bb93a0492d68461bd458eba878f52fdc8ac7bdb6c4acdfe43dba684787838c2"},
{file = "matplotlib-3.5.3-cp310-cp310-win_amd64.whl", hash = "sha256:2e6d184ebe291b9e8f7e78bbab7987d269c38ea3e062eace1fe7d898042ef804"},
{file = "matplotlib-3.5.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:6ea6aef5c4338e58d8d376068e28f80a24f54e69f09479d1c90b7172bad9f25b"},
{file = "matplotlib-3.5.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:839d47b8ead7ad9669aaacdbc03f29656dc21f0d41a6fea2d473d856c39c8b1c"},
{file = "matplotlib-3.5.3-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:3b4fa56159dc3c7f9250df88f653f085068bcd32dcd38e479bba58909254af7f"},
{file = "matplotlib-3.5.3-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:94ff86af56a3869a4ae26a9637a849effd7643858a1a04dd5ee50e9ab75069a7"},
{file = "matplotlib-3.5.3-cp37-cp37m-win32.whl", hash = "sha256:35a8ad4dddebd51f94c5d24bec689ec0ec66173bf614374a1244c6241c1595e0"},
{file = "matplotlib-3.5.3-cp37-cp37m-win_amd64.whl", hash = "sha256:43e9d3fa077bf0cc95ded13d331d2156f9973dce17c6f0c8b49ccd57af94dbd9"},
{file = "matplotlib-3.5.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:22227c976ad4dc8c5a5057540421f0d8708c6560744ad2ad638d48e2984e1dbc"},
{file = "matplotlib-3.5.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:bf618a825deb6205f015df6dfe6167a5d9b351203b03fab82043ae1d30f16511"},
{file = "matplotlib-3.5.3-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:9befa5954cdbc085e37d974ff6053da269474177921dd61facdad8023c4aeb51"},
{file = "matplotlib-3.5.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f3840c280ebc87a48488a46f760ea1c0c0c83fcf7abbe2e6baf99d033fd35fd8"},
{file = "matplotlib-3.5.3-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:dacddf5bfcec60e3f26ec5c0ae3d0274853a258b6c3fc5ef2f06a8eb23e042be"},
{file = "matplotlib-3.5.3-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:b428076a55fb1c084c76cb93e68006f27d247169f056412607c5c88828d08f88"},
{file = "matplotlib-3.5.3-cp38-cp38-win32.whl", hash = "sha256:874df7505ba820e0400e7091199decf3ff1fde0583652120c50cd60d5820ca9a"},
{file = "matplotlib-3.5.3-cp38-cp38-win_amd64.whl", hash = "sha256:b28de401d928890187c589036857a270a032961411934bdac4cf12dde3d43094"},
{file = "matplotlib-3.5.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:3211ba82b9f1518d346f6309df137b50c3dc4421b4ed4815d1d7eadc617f45a1"},
{file = "matplotlib-3.5.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6fe807e8a22620b4cd95cfbc795ba310dc80151d43b037257250faf0bfcd82bc"},
{file = "matplotlib-3.5.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:5c096363b206a3caf43773abebdbb5a23ea13faef71d701b21a9c27fdcef72f4"},
{file = "matplotlib-3.5.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0bcdfcb0f976e1bac6721d7d457c17be23cf7501f977b6a38f9d38a3762841f7"},
{file = "matplotlib-3.5.3-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:1e64ac9be9da6bfff0a732e62116484b93b02a0b4d4b19934fb4f8e7ad26ad6a"},
{file = "matplotlib-3.5.3-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:73dd93dc35c85dece610cca8358003bf0760d7986f70b223e2306b4ea6d1406b"},
{file = "matplotlib-3.5.3-cp39-cp39-win32.whl", hash = "sha256:879c7e5fce4939c6aa04581dfe08d57eb6102a71f2e202e3314d5fbc072fd5a0"},
{file = "matplotlib-3.5.3-cp39-cp39-win_amd64.whl", hash = "sha256:ab8d26f07fe64f6f6736d635cce7bfd7f625320490ed5bfc347f2cdb4fae0e56"},
{file = "matplotlib-3.5.3-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:99482b83ebf4eb6d5fc6813d7aacdefdd480f0d9c0b52dcf9f1cc3b2c4b3361a"},
{file = "matplotlib-3.5.3-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:f814504e459c68118bf2246a530ed953ebd18213dc20e3da524174d84ed010b2"},
{file = "matplotlib-3.5.3-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:57f1b4e69f438a99bb64d7f2c340db1b096b41ebaa515cf61ea72624279220ce"},
{file = "matplotlib-3.5.3-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:d2484b350bf3d32cae43f85dcfc89b3ed7bd2bcd781ef351f93eb6fb2cc483f9"},
{file = "matplotlib-3.5.3.tar.gz", hash = "sha256:339cac48b80ddbc8bfd05daae0a3a73414651a8596904c2a881cfd1edb65f26c"},
]
matplotlib-inline = [
{file = "matplotlib-inline-0.1.3.tar.gz", hash = "sha256:a04bfba22e0d1395479f866853ec1ee28eea1485c1d69a6faf00dc3e24ff34ee"},
{file = "matplotlib_inline-0.1.3-py3-none-any.whl", hash = "sha256:aed605ba3b72462d64d475a21a9296f400a19c4f74a31b59103d2a99ffd5aa5c"},
]
mccabe = [
{file = "mccabe-0.6.1-py2.py3-none-any.whl", hash = "sha256:ab8a6258860da4b6677da4bd2fe5dc2c659cff31b3ee4f7f5d64e79735b80d42"},
{file = "mccabe-0.6.1.tar.gz", hash = "sha256:dd8d182285a0fe56bace7f45b5e7d1a6ebcbf524e8f3bd87eb0f125271b8831f"},
]
mistune = [
{file = "mistune-0.8.4-py2.py3-none-any.whl", hash = "sha256:88a1051873018da288eee8538d476dffe1262495144b33ecb586c4ab266bb8d4"},
{file = "mistune-0.8.4.tar.gz", hash = "sha256:59a3429db53c50b5c6bcc8a07f8848cb00d7dc8bdb431a4ab41920d201d4756e"},
]
mpmath = [
{file = "mpmath-1.2.1-py3-none-any.whl", hash = "sha256:604bc21bd22d2322a177c73bdb573994ef76e62edd595d17e00aff24b0667e5c"},
{file = "mpmath-1.2.1.tar.gz", hash = "sha256:79ffb45cf9f4b101a807595bcb3e72e0396202e0b1d25d689134b48c4216a81a"},
]
mypy = [
{file = "mypy-0.971-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:f2899a3cbd394da157194f913a931edfd4be5f274a88041c9dc2d9cdcb1c315c"},
{file = "mypy-0.971-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:98e02d56ebe93981c41211c05adb630d1d26c14195d04d95e49cd97dbc046dc5"},
{file = "mypy-0.971-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:19830b7dba7d5356d3e26e2427a2ec91c994cd92d983142cbd025ebe81d69cf3"},
{file = "mypy-0.971-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:02ef476f6dcb86e6f502ae39a16b93285fef97e7f1ff22932b657d1ef1f28655"},
{file = "mypy-0.971-cp310-cp310-win_amd64.whl", hash = "sha256:25c5750ba5609a0c7550b73a33deb314ecfb559c350bb050b655505e8aed4103"},
{file = "mypy-0.971-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:d3348e7eb2eea2472db611486846742d5d52d1290576de99d59edeb7cd4a42ca"},
{file = "mypy-0.971-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:3fa7a477b9900be9b7dd4bab30a12759e5abe9586574ceb944bc29cddf8f0417"},
{file = "mypy-0.971-cp36-cp36m-win_amd64.whl", hash = "sha256:2ad53cf9c3adc43cf3bea0a7d01a2f2e86db9fe7596dfecb4496a5dda63cbb09"},
{file = "mypy-0.971-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:855048b6feb6dfe09d3353466004490b1872887150c5bb5caad7838b57328cc8"},
{file = "mypy-0.971-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:23488a14a83bca6e54402c2e6435467a4138785df93ec85aeff64c6170077fb0"},
{file = "mypy-0.971-cp37-cp37m-win_amd64.whl", hash = "sha256:4b21e5b1a70dfb972490035128f305c39bc4bc253f34e96a4adf9127cf943eb2"},
{file = "mypy-0.971-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:9796a2ba7b4b538649caa5cecd398d873f4022ed2333ffde58eaf604c4d2cb27"},
{file = "mypy-0.971-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:5a361d92635ad4ada1b1b2d3630fc2f53f2127d51cf2def9db83cba32e47c856"},
{file = "mypy-0.971-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:b793b899f7cf563b1e7044a5c97361196b938e92f0a4343a5d27966a53d2ec71"},
{file = "mypy-0.971-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:d1ea5d12c8e2d266b5fb8c7a5d2e9c0219fedfeb493b7ed60cd350322384ac27"},
{file = "mypy-0.971-cp38-cp38-win_amd64.whl", hash = "sha256:23c7ff43fff4b0df93a186581885c8512bc50fc4d4910e0f838e35d6bb6b5e58"},
{file = "mypy-0.971-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:1f7656b69974a6933e987ee8ffb951d836272d6c0f81d727f1d0e2696074d9e6"},
{file = "mypy-0.971-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:d2022bfadb7a5c2ef410d6a7c9763188afdb7f3533f22a0a32be10d571ee4bbe"},
{file = "mypy-0.971-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:ef943c72a786b0f8d90fd76e9b39ce81fb7171172daf84bf43eaf937e9f220a9"},
{file = "mypy-0.971-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:d744f72eb39f69312bc6c2abf8ff6656973120e2eb3f3ec4f758ed47e414a4bf"},
{file = "mypy-0.971-cp39-cp39-win_amd64.whl", hash = "sha256:77a514ea15d3007d33a9e2157b0ba9c267496acf12a7f2b9b9f8446337aac5b0"},
{file = "mypy-0.971-py3-none-any.whl", hash = "sha256:0d054ef16b071149917085f51f89555a576e2618d5d9dd70bd6eea6410af3ac9"},
{file = "mypy-0.971.tar.gz", hash = "sha256:40b0f21484238269ae6a57200c807d80debc6459d444c0489a102d7c6a75fa56"},
]
mypy-extensions = [
{file = "mypy_extensions-0.4.3-py2.py3-none-any.whl", hash = "sha256:090fedd75945a69ae91ce1303b5824f428daf5a028d2f6ab8a299250a846f15d"},
{file = "mypy_extensions-0.4.3.tar.gz", hash = "sha256:2d82818f5bb3e369420cb3c4060a7970edba416647068eb4c5343488a6c604a8"},
]
nbclient = [
{file = "nbclient-0.6.6-py3-none-any.whl", hash = "sha256:09bae4ea2df79fa6bc50aeb8278d8b79d2036792824337fa6eee834afae17312"},
{file = "nbclient-0.6.6.tar.gz", hash = "sha256:0df76a7961d99a681b4796c74a1f2553b9f998851acc01896dce064ad19a9027"},
]
nbconvert = [
{file = "nbconvert-6.5.3-py3-none-any.whl", hash = "sha256:2564bb5125d862949f72475de0c0348392add7ea62cc950985347bfe7bbc2034"},
{file = "nbconvert-6.5.3.tar.gz", hash = "sha256:10ed693c4cfd3c63583c87ca5c3a2f6ed874145103595f3824efcc8dfcb7522c"},
]
nbformat = [
{file = "nbformat-5.4.0-py3-none-any.whl", hash = "sha256:0d6072aaec95dddc39735c144ee8bbc6589c383fb462e4058abc855348152dad"},
{file = "nbformat-5.4.0.tar.gz", hash = "sha256:44ba5ca6acb80c5d5a500f1e5b83ede8cbe364d5a495c4c8cf60aaf1ba656501"},
]
nbsphinx = [
{file = "nbsphinx-0.8.9-py3-none-any.whl", hash = "sha256:a7d743762249ee6bac3350a91eb3717a6e1c75f239f2c2a85491f9aca5a63be1"},
{file = "nbsphinx-0.8.9.tar.gz", hash = "sha256:4ade86b2a41f8f41efd3ea99dae84c3368fe8ba3f837d50c8815ce9424c5994f"},
]
nest-asyncio = [
{file = "nest_asyncio-1.5.5-py3-none-any.whl", hash = "sha256:b98e3ec1b246135e4642eceffa5a6c23a3ab12c82ff816a92c612d68205813b2"},
{file = "nest_asyncio-1.5.5.tar.gz", hash = "sha256:e442291cd942698be619823a17a86a5759eabe1f8613084790de189fe9e16d65"},
]
networkx = [
{file = "networkx-2.8.5-py3-none-any.whl", hash = "sha256:a762f4b385692d9c3a6f2912d058d76d29a827deaedf9e63ed14d397b8030687"},
{file = "networkx-2.8.5.tar.gz", hash = "sha256:15a7b81a360791c458c55a417418ea136c13378cfdc06a2dcdc12bd2f9cf09c1"},
]
notebook = [
{file = "notebook-6.4.12-py3-none-any.whl", hash = "sha256:8c07a3bb7640e371f8a609bdbb2366a1976c6a2589da8ef917f761a61e3ad8b1"},
{file = "notebook-6.4.12.tar.gz", hash = "sha256:6268c9ec9048cff7a45405c990c29ac9ca40b0bc3ec29263d218c5e01f2b4e86"},
]
numba = [
{file = "numba-0.53.1-cp36-cp36m-macosx_10_14_x86_64.whl", hash = "sha256:b23de6b6837c132087d06b8b92d343edb54b885873b824a037967fbd5272ebb7"},
{file = "numba-0.53.1-cp36-cp36m-manylinux2014_i686.whl", hash = "sha256:6545b9e9b0c112b81de7f88a3c787469a357eeff8211e90b8f45ee243d521cc2"},
{file = "numba-0.53.1-cp36-cp36m-manylinux2014_x86_64.whl", hash = "sha256:8fa5c963a43855050a868106a87cd614f3c3f459951c8fc468aec263ef80d063"},
{file = "numba-0.53.1-cp36-cp36m-win32.whl", hash = "sha256:aaa6ebf56afb0b6752607b9f3bf39e99b0efe3c1fa6849698373925ee6838fd7"},
{file = "numba-0.53.1-cp36-cp36m-win_amd64.whl", hash = "sha256:b08b3df38aab769df79ed948d70f0a54a3cdda49d58af65369235c204ec5d0f3"},
{file = "numba-0.53.1-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:bf5c463b62d013e3f709cc8277adf2f4f4d8cc6757293e29c6db121b77e6b760"},
{file = "numba-0.53.1-cp37-cp37m-manylinux2014_i686.whl", hash = "sha256:74df02e73155f669e60dcff07c4eef4a03dbf5b388594db74142ab40914fe4f5"},
{file = "numba-0.53.1-cp37-cp37m-manylinux2014_x86_64.whl", hash = "sha256:5165709bf62f28667e10b9afe6df0ce1037722adab92d620f59cb8bbb8104641"},
{file = "numba-0.53.1-cp37-cp37m-win32.whl", hash = "sha256:2e96958ed2ca7e6d967b2ce29c8da0ca47117e1de28e7c30b2c8c57386506fa5"},
{file = "numba-0.53.1-cp37-cp37m-win_amd64.whl", hash = "sha256:276f9d1674fe08d95872d81b97267c6b39dd830f05eb992608cbede50fcf48a9"},
{file = "numba-0.53.1-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:4c4c8d102512ae472af52c76ad9522da718c392cb59f4cd6785d711fa5051a2a"},
{file = "numba-0.53.1-cp38-cp38-manylinux2014_i686.whl", hash = "sha256:691adbeac17dbdf6ed7c759e9e33a522351f07d2065fe926b264b6b2c15fd89b"},
{file = "numba-0.53.1-cp38-cp38-manylinux2014_x86_64.whl", hash = "sha256:94aab3e0e9e8754116325ce026e1b29ae72443c706a3104cf7f3368dc3012912"},
{file = "numba-0.53.1-cp38-cp38-win32.whl", hash = "sha256:aabeec89bb3e3162136eea492cea7ee8882ddcda2201f05caecdece192c40896"},
{file = "numba-0.53.1-cp38-cp38-win_amd64.whl", hash = "sha256:1895ebd256819ff22256cd6fe24aa8f7470b18acc73e7917e8e93c9ac7f565dc"},
{file = "numba-0.53.1-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:224d197a46a9e602a16780d87636e199e2cdef528caef084a4d8fd8909c2455c"},
{file = "numba-0.53.1-cp39-cp39-manylinux2014_i686.whl", hash = "sha256:aba7acb247a09d7f12bd17a8e28bbb04e8adef9fc20ca29835d03b7894e1b49f"},
{file = "numba-0.53.1-cp39-cp39-manylinux2014_x86_64.whl", hash = "sha256:bd126f1f49da6fc4b3169cf1d96f1c3b3f84a7badd11fe22da344b923a00e744"},
{file = "numba-0.53.1-cp39-cp39-win32.whl", hash = "sha256:0ef9d1f347b251282ae46e5a5033600aa2d0dfa1ee8c16cb8137b8cd6f79e221"},
{file = "numba-0.53.1-cp39-cp39-win_amd64.whl", hash = "sha256:17146885cbe4e89c9d4abd4fcb8886dee06d4591943dc4343500c36ce2fcfa69"},
{file = "numba-0.53.1.tar.gz", hash = "sha256:9cd4e5216acdc66c4e9dab2dfd22ddb5bef151185c070d4a3cd8e78638aff5b0"},
]
numpy = [
{file = "numpy-1.23.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:b15c3f1ed08df4980e02cc79ee058b788a3d0bef2fb3c9ca90bb8cbd5b8a3a04"},
{file = "numpy-1.23.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:9ce242162015b7e88092dccd0e854548c0926b75c7924a3495e02c6067aba1f5"},
{file = "numpy-1.23.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e0d7447679ae9a7124385ccf0ea990bb85bb869cef217e2ea6c844b6a6855073"},
{file = "numpy-1.23.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3119daed207e9410eaf57dcf9591fdc68045f60483d94956bee0bfdcba790953"},
{file = "numpy-1.23.1-cp310-cp310-win32.whl", hash = "sha256:3ab67966c8d45d55a2bdf40701536af6443763907086c0a6d1232688e27e5447"},
{file = "numpy-1.23.1-cp310-cp310-win_amd64.whl", hash = "sha256:1865fdf51446839ca3fffaab172461f2b781163f6f395f1aed256b1ddc253622"},
{file = "numpy-1.23.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:aeba539285dcf0a1ba755945865ec61240ede5432df41d6e29fab305f4384db2"},
{file = "numpy-1.23.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:7e8229f3687cdadba2c4faef39204feb51ef7c1a9b669247d49a24f3e2e1617c"},
{file = "numpy-1.23.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:68b69f52e6545af010b76516f5daaef6173e73353e3295c5cb9f96c35d755641"},
{file = "numpy-1.23.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1408c3527a74a0209c781ac82bde2182b0f0bf54dea6e6a363fe0cc4488a7ce7"},
{file = "numpy-1.23.1-cp38-cp38-win32.whl", hash = "sha256:47f10ab202fe4d8495ff484b5561c65dd59177949ca07975663f4494f7269e3e"},
{file = "numpy-1.23.1-cp38-cp38-win_amd64.whl", hash = "sha256:37e5ebebb0eb54c5b4a9b04e6f3018e16b8ef257d26c8945925ba8105008e645"},
{file = "numpy-1.23.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:173f28921b15d341afadf6c3898a34f20a0569e4ad5435297ba262ee8941e77b"},
{file = "numpy-1.23.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:876f60de09734fbcb4e27a97c9a286b51284df1326b1ac5f1bf0ad3678236b22"},
{file = "numpy-1.23.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:35590b9c33c0f1c9732b3231bb6a72d1e4f77872390c47d50a615686ae7ed3fd"},
{file = "numpy-1.23.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a35c4e64dfca659fe4d0f1421fc0f05b8ed1ca8c46fb73d9e5a7f175f85696bb"},
{file = "numpy-1.23.1-cp39-cp39-win32.whl", hash = "sha256:c2f91f88230042a130ceb1b496932aa717dcbd665350beb821534c5c7e15881c"},
{file = "numpy-1.23.1-cp39-cp39-win_amd64.whl", hash = "sha256:37ece2bd095e9781a7156852e43d18044fd0d742934833335599c583618181b9"},
{file = "numpy-1.23.1-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:8002574a6b46ac3b5739a003b5233376aeac5163e5dcd43dd7ad062f3e186129"},
{file = "numpy-1.23.1-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d732d17b8a9061540a10fda5bfeabca5785700ab5469a5e9b93aca5e2d3a5fb"},
{file = "numpy-1.23.1-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:55df0f7483b822855af67e38fb3a526e787adf189383b4934305565d71c4b148"},
{file = "numpy-1.23.1.tar.gz", hash = "sha256:d748ef349bfef2e1194b59da37ed5a29c19ea8d7e6342019921ba2ba4fd8b624"},
]
oauthlib = [
{file = "oauthlib-3.2.0-py3-none-any.whl", hash = "sha256:6db33440354787f9b7f3a6dbd4febf5d0f93758354060e802f6c06cb493022fe"},
{file = "oauthlib-3.2.0.tar.gz", hash = "sha256:23a8208d75b902797ea29fd31fa80a15ed9dc2c6c16fe73f5d346f83f6fa27a2"},
]
opt-einsum = [
{file = "opt_einsum-3.3.0-py3-none-any.whl", hash = "sha256:2455e59e3947d3c275477df7f5205b30635e266fe6dc300e3d9f9646bfcea147"},
{file = "opt_einsum-3.3.0.tar.gz", hash = "sha256:59f6475f77bbc37dcf7cd748519c0ec60722e91e63ca114e68821c0c54a46549"},
]
packaging = [
{file = "packaging-21.3-py3-none-any.whl", hash = "sha256:ef103e05f519cdc783ae24ea4e2e0f508a9c99b2d4969652eed6a2e1ea5bd522"},
{file = "packaging-21.3.tar.gz", hash = "sha256:dd47c42927d89ab911e606518907cc2d3a1f38bbd026385970643f9c5b8ecfeb"},
]
pandas = [
{file = "pandas-1.4.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:d51674ed8e2551ef7773820ef5dab9322be0828629f2cbf8d1fc31a0c4fed640"},
{file = "pandas-1.4.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:16ad23db55efcc93fa878f7837267973b61ea85d244fc5ff0ccbcfa5638706c5"},
{file = "pandas-1.4.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:958a0588149190c22cdebbc0797e01972950c927a11a900fe6c2296f207b1d6f"},
{file = "pandas-1.4.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e48fbb64165cda451c06a0f9e4c7a16b534fcabd32546d531b3c240ce2844112"},
{file = "pandas-1.4.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6f803320c9da732cc79210d7e8cc5c8019aad512589c910c66529eb1b1818230"},
{file = "pandas-1.4.3-cp310-cp310-win_amd64.whl", hash = "sha256:2893e923472a5e090c2d5e8db83e8f907364ec048572084c7d10ef93546be6d1"},
{file = "pandas-1.4.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:24ea75f47bbd5574675dae21d51779a4948715416413b30614c1e8b480909f81"},
{file = "pandas-1.4.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:d5ebc990bd34f4ac3c73a2724c2dcc9ee7bf1ce6cf08e87bb25c6ad33507e318"},
{file = "pandas-1.4.3-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:d6c0106415ff1a10c326c49bc5dd9ea8b9897a6ca0c8688eb9c30ddec49535ef"},
{file = "pandas-1.4.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:78b00429161ccb0da252229bcda8010b445c4bf924e721265bec5a6e96a92e92"},
{file = "pandas-1.4.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6dfbf16b1ea4f4d0ee11084d9c026340514d1d30270eaa82a9f1297b6c8ecbf0"},
{file = "pandas-1.4.3-cp38-cp38-win32.whl", hash = "sha256:48350592665ea3cbcd07efc8c12ff12d89be09cd47231c7925e3b8afada9d50d"},
{file = "pandas-1.4.3-cp38-cp38-win_amd64.whl", hash = "sha256:605d572126eb4ab2eadf5c59d5d69f0608df2bf7bcad5c5880a47a20a0699e3e"},
{file = "pandas-1.4.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:a3924692160e3d847e18702bb048dc38e0e13411d2b503fecb1adf0fcf950ba4"},
{file = "pandas-1.4.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:07238a58d7cbc8a004855ade7b75bbd22c0db4b0ffccc721556bab8a095515f6"},
{file = "pandas-1.4.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:755679c49460bd0d2f837ab99f0a26948e68fa0718b7e42afbabd074d945bf84"},
{file = "pandas-1.4.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:41fc406e374590a3d492325b889a2686b31e7a7780bec83db2512988550dadbf"},
{file = "pandas-1.4.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1d9382f72a4f0e93909feece6fef5500e838ce1c355a581b3d8f259839f2ea76"},
{file = "pandas-1.4.3-cp39-cp39-win32.whl", hash = "sha256:0daf876dba6c622154b2e6741f29e87161f844e64f84801554f879d27ba63c0d"},
{file = "pandas-1.4.3-cp39-cp39-win_amd64.whl", hash = "sha256:721a3dd2f06ef942f83a819c0f3f6a648b2830b191a72bbe9451bcd49c3bd42e"},
{file = "pandas-1.4.3.tar.gz", hash = "sha256:2ff7788468e75917574f080cd4681b27e1a7bf36461fe968b49a87b5a54d007c"},
]
pandoc = [
{file = "pandoc-2.2.tar.gz", hash = "sha256:d063ee252f2761010f16cf3a14912ad9245187c24c56f4b1ad9696e104fe6e1e"},
]
pandocfilters = [
{file = "pandocfilters-1.5.0-py2.py3-none-any.whl", hash = "sha256:33aae3f25fd1a026079f5d27bdd52496f0e0803b3469282162bafdcbdf6ef14f"},
{file = "pandocfilters-1.5.0.tar.gz", hash = "sha256:0b679503337d233b4339a817bfc8c50064e2eff681314376a47cb582305a7a38"},
]
parso = [
{file = "parso-0.8.3-py2.py3-none-any.whl", hash = "sha256:c001d4636cd3aecdaf33cbb40aebb59b094be2a74c556778ef5576c175e19e75"},
{file = "parso-0.8.3.tar.gz", hash = "sha256:8c07be290bb59f03588915921e29e8a50002acaf2cdc5fa0e0114f91709fafa0"},
]
pastel = [
{file = "pastel-0.2.1-py2.py3-none-any.whl", hash = "sha256:4349225fcdf6c2bb34d483e523475de5bb04a5c10ef711263452cb37d7dd4364"},
{file = "pastel-0.2.1.tar.gz", hash = "sha256:e6581ac04e973cac858828c6202c1e1e81fee1dc7de7683f3e1ffe0bfd8a573d"},
]
pathspec = [
{file = "pathspec-0.9.0-py2.py3-none-any.whl", hash = "sha256:7d15c4ddb0b5c802d161efc417ec1a2558ea2653c2e8ad9c19098201dc1c993a"},
{file = "pathspec-0.9.0.tar.gz", hash = "sha256:e564499435a2673d586f6b2130bb5b95f04a3ba06f81b8f895b651a3c76aabb1"},
]
patsy = [
{file = "patsy-0.5.2-py2.py3-none-any.whl", hash = "sha256:cc80955ae8c13a7e7c4051eda7b277c8f909f50bc7d73e124bc38e2ee3d95041"},
{file = "patsy-0.5.2.tar.gz", hash = "sha256:5053de7804676aba62783dbb0f23a2b3d74e35e5bfa238b88b7cbf148a38b69d"},
]
pexpect = [
{file = "pexpect-4.8.0-py2.py3-none-any.whl", hash = "sha256:0b48a55dcb3c05f3329815901ea4fc1537514d6ba867a152b581d69ae3710937"},
{file = "pexpect-4.8.0.tar.gz", hash = "sha256:fc65a43959d153d0114afe13997d439c22823a27cefceb5ff35c2178c6784c0c"},
]
pickleshare = [
{file = "pickleshare-0.7.5-py2.py3-none-any.whl", hash = "sha256:9649af414d74d4df115d5d718f82acb59c9d418196b7b4290ed47a12ce62df56"},
{file = "pickleshare-0.7.5.tar.gz", hash = "sha256:87683d47965c1da65cdacaf31c8441d12b8044cdec9aca500cd78fc2c683afca"},
]
pillow = [
{file = "Pillow-9.2.0-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:a9c9bc489f8ab30906d7a85afac4b4944a572a7432e00698a7239f44a44e6efb"},
{file = "Pillow-9.2.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:510cef4a3f401c246cfd8227b300828715dd055463cdca6176c2e4036df8bd4f"},
{file = "Pillow-9.2.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7888310f6214f19ab2b6df90f3f06afa3df7ef7355fc025e78a3044737fab1f5"},
{file = "Pillow-9.2.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:831e648102c82f152e14c1a0938689dbb22480c548c8d4b8b248b3e50967b88c"},
{file = "Pillow-9.2.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1cc1d2451e8a3b4bfdb9caf745b58e6c7a77d2e469159b0d527a4554d73694d1"},
{file = "Pillow-9.2.0-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:136659638f61a251e8ed3b331fc6ccd124590eeff539de57c5f80ef3a9594e58"},
{file = "Pillow-9.2.0-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:6e8c66f70fb539301e064f6478d7453e820d8a2c631da948a23384865cd95544"},
{file = "Pillow-9.2.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:37ff6b522a26d0538b753f0b4e8e164fdada12db6c6f00f62145d732d8a3152e"},
{file = "Pillow-9.2.0-cp310-cp310-win32.whl", hash = "sha256:c79698d4cd9318d9481d89a77e2d3fcaeff5486be641e60a4b49f3d2ecca4e28"},
{file = "Pillow-9.2.0-cp310-cp310-win_amd64.whl", hash = "sha256:254164c57bab4b459f14c64e93df11eff5ded575192c294a0c49270f22c5d93d"},
{file = "Pillow-9.2.0-cp311-cp311-macosx_10_10_universal2.whl", hash = "sha256:408673ed75594933714482501fe97e055a42996087eeca7e5d06e33218d05aa8"},
{file = "Pillow-9.2.0-cp311-cp311-macosx_11_0_universal2.whl", hash = "sha256:727dd1389bc5cb9827cbd1f9d40d2c2a1a0c9b32dd2261db522d22a604a6eec9"},
{file = "Pillow-9.2.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:50dff9cc21826d2977ef2d2a205504034e3a4563ca6f5db739b0d1026658e004"},
{file = "Pillow-9.2.0-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cb6259196a589123d755380b65127ddc60f4c64b21fc3bb46ce3a6ea663659b0"},
{file = "Pillow-9.2.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7b0554af24df2bf96618dac71ddada02420f946be943b181108cac55a7a2dcd4"},
{file = "Pillow-9.2.0-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:15928f824870535c85dbf949c09d6ae7d3d6ac2d6efec80f3227f73eefba741c"},
{file = "Pillow-9.2.0-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:bdd0de2d64688ecae88dd8935012c4a72681e5df632af903a1dca8c5e7aa871a"},
{file = "Pillow-9.2.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:d5b87da55a08acb586bad5c3aa3b86505f559b84f39035b233d5bf844b0834b1"},
{file = "Pillow-9.2.0-cp311-cp311-win32.whl", hash = "sha256:b6d5e92df2b77665e07ddb2e4dbd6d644b78e4c0d2e9272a852627cdba0d75cf"},
{file = "Pillow-9.2.0-cp311-cp311-win_amd64.whl", hash = "sha256:6bf088c1ce160f50ea40764f825ec9b72ed9da25346216b91361eef8ad1b8f8c"},
{file = "Pillow-9.2.0-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:2c58b24e3a63efd22554c676d81b0e57f80e0a7d3a5874a7e14ce90ec40d3069"},
{file = "Pillow-9.2.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:eef7592281f7c174d3d6cbfbb7ee5984a671fcd77e3fc78e973d492e9bf0eb3f"},
{file = "Pillow-9.2.0-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:dcd7b9c7139dc8258d164b55696ecd16c04607f1cc33ba7af86613881ffe4ac8"},
{file = "Pillow-9.2.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a138441e95562b3c078746a22f8fca8ff1c22c014f856278bdbdd89ca36cff1b"},
{file = "Pillow-9.2.0-cp37-cp37m-manylinux_2_28_aarch64.whl", hash = "sha256:93689632949aff41199090eff5474f3990b6823404e45d66a5d44304e9cdc467"},
{file = "Pillow-9.2.0-cp37-cp37m-manylinux_2_28_x86_64.whl", hash = "sha256:f3fac744f9b540148fa7715a435d2283b71f68bfb6d4aae24482a890aed18b59"},
{file = "Pillow-9.2.0-cp37-cp37m-win32.whl", hash = "sha256:fa768eff5f9f958270b081bb33581b4b569faabf8774726b283edb06617101dc"},
{file = "Pillow-9.2.0-cp37-cp37m-win_amd64.whl", hash = "sha256:69bd1a15d7ba3694631e00df8de65a8cb031911ca11f44929c97fe05eb9b6c1d"},
{file = "Pillow-9.2.0-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:030e3460861488e249731c3e7ab59b07c7853838ff3b8e16aac9561bb345da14"},
{file = "Pillow-9.2.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:74a04183e6e64930b667d321524e3c5361094bb4af9083db5c301db64cd341f3"},
{file = "Pillow-9.2.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2d33a11f601213dcd5718109c09a52c2a1c893e7461f0be2d6febc2879ec2402"},
{file = "Pillow-9.2.0-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1fd6f5e3c0e4697fa7eb45b6e93996299f3feee73a3175fa451f49a74d092b9f"},
{file = "Pillow-9.2.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a647c0d4478b995c5e54615a2e5360ccedd2f85e70ab57fbe817ca613d5e63b8"},
{file = "Pillow-9.2.0-cp38-cp38-manylinux_2_28_aarch64.whl", hash = "sha256:4134d3f1ba5f15027ff5c04296f13328fecd46921424084516bdb1b2548e66ff"},
{file = "Pillow-9.2.0-cp38-cp38-manylinux_2_28_x86_64.whl", hash = "sha256:bc431b065722a5ad1dfb4df354fb9333b7a582a5ee39a90e6ffff688d72f27a1"},
{file = "Pillow-9.2.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:1536ad017a9f789430fb6b8be8bf99d2f214c76502becc196c6f2d9a75b01b76"},
{file = "Pillow-9.2.0-cp38-cp38-win32.whl", hash = "sha256:2ad0d4df0f5ef2247e27fc790d5c9b5a0af8ade9ba340db4a73bb1a4a3e5fb4f"},
{file = "Pillow-9.2.0-cp38-cp38-win_amd64.whl", hash = "sha256:ec52c351b35ca269cb1f8069d610fc45c5bd38c3e91f9ab4cbbf0aebc136d9c8"},
{file = "Pillow-9.2.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:0ed2c4ef2451de908c90436d6e8092e13a43992f1860275b4d8082667fbb2ffc"},
{file = "Pillow-9.2.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:4ad2f835e0ad81d1689f1b7e3fbac7b01bb8777d5a985c8962bedee0cc6d43da"},
{file = "Pillow-9.2.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ea98f633d45f7e815db648fd7ff0f19e328302ac36427343e4432c84432e7ff4"},
{file = "Pillow-9.2.0-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7761afe0126d046974a01e030ae7529ed0ca6a196de3ec6937c11df0df1bc91c"},
{file = "Pillow-9.2.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9a54614049a18a2d6fe156e68e188da02a046a4a93cf24f373bffd977e943421"},
{file = "Pillow-9.2.0-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:5aed7dde98403cd91d86a1115c78d8145c83078e864c1de1064f52e6feb61b20"},
{file = "Pillow-9.2.0-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:13b725463f32df1bfeacbf3dd197fb358ae8ebcd8c5548faa75126ea425ccb60"},
{file = "Pillow-9.2.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:808add66ea764ed97d44dda1ac4f2cfec4c1867d9efb16a33d158be79f32b8a4"},
{file = "Pillow-9.2.0-cp39-cp39-win32.whl", hash = "sha256:337a74fd2f291c607d220c793a8135273c4c2ab001b03e601c36766005f36885"},
{file = "Pillow-9.2.0-cp39-cp39-win_amd64.whl", hash = "sha256:fac2d65901fb0fdf20363fbd345c01958a742f2dc62a8dd4495af66e3ff502a4"},
{file = "Pillow-9.2.0-pp37-pypy37_pp73-macosx_10_10_x86_64.whl", hash = "sha256:ad2277b185ebce47a63f4dc6302e30f05762b688f8dc3de55dbae4651872cdf3"},
{file = "Pillow-9.2.0-pp37-pypy37_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7c7b502bc34f6e32ba022b4a209638f9e097d7a9098104ae420eb8186217ebbb"},
{file = "Pillow-9.2.0-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3d1f14f5f691f55e1b47f824ca4fdcb4b19b4323fe43cc7bb105988cad7496be"},
{file = "Pillow-9.2.0-pp37-pypy37_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:dfe4c1fedfde4e2fbc009d5ad420647f7730d719786388b7de0999bf32c0d9fd"},
{file = "Pillow-9.2.0-pp38-pypy38_pp73-macosx_10_10_x86_64.whl", hash = "sha256:f07f1f00e22b231dd3d9b9208692042e29792d6bd4f6639415d2f23158a80013"},
{file = "Pillow-9.2.0-pp38-pypy38_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1802f34298f5ba11d55e5bb09c31997dc0c6aed919658dfdf0198a2fe75d5490"},
{file = "Pillow-9.2.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:17d4cafe22f050b46d983b71c707162d63d796a1235cdf8b9d7a112e97b15bac"},
{file = "Pillow-9.2.0-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:96b5e6874431df16aee0c1ba237574cb6dff1dcb173798faa6a9d8b399a05d0e"},
{file = "Pillow-9.2.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:0030fdbd926fb85844b8b92e2f9449ba89607231d3dd597a21ae72dc7fe26927"},
{file = "Pillow-9.2.0.tar.gz", hash = "sha256:75e636fd3e0fb872693f23ccb8a5ff2cd578801251f3a4f6854c6a5d437d3c04"},
]
pkginfo = [
{file = "pkginfo-1.8.3-py2.py3-none-any.whl", hash = "sha256:848865108ec99d4901b2f7e84058b6e7660aae8ae10164e015a6dcf5b242a594"},
{file = "pkginfo-1.8.3.tar.gz", hash = "sha256:a84da4318dd86f870a9447a8c98340aa06216bfc6f2b7bdc4b8766984ae1867c"},
]
pkgutil-resolve-name = [
{file = "pkgutil_resolve_name-1.3.10-py3-none-any.whl", hash = "sha256:ca27cc078d25c5ad71a9de0a7a330146c4e014c2462d9af19c6b828280649c5e"},
{file = "pkgutil_resolve_name-1.3.10.tar.gz", hash = "sha256:357d6c9e6a755653cfd78893817c0853af365dd51ec97f3d358a819373bbd174"},
]
platformdirs = [
{file = "platformdirs-2.5.2-py3-none-any.whl", hash = "sha256:027d8e83a2d7de06bbac4e5ef7e023c02b863d7ea5d079477e722bb41ab25788"},
{file = "platformdirs-2.5.2.tar.gz", hash = "sha256:58c8abb07dcb441e6ee4b11d8df0ac856038f944ab98b7be6b27b2a3c7feef19"},
]
pluggy = [
{file = "pluggy-1.0.0-py2.py3-none-any.whl", hash = "sha256:74134bbf457f031a36d68416e1509f34bd5ccc019f0bcc952c7b909d06b37bd3"},
{file = "pluggy-1.0.0.tar.gz", hash = "sha256:4224373bacce55f955a878bf9cfa763c1e360858e330072059e10bad68531159"},
]
plumbum = [
{file = "plumbum-1.7.2-py2.py3-none-any.whl", hash = "sha256:0bbf431e31da988405de2fb36c3226f09c0c9cdf69c8480f8997f4b94b7370a1"},
{file = "plumbum-1.7.2.tar.gz", hash = "sha256:0d1bf908076bbd0484d16412479cb97d6843069ee19f99e267e11dd980040523"},
]
ply = [
{file = "ply-3.11-py2.py3-none-any.whl", hash = "sha256:096f9b8350b65ebd2fd1346b12452efe5b9607f7482813ffca50c22722a807ce"},
{file = "ply-3.11.tar.gz", hash = "sha256:00c7c1aaa88358b9c765b6d3000c6eec0ba42abca5351b095321aef446081da3"},
]
poethepoet = [
{file = "poethepoet-0.16.0-py3-none-any.whl", hash = "sha256:87482ea8bba4e5db4abbd8e6360baee73b2ce0f3d5f5e99e81cdfa39d72d118f"},
{file = "poethepoet-0.16.0.tar.gz", hash = "sha256:6455aec39f198be92dbf210a4416e1635119e967204c092b431c8b10024db1d1"},
]
progressbar2 = [
{file = "progressbar2-4.0.0-py2.py3-none-any.whl", hash = "sha256:2562ba3e554433f08e81fb7b786208b19de135f3ca1c5da1787d9b05558e6247"},
{file = "progressbar2-4.0.0.tar.gz", hash = "sha256:14d3165a1781d053ffaa117daf27cc706128d2ec1d2977fdb05b6bb079888013"},
]
prometheus-client = [
{file = "prometheus_client-0.14.1-py3-none-any.whl", hash = "sha256:522fded625282822a89e2773452f42df14b5a8e84a86433e3f8a189c1d54dc01"},
{file = "prometheus_client-0.14.1.tar.gz", hash = "sha256:5459c427624961076277fdc6dc50540e2bacb98eebde99886e59ec55ed92093a"},
]
prompt-toolkit = [
{file = "prompt_toolkit-3.0.30-py3-none-any.whl", hash = "sha256:d8916d3f62a7b67ab353a952ce4ced6a1d2587dfe9ef8ebc30dd7c386751f289"},
{file = "prompt_toolkit-3.0.30.tar.gz", hash = "sha256:859b283c50bde45f5f97829f77a4674d1c1fcd88539364f1b28a37805cfd89c0"},
]
protobuf = [
{file = "protobuf-3.19.4-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:f51d5a9f137f7a2cec2d326a74b6e3fc79d635d69ffe1b036d39fc7d75430d37"},
{file = "protobuf-3.19.4-cp310-cp310-manylinux2014_aarch64.whl", hash = "sha256:09297b7972da685ce269ec52af761743714996b4381c085205914c41fcab59fb"},
{file = "protobuf-3.19.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:072fbc78d705d3edc7ccac58a62c4c8e0cec856987da7df8aca86e647be4e35c"},
{file = "protobuf-3.19.4-cp310-cp310-win32.whl", hash = "sha256:7bb03bc2873a2842e5ebb4801f5c7ff1bfbdf426f85d0172f7644fcda0671ae0"},
{file = "protobuf-3.19.4-cp310-cp310-win_amd64.whl", hash = "sha256:f358aa33e03b7a84e0d91270a4d4d8f5df6921abe99a377828839e8ed0c04e07"},
{file = "protobuf-3.19.4-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:1c91ef4110fdd2c590effb5dca8fdbdcb3bf563eece99287019c4204f53d81a4"},
{file = "protobuf-3.19.4-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c438268eebb8cf039552897d78f402d734a404f1360592fef55297285f7f953f"},
{file = "protobuf-3.19.4-cp36-cp36m-win32.whl", hash = "sha256:835a9c949dc193953c319603b2961c5c8f4327957fe23d914ca80d982665e8ee"},
{file = "protobuf-3.19.4-cp36-cp36m-win_amd64.whl", hash = "sha256:4276cdec4447bd5015453e41bdc0c0c1234eda08420b7c9a18b8d647add51e4b"},
{file = "protobuf-3.19.4-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:6cbc312be5e71869d9d5ea25147cdf652a6781cf4d906497ca7690b7b9b5df13"},
{file = "protobuf-3.19.4-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:54a1473077f3b616779ce31f477351a45b4fef8c9fd7892d6d87e287a38df368"},
{file = "protobuf-3.19.4-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:435bb78b37fc386f9275a7035fe4fb1364484e38980d0dd91bc834a02c5ec909"},
{file = "protobuf-3.19.4-cp37-cp37m-win32.whl", hash = "sha256:16f519de1313f1b7139ad70772e7db515b1420d208cb16c6d7858ea989fc64a9"},
{file = "protobuf-3.19.4-cp37-cp37m-win_amd64.whl", hash = "sha256:cdc076c03381f5c1d9bb1abdcc5503d9ca8b53cf0a9d31a9f6754ec9e6c8af0f"},
{file = "protobuf-3.19.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:69da7d39e39942bd52848438462674c463e23963a1fdaa84d88df7fbd7e749b2"},
{file = "protobuf-3.19.4-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:48ed3877fa43e22bcacc852ca76d4775741f9709dd9575881a373bd3e85e54b2"},
{file = "protobuf-3.19.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bd95d1dfb9c4f4563e6093a9aa19d9c186bf98fa54da5252531cc0d3a07977e7"},
{file = "protobuf-3.19.4-cp38-cp38-win32.whl", hash = "sha256:b38057450a0c566cbd04890a40edf916db890f2818e8682221611d78dc32ae26"},
{file = "protobuf-3.19.4-cp38-cp38-win_amd64.whl", hash = "sha256:7ca7da9c339ca8890d66958f5462beabd611eca6c958691a8fe6eccbd1eb0c6e"},
{file = "protobuf-3.19.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:36cecbabbda242915529b8ff364f2263cd4de7c46bbe361418b5ed859677ba58"},
{file = "protobuf-3.19.4-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:c1068287025f8ea025103e37d62ffd63fec8e9e636246b89c341aeda8a67c934"},
{file = "protobuf-3.19.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:96bd766831596d6014ca88d86dc8fe0fb2e428c0b02432fd9db3943202bf8c5e"},
{file = "protobuf-3.19.4-cp39-cp39-win32.whl", hash = "sha256:84123274d982b9e248a143dadd1b9815049f4477dc783bf84efe6250eb4b836a"},
{file = "protobuf-3.19.4-cp39-cp39-win_amd64.whl", hash = "sha256:3112b58aac3bac9c8be2b60a9daf6b558ca3f7681c130dcdd788ade7c9ffbdca"},
{file = "protobuf-3.19.4-py2.py3-none-any.whl", hash = "sha256:8961c3a78ebfcd000920c9060a262f082f29838682b1f7201889300c1fbe0616"},
{file = "protobuf-3.19.4.tar.gz", hash = "sha256:9df0c10adf3e83015ced42a9a7bd64e13d06c4cf45c340d2c63020ea04499d0a"},
]
psutil = [
{file = "psutil-5.9.1-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:799759d809c31aab5fe4579e50addf84565e71c1dc9f1c31258f159ff70d3f87"},
{file = "psutil-5.9.1-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:9272167b5f5fbfe16945be3db475b3ce8d792386907e673a209da686176552af"},
{file = "psutil-5.9.1-cp27-cp27m-win32.whl", hash = "sha256:0904727e0b0a038830b019551cf3204dd48ef5c6868adc776e06e93d615fc5fc"},
{file = "psutil-5.9.1-cp27-cp27m-win_amd64.whl", hash = "sha256:e7e10454cb1ab62cc6ce776e1c135a64045a11ec4c6d254d3f7689c16eb3efd2"},
{file = "psutil-5.9.1-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:56960b9e8edcca1456f8c86a196f0c3d8e3e361320071c93378d41445ffd28b0"},
{file = "psutil-5.9.1-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:44d1826150d49ffd62035785a9e2c56afcea66e55b43b8b630d7706276e87f22"},
{file = "psutil-5.9.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:c7be9d7f5b0d206f0bbc3794b8e16fb7dbc53ec9e40bbe8787c6f2d38efcf6c9"},
{file = "psutil-5.9.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:abd9246e4cdd5b554a2ddd97c157e292ac11ef3e7af25ac56b08b455c829dca8"},
{file = "psutil-5.9.1-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:29a442e25fab1f4d05e2655bb1b8ab6887981838d22effa2396d584b740194de"},
{file = "psutil-5.9.1-cp310-cp310-win32.whl", hash = "sha256:20b27771b077dcaa0de1de3ad52d22538fe101f9946d6dc7869e6f694f079329"},
{file = "psutil-5.9.1-cp310-cp310-win_amd64.whl", hash = "sha256:58678bbadae12e0db55186dc58f2888839228ac9f41cc7848853539b70490021"},
{file = "psutil-5.9.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:3a76ad658641172d9c6e593de6fe248ddde825b5866464c3b2ee26c35da9d237"},
{file = "psutil-5.9.1-cp36-cp36m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a6a11e48cb93a5fa606306493f439b4aa7c56cb03fc9ace7f6bfa21aaf07c453"},
{file = "psutil-5.9.1-cp36-cp36m-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:068935df39055bf27a29824b95c801c7a5130f118b806eee663cad28dca97685"},
{file = "psutil-5.9.1-cp36-cp36m-win32.whl", hash = "sha256:0f15a19a05f39a09327345bc279c1ba4a8cfb0172cc0d3c7f7d16c813b2e7d36"},
{file = "psutil-5.9.1-cp36-cp36m-win_amd64.whl", hash = "sha256:db417f0865f90bdc07fa30e1aadc69b6f4cad7f86324b02aa842034efe8d8c4d"},
{file = "psutil-5.9.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:91c7ff2a40c373d0cc9121d54bc5f31c4fa09c346528e6a08d1845bce5771ffc"},
{file = "psutil-5.9.1-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fea896b54f3a4ae6f790ac1d017101252c93f6fe075d0e7571543510f11d2676"},
{file = "psutil-5.9.1-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3054e923204b8e9c23a55b23b6df73a8089ae1d075cb0bf711d3e9da1724ded4"},
{file = "psutil-5.9.1-cp37-cp37m-win32.whl", hash = "sha256:d2d006286fbcb60f0b391741f520862e9b69f4019b4d738a2a45728c7e952f1b"},
{file = "psutil-5.9.1-cp37-cp37m-win_amd64.whl", hash = "sha256:b14ee12da9338f5e5b3a3ef7ca58b3cba30f5b66f7662159762932e6d0b8f680"},
{file = "psutil-5.9.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:19f36c16012ba9cfc742604df189f2f28d2720e23ff7d1e81602dbe066be9fd1"},
{file = "psutil-5.9.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:944c4b4b82dc4a1b805329c980f270f170fdc9945464223f2ec8e57563139cf4"},
{file = "psutil-5.9.1-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4b6750a73a9c4a4e689490ccb862d53c7b976a2a35c4e1846d049dcc3f17d83b"},
{file = "psutil-5.9.1-cp38-cp38-win32.whl", hash = "sha256:a8746bfe4e8f659528c5c7e9af5090c5a7d252f32b2e859c584ef7d8efb1e689"},
{file = "psutil-5.9.1-cp38-cp38-win_amd64.whl", hash = "sha256:79c9108d9aa7fa6fba6e668b61b82facc067a6b81517cab34d07a84aa89f3df0"},
{file = "psutil-5.9.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:28976df6c64ddd6320d281128817f32c29b539a52bdae5e192537bc338a9ec81"},
{file = "psutil-5.9.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b88f75005586131276634027f4219d06e0561292be8bd6bc7f2f00bdabd63c4e"},
{file = "psutil-5.9.1-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:645bd4f7bb5b8633803e0b6746ff1628724668681a434482546887d22c7a9537"},
{file = "psutil-5.9.1-cp39-cp39-win32.whl", hash = "sha256:32c52611756096ae91f5d1499fe6c53b86f4a9ada147ee42db4991ba1520e574"},
{file = "psutil-5.9.1-cp39-cp39-win_amd64.whl", hash = "sha256:f65f9a46d984b8cd9b3750c2bdb419b2996895b005aefa6cbaba9a143b1ce2c5"},
{file = "psutil-5.9.1.tar.gz", hash = "sha256:57f1819b5d9e95cdfb0c881a8a5b7d542ed0b7c522d575706a80bedc848c8954"},
]
ptyprocess = [
{file = "ptyprocess-0.7.0-py2.py3-none-any.whl", hash = "sha256:4b41f3967fce3af57cc7e94b888626c18bf37a083e3651ca8feeb66d492fef35"},
{file = "ptyprocess-0.7.0.tar.gz", hash = "sha256:5c5d0a3b48ceee0b48485e0c26037c0acd7d29765ca3fbb5cb3831d347423220"},
]
pure-eval = [
{file = "pure_eval-0.2.2-py3-none-any.whl", hash = "sha256:01eaab343580944bc56080ebe0a674b39ec44a945e6d09ba7db3cb8cec289350"},
{file = "pure_eval-0.2.2.tar.gz", hash = "sha256:2b45320af6dfaa1750f543d714b6d1c520a1688dec6fd24d339063ce0aaa9ac3"},
]
py = [
{file = "py-1.11.0-py2.py3-none-any.whl", hash = "sha256:607c53218732647dff4acdfcd50cb62615cedf612e72d1724fb1a0cc6405b378"},
{file = "py-1.11.0.tar.gz", hash = "sha256:51c75c4126074b472f746a24399ad32f6053d1b34b68d2fa41e558e6f4a98719"},
]
pyasn1 = [
{file = "pyasn1-0.4.8-py2.4.egg", hash = "sha256:fec3e9d8e36808a28efb59b489e4528c10ad0f480e57dcc32b4de5c9d8c9fdf3"},
{file = "pyasn1-0.4.8-py2.5.egg", hash = "sha256:0458773cfe65b153891ac249bcf1b5f8f320b7c2ce462151f8fa74de8934becf"},
{file = "pyasn1-0.4.8-py2.6.egg", hash = "sha256:5c9414dcfede6e441f7e8f81b43b34e834731003427e5b09e4e00e3172a10f00"},
{file = "pyasn1-0.4.8-py2.7.egg", hash = "sha256:6e7545f1a61025a4e58bb336952c5061697da694db1cae97b116e9c46abcf7c8"},
{file = "pyasn1-0.4.8-py2.py3-none-any.whl", hash = "sha256:39c7e2ec30515947ff4e87fb6f456dfc6e84857d34be479c9d4a4ba4bf46aa5d"},
{file = "pyasn1-0.4.8-py3.1.egg", hash = "sha256:78fa6da68ed2727915c4767bb386ab32cdba863caa7dbe473eaae45f9959da86"},
{file = "pyasn1-0.4.8-py3.2.egg", hash = "sha256:08c3c53b75eaa48d71cf8c710312316392ed40899cb34710d092e96745a358b7"},
{file = "pyasn1-0.4.8-py3.3.egg", hash = "sha256:03840c999ba71680a131cfaee6fab142e1ed9bbd9c693e285cc6aca0d555e576"},
{file = "pyasn1-0.4.8-py3.4.egg", hash = "sha256:7ab8a544af125fb704feadb008c99a88805126fb525280b2270bb25cc1d78a12"},
{file = "pyasn1-0.4.8-py3.5.egg", hash = "sha256:e89bf84b5437b532b0803ba5c9a5e054d21fec423a89952a74f87fa2c9b7bce2"},
{file = "pyasn1-0.4.8-py3.6.egg", hash = "sha256:014c0e9976956a08139dc0712ae195324a75e142284d5f87f1a87ee1b068a359"},
{file = "pyasn1-0.4.8-py3.7.egg", hash = "sha256:99fcc3c8d804d1bc6d9a099921e39d827026409a58f2a720dcdb89374ea0c776"},
{file = "pyasn1-0.4.8.tar.gz", hash = "sha256:aef77c9fb94a3ac588e87841208bdec464471d9871bd5050a287cc9a475cd0ba"},
]
pyasn1-modules = [
{file = "pyasn1-modules-0.2.8.tar.gz", hash = "sha256:905f84c712230b2c592c19470d3ca8d552de726050d1d1716282a1f6146be65e"},
{file = "pyasn1_modules-0.2.8-py2.4.egg", hash = "sha256:0fe1b68d1e486a1ed5473f1302bd991c1611d319bba158e98b106ff86e1d7199"},
{file = "pyasn1_modules-0.2.8-py2.5.egg", hash = "sha256:fe0644d9ab041506b62782e92b06b8c68cca799e1a9636ec398675459e031405"},
{file = "pyasn1_modules-0.2.8-py2.6.egg", hash = "sha256:a99324196732f53093a84c4369c996713eb8c89d360a496b599fb1a9c47fc3eb"},
{file = "pyasn1_modules-0.2.8-py2.7.egg", hash = "sha256:0845a5582f6a02bb3e1bde9ecfc4bfcae6ec3210dd270522fee602365430c3f8"},
{file = "pyasn1_modules-0.2.8-py2.py3-none-any.whl", hash = "sha256:a50b808ffeb97cb3601dd25981f6b016cbb3d31fbf57a8b8a87428e6158d0c74"},
{file = "pyasn1_modules-0.2.8-py3.1.egg", hash = "sha256:f39edd8c4ecaa4556e989147ebf219227e2cd2e8a43c7e7fcb1f1c18c5fd6a3d"},
{file = "pyasn1_modules-0.2.8-py3.2.egg", hash = "sha256:b80486a6c77252ea3a3e9b1e360bc9cf28eaac41263d173c032581ad2f20fe45"},
{file = "pyasn1_modules-0.2.8-py3.3.egg", hash = "sha256:65cebbaffc913f4fe9e4808735c95ea22d7a7775646ab690518c056784bc21b4"},
{file = "pyasn1_modules-0.2.8-py3.4.egg", hash = "sha256:15b7c67fabc7fc240d87fb9aabf999cf82311a6d6fb2c70d00d3d0604878c811"},
{file = "pyasn1_modules-0.2.8-py3.5.egg", hash = "sha256:426edb7a5e8879f1ec54a1864f16b882c2837bfd06eee62f2c982315ee2473ed"},
{file = "pyasn1_modules-0.2.8-py3.6.egg", hash = "sha256:cbac4bc38d117f2a49aeedec4407d23e8866ea4ac27ff2cf7fb3e5b570df19e0"},
{file = "pyasn1_modules-0.2.8-py3.7.egg", hash = "sha256:c29a5e5cc7a3f05926aff34e097e84f8589cd790ce0ed41b67aed6857b26aafd"},
]
pycodestyle = [
{file = "pycodestyle-2.8.0-py2.py3-none-any.whl", hash = "sha256:720f8b39dde8b293825e7ff02c475f3077124006db4f440dcbc9a20b76548a20"},
{file = "pycodestyle-2.8.0.tar.gz", hash = "sha256:eddd5847ef438ea1c7870ca7eb78a9d47ce0cdb4851a5523949f2601d0cbbe7f"},
]
pycparser = [
{file = "pycparser-2.21-py2.py3-none-any.whl", hash = "sha256:8ee45429555515e1f6b185e78100aea234072576aa43ab53aefcae078162fca9"},
{file = "pycparser-2.21.tar.gz", hash = "sha256:e644fdec12f7872f86c58ff790da456218b10f863970249516d60a5eaca77206"},
]
pydata-sphinx-theme = [
{file = "pydata_sphinx_theme-0.9.0-py3-none-any.whl", hash = "sha256:b22b442a6d6437e5eaf0a1f057169ffcb31eaa9f10be7d5481a125e735c71c12"},
{file = "pydata_sphinx_theme-0.9.0.tar.gz", hash = "sha256:03598a86915b596f4bf80bef79a4d33276a83e670bf360def699dbb9f99dc57a"},
]
pydot = [
{file = "pydot-1.4.2-py2.py3-none-any.whl", hash = "sha256:66c98190c65b8d2e2382a441b4c0edfdb4f4c025ef9cb9874de478fb0793a451"},
{file = "pydot-1.4.2.tar.gz", hash = "sha256:248081a39bcb56784deb018977e428605c1c758f10897a339fce1dd728ff007d"},
]
pydotplus = [
{file = "pydotplus-2.0.2.tar.gz", hash = "sha256:91e85e9ee9b85d2391ead7d635e3d9c7f5f44fd60a60e59b13e2403fa66505c4"},
]
pyflakes = [
{file = "pyflakes-2.4.0-py2.py3-none-any.whl", hash = "sha256:3bb3a3f256f4b7968c9c788781e4ff07dce46bdf12339dcda61053375426ee2e"},
{file = "pyflakes-2.4.0.tar.gz", hash = "sha256:05a85c2872edf37a4ed30b0cce2f6093e1d0581f8c19d7393122da7e25b2b24c"},
]
pygam = [
{file = "pygam-0.8.0-py2.py3-none-any.whl", hash = "sha256:198bd478700520b7c399cc4bcbc011e46850969c32fb09ef0b7a4bbb14e842a5"},
{file = "pygam-0.8.0.tar.gz", hash = "sha256:5cae01aea8b2fede72a6da0aba1490213af54b3476745666af26bbe700479166"},
]
pygments = [
{file = "Pygments-2.12.0-py3-none-any.whl", hash = "sha256:dc9c10fb40944260f6ed4c688ece0cd2048414940f1cea51b8b226318411c519"},
{file = "Pygments-2.12.0.tar.gz", hash = "sha256:5eb116118f9612ff1ee89ac96437bb6b49e8f04d8a13b514ba26f620208e26eb"},
]
pygraphviz = [
{file = "pygraphviz-1.9.zip", hash = "sha256:fa18f7c6cea28341a4e466ed0cf05682b0a68288afe8dd7c9426782f7c1ae01c"},
]
pyparsing = [
{file = "pyparsing-3.0.9-py3-none-any.whl", hash = "sha256:5026bae9a10eeaefb61dab2f09052b9f4307d44aee4eda64b309723d8d206bbc"},
{file = "pyparsing-3.0.9.tar.gz", hash = "sha256:2b020ecf7d21b687f219b71ecad3631f644a47f01403fa1d1036b0c6416d70fb"},
]
pyro-api = [
{file = "pyro-api-0.1.2.tar.gz", hash = "sha256:a1b900d9580aa1c2fab3b123ab7ff33413744da7c5f440bd4aadc4d40d14d920"},
{file = "pyro_api-0.1.2-py3-none-any.whl", hash = "sha256:10e0e42e9e4401ce464dab79c870e50dfb4f413d326fa777f3582928ef9caf8f"},
]
pyro-ppl = [
{file = "pyro-ppl-1.8.1.tar.gz", hash = "sha256:d7c049eb2e7485a612b4dd99c24c309cc860c7cbc6b1973387034f5436d1c8d6"},
{file = "pyro_ppl-1.8.1-py3-none-any.whl", hash = "sha256:ca01ab4565eb9a1af4a60dbc481da5cb6f5fe5a72efa19e83638e03683efbca6"},
]
pyrsistent = [
{file = "pyrsistent-0.18.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:df46c854f490f81210870e509818b729db4488e1f30f2a1ce1698b2295a878d1"},
{file = "pyrsistent-0.18.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d45866ececf4a5fff8742c25722da6d4c9e180daa7b405dc0a2a2790d668c26"},
{file = "pyrsistent-0.18.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4ed6784ceac462a7d6fcb7e9b663e93b9a6fb373b7f43594f9ff68875788e01e"},
{file = "pyrsistent-0.18.1-cp310-cp310-win32.whl", hash = "sha256:e4f3149fd5eb9b285d6bfb54d2e5173f6a116fe19172686797c056672689daf6"},
{file = "pyrsistent-0.18.1-cp310-cp310-win_amd64.whl", hash = "sha256:636ce2dc235046ccd3d8c56a7ad54e99d5c1cd0ef07d9ae847306c91d11b5fec"},
{file = "pyrsistent-0.18.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:e92a52c166426efbe0d1ec1332ee9119b6d32fc1f0bbfd55d5c1088070e7fc1b"},
{file = "pyrsistent-0.18.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d7a096646eab884bf8bed965bad63ea327e0d0c38989fc83c5ea7b8a87037bfc"},
{file = "pyrsistent-0.18.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cdfd2c361b8a8e5d9499b9082b501c452ade8bbf42aef97ea04854f4a3f43b22"},
{file = "pyrsistent-0.18.1-cp37-cp37m-win32.whl", hash = "sha256:7ec335fc998faa4febe75cc5268a9eac0478b3f681602c1f27befaf2a1abe1d8"},
{file = "pyrsistent-0.18.1-cp37-cp37m-win_amd64.whl", hash = "sha256:6455fc599df93d1f60e1c5c4fe471499f08d190d57eca040c0ea182301321286"},
{file = "pyrsistent-0.18.1-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:fd8da6d0124efa2f67d86fa70c851022f87c98e205f0594e1fae044e7119a5a6"},
{file = "pyrsistent-0.18.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7bfe2388663fd18bd8ce7db2c91c7400bf3e1a9e8bd7d63bf7e77d39051b85ec"},
{file = "pyrsistent-0.18.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0e3e1fcc45199df76053026a51cc59ab2ea3fc7c094c6627e93b7b44cdae2c8c"},
{file = "pyrsistent-0.18.1-cp38-cp38-win32.whl", hash = "sha256:b568f35ad53a7b07ed9b1b2bae09eb15cdd671a5ba5d2c66caee40dbf91c68ca"},
{file = "pyrsistent-0.18.1-cp38-cp38-win_amd64.whl", hash = "sha256:d1b96547410f76078eaf66d282ddca2e4baae8964364abb4f4dcdde855cd123a"},
{file = "pyrsistent-0.18.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:f87cc2863ef33c709e237d4b5f4502a62a00fab450c9e020892e8e2ede5847f5"},
{file = "pyrsistent-0.18.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6bc66318fb7ee012071b2792024564973ecc80e9522842eb4e17743604b5e045"},
{file = "pyrsistent-0.18.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:914474c9f1d93080338ace89cb2acee74f4f666fb0424896fcfb8d86058bf17c"},
{file = "pyrsistent-0.18.1-cp39-cp39-win32.whl", hash = "sha256:1b34eedd6812bf4d33814fca1b66005805d3640ce53140ab8bbb1e2651b0d9bc"},
{file = "pyrsistent-0.18.1-cp39-cp39-win_amd64.whl", hash = "sha256:e24a828f57e0c337c8d8bb9f6b12f09dfdf0273da25fda9e314f0b684b415a07"},
{file = "pyrsistent-0.18.1.tar.gz", hash = "sha256:d4d61f8b993a7255ba714df3aca52700f8125289f84f704cf80916517c46eb96"},
]
pytest = [
{file = "pytest-7.1.2-py3-none-any.whl", hash = "sha256:13d0e3ccfc2b6e26be000cb6568c832ba67ba32e719443bfe725814d3c42433c"},
{file = "pytest-7.1.2.tar.gz", hash = "sha256:a06a0425453864a270bc45e71f783330a7428defb4230fb5e6a731fde06ecd45"},
]
python-dateutil = [
{file = "python-dateutil-2.8.2.tar.gz", hash = "sha256:0123cacc1627ae19ddf3c27a5de5bd67ee4586fbdd6440d9748f8abb483d3e86"},
{file = "python_dateutil-2.8.2-py2.py3-none-any.whl", hash = "sha256:961d03dc3453ebbc59dbdea9e4e11c5651520a876d0f4db161e8674aae935da9"},
]
python-utils = [
{file = "python-utils-3.3.3.tar.gz", hash = "sha256:3b1c8b706e40e91280eec5fa72ea730880a166cee99afa5555b863d55664478d"},
{file = "python_utils-3.3.3-py2.py3-none-any.whl", hash = "sha256:5cb9cf295018202fb4d6abdd694a33a7f08bc0ce1bf1eab8cce80ab2cde35c07"},
]
pytz = [
{file = "pytz-2022.2-py2.py3-none-any.whl", hash = "sha256:d9b245e63af49c4e51afdec5402f56b99c0cb483a84a12bb8b7db980386baade"},
{file = "pytz-2022.2.tar.gz", hash = "sha256:bc824559e43e8ab983426a49525079d186b25372ff63aa3430ccd527d95edc3a"},
]
pytz-deprecation-shim = [
{file = "pytz_deprecation_shim-0.1.0.post0-py2.py3-none-any.whl", hash = "sha256:8314c9692a636c8eb3bda879b9f119e350e93223ae83e70e80c31675a0fdc1a6"},
{file = "pytz_deprecation_shim-0.1.0.post0.tar.gz", hash = "sha256:af097bae1b616dde5c5744441e2ddc69e74dfdcb0c263129610d85b87445a59d"},
]
pywin32 = [
{file = "pywin32-304-cp310-cp310-win32.whl", hash = "sha256:3c7bacf5e24298c86314f03fa20e16558a4e4138fc34615d7de4070c23e65af3"},
{file = "pywin32-304-cp310-cp310-win_amd64.whl", hash = "sha256:4f32145913a2447736dad62495199a8e280a77a0ca662daa2332acf849f0be48"},
{file = "pywin32-304-cp310-cp310-win_arm64.whl", hash = "sha256:d3ee45adff48e0551d1aa60d2ec066fec006083b791f5c3527c40cd8aefac71f"},
{file = "pywin32-304-cp311-cp311-win32.whl", hash = "sha256:30c53d6ce44c12a316a06c153ea74152d3b1342610f1b99d40ba2795e5af0269"},
{file = "pywin32-304-cp311-cp311-win_amd64.whl", hash = "sha256:7ffa0c0fa4ae4077e8b8aa73800540ef8c24530057768c3ac57c609f99a14fd4"},
{file = "pywin32-304-cp311-cp311-win_arm64.whl", hash = "sha256:cbbe34dad39bdbaa2889a424d28752f1b4971939b14b1bb48cbf0182a3bcfc43"},
{file = "pywin32-304-cp36-cp36m-win32.whl", hash = "sha256:be253e7b14bc601718f014d2832e4c18a5b023cbe72db826da63df76b77507a1"},
{file = "pywin32-304-cp36-cp36m-win_amd64.whl", hash = "sha256:de9827c23321dcf43d2f288f09f3b6d772fee11e809015bdae9e69fe13213988"},
{file = "pywin32-304-cp37-cp37m-win32.whl", hash = "sha256:f64c0377cf01b61bd5e76c25e1480ca8ab3b73f0c4add50538d332afdf8f69c5"},
{file = "pywin32-304-cp37-cp37m-win_amd64.whl", hash = "sha256:bb2ea2aa81e96eee6a6b79d87e1d1648d3f8b87f9a64499e0b92b30d141e76df"},
{file = "pywin32-304-cp38-cp38-win32.whl", hash = "sha256:94037b5259701988954931333aafd39cf897e990852115656b014ce72e052e96"},
{file = "pywin32-304-cp38-cp38-win_amd64.whl", hash = "sha256:ead865a2e179b30fb717831f73cf4373401fc62fbc3455a0889a7ddac848f83e"},
{file = "pywin32-304-cp39-cp39-win32.whl", hash = "sha256:25746d841201fd9f96b648a248f731c1dec851c9a08b8e33da8b56148e4c65cc"},
{file = "pywin32-304-cp39-cp39-win_amd64.whl", hash = "sha256:d24a3382f013b21aa24a5cfbfad5a2cd9926610c0affde3e8ab5b3d7dbcf4ac9"},
]
pywin32-ctypes = [
{file = "pywin32-ctypes-0.2.0.tar.gz", hash = "sha256:24ffc3b341d457d48e8922352130cf2644024a4ff09762a2261fd34c36ee5942"},
{file = "pywin32_ctypes-0.2.0-py2.py3-none-any.whl", hash = "sha256:9dc2d991b3479cc2df15930958b674a48a227d5361d413827a4cfd0b5876fc98"},
]
pywinpty = [
{file = "pywinpty-2.0.7-cp310-none-win_amd64.whl", hash = "sha256:d56361ed2bd3395347882a7a4e6246359e745a233e89c91786ab3d9421323c17"},
{file = "pywinpty-2.0.7-cp37-none-win_amd64.whl", hash = "sha256:2d62ede3ed10feb0901b3b4667201766a741b6a2c69f27be623ba9fe9348447b"},
{file = "pywinpty-2.0.7-cp38-none-win_amd64.whl", hash = "sha256:c3b7e6a2f0e5f86e0dc5cb5e4fec7de19adacc6900232e4a48a2ecf04bae447f"},
{file = "pywinpty-2.0.7-cp39-none-win_amd64.whl", hash = "sha256:80a6713a586401c2a19efd2969ffd019eb85f18442611a3880e3d618887d2f84"},
{file = "pywinpty-2.0.7.tar.gz", hash = "sha256:f52b2e51c46dac40708ede1d42577f3ddb9d7cf8acaa36c8e27b3d3b975f4c95"},
]
pyzmq = [
{file = "pyzmq-23.2.1-cp310-cp310-macosx_10_15_universal2.whl", hash = "sha256:a3fd44b5046d247e7f0f1660bcafe7b5fb0db55d0934c05dd57dda9e1f823ce7"},
{file = "pyzmq-23.2.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:2141e6798d5981be04c08996d27962086a1aa3ea536fe9cf7e89817fd4523f86"},
{file = "pyzmq-23.2.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9a39ddb0431a68954bd318b923230fa5b649c9c62b0e8340388820c5f1b15bd2"},
{file = "pyzmq-23.2.1-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e06747014a5ad1b28cebf5bc1ddcdaccfb44e9b441d35e6feb1286c8a72e54be"},
{file = "pyzmq-23.2.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7e0113d70b095339e99bb522fe7294f5ae6a7f3b2b8f52f659469a74b5cc7661"},
{file = "pyzmq-23.2.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:71b32a1e827bdcbf73750e60370d3b07685816ff3d8695f450f0f8c3226503f8"},
{file = "pyzmq-23.2.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:55568a020ad2cae9ae36da6058e7ca332a56df968f601cbdb7cf6efb2a77579a"},
{file = "pyzmq-23.2.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:8c02a0cd39dc01659b3d6cb70bb3a41aebd9885fd78239acdd8d9c91351c4568"},
{file = "pyzmq-23.2.1-cp310-cp310-win32.whl", hash = "sha256:e1fe30bcd5aea5948c42685fad910cd285eacb2518ea4dc6c170d6b535bee95d"},
{file = "pyzmq-23.2.1-cp310-cp310-win_amd64.whl", hash = "sha256:650389bbfca73955b262b2230423d89992f38ec48033307ae80e700eaa2fbb63"},
{file = "pyzmq-23.2.1-cp311-cp311-macosx_10_15_universal2.whl", hash = "sha256:e753eee6d3b93c5354e8ba0a1d62956ee49355f0a36e00570823ef64e66183f5"},
{file = "pyzmq-23.2.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:f07016e3cf088dbfc6e7c5a7b3f540db5c23b0190d539e4fd3e2b5e6beffa4b5"},
{file = "pyzmq-23.2.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4805af9614b0b41b7e57d17673459facf85604dac502a5a9244f6e8c9a4de658"},
{file = "pyzmq-23.2.1-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:39dd252b683816935702825e5bf775df16090619ced9bb4ba68c2d0b6f0c9b18"},
{file = "pyzmq-23.2.1-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:84678153432241bcdca2210cf4ff83560b200556867aea913ffbb960f5d5f340"},
{file = "pyzmq-23.2.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:90d88f9d9a2ae6cfb1dc4ea2d1710cdf6456bc1b9a06dd1bb485c5d298f2517e"},
{file = "pyzmq-23.2.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:794871988c34727c7f79bdfe2546e6854ae1fa2e1feb382784f23a9c6c63ecb3"},
{file = "pyzmq-23.2.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:c56b1a62a1fb87565343c57b6743fd5da6e138b8c6562361d7d9b5ce4acf399a"},
{file = "pyzmq-23.2.1-cp311-cp311-win32.whl", hash = "sha256:c3ebf1668664d20c8f7d468955f18379b7d1f7bc8946b13243d050fa3888c7ff"},
{file = "pyzmq-23.2.1-cp311-cp311-win_amd64.whl", hash = "sha256:ec9803aca9491fd6f0d853d2a6147f19f8deaaa23b1b713d05c5d09e56ea7142"},
{file = "pyzmq-23.2.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:385609812eafd9970c3752c51f2f6c4f224807e3e441bcfd8c8273877d00c8a8"},
{file = "pyzmq-23.2.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b861db65f6b8906c8d6db51dde2448f266f0c66bf28db2c37aea50f58a849859"},
{file = "pyzmq-23.2.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:6b1e79bba24f6df1712e3188d5c32c480d8eda03e8ecff44dc8ecb0805fa62f3"},
{file = "pyzmq-23.2.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:8dc66f109a245653b19df0f44a5af7a3f14cb8ad6c780ead506158a057bd36ce"},
{file = "pyzmq-23.2.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:b815991c7d024bf461f358ad871f2be1135576274caed5749c4828859e40354e"},
{file = "pyzmq-23.2.1-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:29b74774a0bfd3c4d98ac853f0bdca55bd9ec89d5b0def5486407cca54472ef8"},
{file = "pyzmq-23.2.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:4bb798bef181648827019001f6be43e1c48b34b477763b37a8d27d8c06d197b8"},
{file = "pyzmq-23.2.1-cp36-cp36m-win32.whl", hash = "sha256:565bd5ab81f6964fc4067ccf2e00877ad0fa917308975694bbb54378389215f8"},
{file = "pyzmq-23.2.1-cp36-cp36m-win_amd64.whl", hash = "sha256:1f368a82b29f80071781b20663c0fc0c8f6b13273f9f5abe1526af939534f90f"},
{file = "pyzmq-23.2.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:c9cfaf530e6a7ff65f0afe275e99f983f68b54dfb23ea401f0bc297a632766b6"},
{file = "pyzmq-23.2.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5c558b50402fca1acc94329c5d8f12aa429738904a5cfb32b9ed3c61235221bb"},
{file = "pyzmq-23.2.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:20bafc4095eab00f41a510579363a3f5e1f5c69d7ee10f1d88895c4df0259183"},
{file = "pyzmq-23.2.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:f619fd38fc2641abfb53cca719c165182500600b82c695cc548a0f05f764be05"},
{file = "pyzmq-23.2.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:044447ae4b2016a6b8697571fd633f799f860b19b76c4a2fd9b1140d52ee6745"},
{file = "pyzmq-23.2.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:49d30ba7074f469e8167917abf9eb854c6503ae10153034a6d4df33618f1db5f"},
{file = "pyzmq-23.2.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:48400b96788cdaca647021bf19a9cd668384f46e4d9c55cf045bdd17f65299c8"},
{file = "pyzmq-23.2.1-cp37-cp37m-win32.whl", hash = "sha256:8a68f57b7a3f7b6b52ada79876be1efb97c8c0952423436e84d70cc139f16f0d"},
{file = "pyzmq-23.2.1-cp37-cp37m-win_amd64.whl", hash = "sha256:9e5bf6e7239fc9687239de7a283aa8b801ab85371116045b33ae20132a1325d6"},
{file = "pyzmq-23.2.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:ffc6b1623d0f9affb351db4ca61f432dca3628a5ee015f9bf2bfbe9c6836881c"},
{file = "pyzmq-23.2.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:4d6f110c56f7d5b4d64dde3a382ae61b6d48174e30742859d8e971b18b6c9e5c"},
{file = "pyzmq-23.2.1-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:9269fbfe3a4eb2009199120861c4571ef1655fdf6951c3e7f233567c94e8c602"},
{file = "pyzmq-23.2.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:12e62ff0d5223ec09b597ab6d73858b9f64a51221399f3cb08aa495e1dff7935"},
{file = "pyzmq-23.2.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:6fd5d0d50cbcf4bc376861529a907bed026a4cbe8c22a500ff8243231ef02433"},
{file = "pyzmq-23.2.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:9d0ab2936085c85a1fc6f9fd8f89d5235ae99b051e90ec5baa5e73ad44346e1f"},
{file = "pyzmq-23.2.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:022cf5ea7bcaa8a06a03c2706e0ae66904b6138b2155577cd34c64bc7cc637ab"},
{file = "pyzmq-23.2.1-cp38-cp38-win32.whl", hash = "sha256:28dbdb90b2f6b131f8f10e6081012e4e25234213433420e67e0c1162de537113"},
{file = "pyzmq-23.2.1-cp38-cp38-win_amd64.whl", hash = "sha256:10d1910ec381b851aeb024a042a13db178cb1edf125e76a4e9d2548ad103aadb"},
{file = "pyzmq-23.2.1-cp39-cp39-macosx_10_15_universal2.whl", hash = "sha256:99a5a77a10863493a1ee8dece02578c6b32025fb3afff91b40476bc489e81648"},
{file = "pyzmq-23.2.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:aecd6ceaccc4b594e0092d6513ef3f1c0fa678dd89f86bb8ff1a47014b8fca35"},
{file = "pyzmq-23.2.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:415ff62ac525d9add1e3550430a09b9928d2d24a20cc4ce809e67caac41219ab"},
{file = "pyzmq-23.2.1-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:67975a9e1237b9ccc78f457bef17691bbdd2055a9d26e81ee914ba376846d0ce"},
{file = "pyzmq-23.2.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:38e106b64bad744fe469dc3dd864f2764d66399178c1bf39d45294cc7980f14f"},
{file = "pyzmq-23.2.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:8c842109d31a9281d678f668629241c405928afbebd913c48a5a8e7aee61f63d"},
{file = "pyzmq-23.2.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:fefdf9b685fda4141b95ebec975946076a5e0723ff70b037032b2085c5317684"},
{file = "pyzmq-23.2.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:79a87831b47a9f6161ad23fa5e89d5469dc585abc49f90b9b07fea8905ae1234"},
{file = "pyzmq-23.2.1-cp39-cp39-win32.whl", hash = "sha256:342ca3077f47ec2ee41b9825142b614e03e026347167cbc72a59b618c4f6106c"},
{file = "pyzmq-23.2.1-cp39-cp39-win_amd64.whl", hash = "sha256:5e05492be125dce279721d6b54fd1b956546ecc4bcdfcf8e7b4c413bc0874c10"},
{file = "pyzmq-23.2.1-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:07ed8aaf7ffe150af873269690cc654ffeca7491f62aae0f3821baa181f8d5fe"},
{file = "pyzmq-23.2.1-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:ad28ddb40db8e450d7d4bf8a1d765d3f87b63b10e7e9a825a3c130c6371a8c03"},
{file = "pyzmq-23.2.1-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:2f67b63f53c6994d601404fd1a329e6d940ac3dd1d92946a93b2b9c70df67b9f"},
{file = "pyzmq-23.2.1-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c890309296f53f9aa32ffcfc51d805705e1982bffd27c9692a8f1e1b8de279f4"},
{file = "pyzmq-23.2.1-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:624fd38071a817644acdae075b92a23ea0bdd126a58148288e8284d23ec361ce"},
{file = "pyzmq-23.2.1-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:a114992a193577cb62233abf8cb2832970f9975805a64740e325d2f895e7f85a"},
{file = "pyzmq-23.2.1-pp38-pypy38_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:c780acddd2934c6831ff832ecbf78a45a7b62d4eb216480f863854a8b7d54fa7"},
{file = "pyzmq-23.2.1-pp38-pypy38_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:d904f6595acfaaf99a1a61881fea068500c40374d263e5e073aa4005e5f9c28a"},
{file = "pyzmq-23.2.1-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:929d548b74c0f82f7f95b54e4a43f9e4ce2523cfb8a54d3f7141e45652304b2a"},
{file = "pyzmq-23.2.1-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:f392cbea531b7142d1958c0d4a0c9c8d760dc451e5848d8dd3387804d3e3e62c"},
{file = "pyzmq-23.2.1-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:a0f09d85c45f58aa8e715b42f8b26beba68b3b63a8f7049113478aca26efbc30"},
{file = "pyzmq-23.2.1-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:23e708fbfdf4ee3107422b69ca65da1b9f056b431fc0888096a8c1d6cd908e8f"},
{file = "pyzmq-23.2.1-pp39-pypy39_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:35e635343ff367f697d00fa1484262bb68e36bc74c9b80737eac5a1e04c4e1b1"},
{file = "pyzmq-23.2.1-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:efb9e38b2a590282704269585de7eb33bf43dc294cad092e1b172e23d4c217e5"},
{file = "pyzmq-23.2.1-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:407f909c4e8fde62fbdad9ebd448319792258cc0550c2815567a4d9d8d9e6d18"},
{file = "pyzmq-23.2.1.tar.gz", hash = "sha256:2b381aa867ece7d0a82f30a0c7f3d4387b7cf2e0697e33efaa5bed6c5784abcd"},
]
qtconsole = [
{file = "qtconsole-5.3.1-py3-none-any.whl", hash = "sha256:d364592d7ede3257f1e17fcdbfd339c26e2cc638ca4fa4ee56a724e26ea13c81"},
{file = "qtconsole-5.3.1.tar.gz", hash = "sha256:b73723fac43938b684dcb237a88510dc7721c43a726cea8ade179a2927c0a2f3"},
]
qtpy = [
{file = "QtPy-2.2.0-py3-none-any.whl", hash = "sha256:d283cfba378b0dbe36a55b68aea8ee2f86cd6ccf06c023af25bbe705ffbb29e5"},
{file = "QtPy-2.2.0.tar.gz", hash = "sha256:d85f1b121f24a41ad26c55c446e66abdb7c528839f8c4f11f156ec4541903914"},
]
readme-renderer = [
{file = "readme_renderer-36.0-py3-none-any.whl", hash = "sha256:2c37e472ca96755caba6cc58bcbf673a5574bc033385a2ac91d85dfef2799876"},
{file = "readme_renderer-36.0.tar.gz", hash = "sha256:f71aeef9a588fcbed1f4cc001ba611370e94a0cd27c75b1140537618ec78f0a2"},
]
requests = [
{file = "requests-2.28.1-py3-none-any.whl", hash = "sha256:8fefa2a1a1365bf5520aac41836fbee479da67864514bdb821f31ce07ce65349"},
{file = "requests-2.28.1.tar.gz", hash = "sha256:7c5599b102feddaa661c826c56ab4fee28bfd17f5abca1ebbe3e7f19d7c97983"},
]
requests-oauthlib = [
{file = "requests-oauthlib-1.3.1.tar.gz", hash = "sha256:75beac4a47881eeb94d5ea5d6ad31ef88856affe2332b9aafb52c6452ccf0d7a"},
{file = "requests_oauthlib-1.3.1-py2.py3-none-any.whl", hash = "sha256:2577c501a2fb8d05a304c09d090d6e47c306fef15809d102b327cf8364bddab5"},
]
requests-toolbelt = [
{file = "requests-toolbelt-0.9.1.tar.gz", hash = "sha256:968089d4584ad4ad7c171454f0a5c6dac23971e9472521ea3b6d49d610aa6fc0"},
{file = "requests_toolbelt-0.9.1-py2.py3-none-any.whl", hash = "sha256:380606e1d10dc85c3bd47bf5a6095f815ec007be7a8b69c878507068df059e6f"},
]
rfc3986 = [
{file = "rfc3986-2.0.0-py2.py3-none-any.whl", hash = "sha256:50b1502b60e289cb37883f3dfd34532b8873c7de9f49bb546641ce9cbd256ebd"},
{file = "rfc3986-2.0.0.tar.gz", hash = "sha256:97aacf9dbd4bfd829baad6e6309fa6573aaf1be3f6fa735c8ab05e46cecb261c"},
]
rich = [
{file = "rich-12.5.1-py3-none-any.whl", hash = "sha256:2eb4e6894cde1e017976d2975ac210ef515d7548bc595ba20e195fb9628acdeb"},
{file = "rich-12.5.1.tar.gz", hash = "sha256:63a5c5ce3673d3d5fbbf23cd87e11ab84b6b451436f1b7f19ec54b6bc36ed7ca"},
]
rpy2 = [
{file = "rpy2-3.5.3-cp310-cp310-macosx_10_15_x86_64.whl", hash = "sha256:9b553e92db1b6ffa51ea13f7b1b509816fdec2fc833ebe0b402ee81c8a56e491"},
{file = "rpy2-3.5.3-cp37-cp37m-macosx_10_15_x86_64.whl", hash = "sha256:9fb8f0914458c212dd2dd231ba0b7e849ca2b473f53da48553bc7e90976a9717"},
{file = "rpy2-3.5.3-cp38-cp38-macosx_10_15_x86_64.whl", hash = "sha256:b9c8275d7720c312c387d61a38afe22ef3165053ec707a5cb3a06b7ace5bb10c"},
{file = "rpy2-3.5.3-cp39-cp39-macosx_10_15_x86_64.whl", hash = "sha256:e1fda1489dd297fb8f44d33e6042e56a4517109b5606b2ac59bed9a2b7e29850"},
{file = "rpy2-3.5.3.tar.gz", hash = "sha256:53a092d48b44f46428fb30cb3155664d6d2f7af08ebc4c45df98df4c45a42ccb"},
]
rsa = [
{file = "rsa-4.9-py3-none-any.whl", hash = "sha256:90260d9058e514786967344d0ef75fa8727eed8a7d2e43ce9f4bcf1b536174f7"},
{file = "rsa-4.9.tar.gz", hash = "sha256:e38464a49c6c85d7f1351b0126661487a7e0a14a50f1675ec50eb34d4f20ef21"},
]
scikit-learn = [
{file = "scikit-learn-1.0.2.tar.gz", hash = "sha256:b5870959a5484b614f26d31ca4c17524b1b0317522199dc985c3b4256e030767"},
{file = "scikit_learn-1.0.2-cp310-cp310-macosx_10_13_x86_64.whl", hash = "sha256:da3c84694ff693b5b3194d8752ccf935a665b8b5edc33a283122f4273ca3e687"},
{file = "scikit_learn-1.0.2-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:75307d9ea39236cad7eea87143155eea24d48f93f3a2f9389c817f7019f00705"},
{file = "scikit_learn-1.0.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f14517e174bd7332f1cca2c959e704696a5e0ba246eb8763e6c24876d8710049"},
{file = "scikit_learn-1.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d9aac97e57c196206179f674f09bc6bffcd0284e2ba95b7fe0b402ac3f986023"},
{file = "scikit_learn-1.0.2-cp310-cp310-win_amd64.whl", hash = "sha256:d93d4c28370aea8a7cbf6015e8a669cd5d69f856cc2aa44e7a590fb805bb5583"},
{file = "scikit_learn-1.0.2-cp37-cp37m-macosx_10_13_x86_64.whl", hash = "sha256:85260fb430b795d806251dd3bb05e6f48cdc777ac31f2bcf2bc8bbed3270a8f5"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:a053a6a527c87c5c4fa7bf1ab2556fa16d8345cf99b6c5a19030a4a7cd8fd2c0"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:245c9b5a67445f6f044411e16a93a554edc1efdcce94d3fc0bc6a4b9ac30b752"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:158faf30684c92a78e12da19c73feff9641a928a8024b4fa5ec11d583f3d8a87"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:08ef968f6b72033c16c479c966bf37ccd49b06ea91b765e1cc27afefe723920b"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:16455ace947d8d9e5391435c2977178d0ff03a261571e67f627c8fee0f9d431a"},
{file = "scikit_learn-1.0.2-cp37-cp37m-win32.whl", hash = "sha256:2f3b453e0b149898577e301d27e098dfe1a36943f7bb0ad704d1e548efc3b448"},
{file = "scikit_learn-1.0.2-cp37-cp37m-win_amd64.whl", hash = "sha256:46f431ec59dead665e1370314dbebc99ead05e1c0a9df42f22d6a0e00044820f"},
{file = "scikit_learn-1.0.2-cp38-cp38-macosx_10_13_x86_64.whl", hash = "sha256:ff3fa8ea0e09e38677762afc6e14cad77b5e125b0ea70c9bba1992f02c93b028"},
{file = "scikit_learn-1.0.2-cp38-cp38-macosx_12_0_arm64.whl", hash = "sha256:9369b030e155f8188743eb4893ac17a27f81d28a884af460870c7c072f114243"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:7d6b2475f1c23a698b48515217eb26b45a6598c7b1840ba23b3c5acece658dbb"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:285db0352e635b9e3392b0b426bc48c3b485512d3b4ac3c7a44ec2a2ba061e66"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5cb33fe1dc6f73dc19e67b264dbb5dde2a0539b986435fdd78ed978c14654830"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b1391d1a6e2268485a63c3073111fe3ba6ec5145fc957481cfd0652be571226d"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bc3744dabc56b50bec73624aeca02e0def06b03cb287de26836e730659c5d29c"},
{file = "scikit_learn-1.0.2-cp38-cp38-win32.whl", hash = "sha256:a999c9f02ff9570c783069f1074f06fe7386ec65b84c983db5aeb8144356a355"},
{file = "scikit_learn-1.0.2-cp38-cp38-win_amd64.whl", hash = "sha256:7626a34eabbf370a638f32d1a3ad50526844ba58d63e3ab81ba91e2a7c6d037e"},
{file = "scikit_learn-1.0.2-cp39-cp39-macosx_10_13_x86_64.whl", hash = "sha256:a90b60048f9ffdd962d2ad2fb16367a87ac34d76e02550968719eb7b5716fd10"},
{file = "scikit_learn-1.0.2-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:7a93c1292799620df90348800d5ac06f3794c1316ca247525fa31169f6d25855"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:eabceab574f471de0b0eb3f2ecf2eee9f10b3106570481d007ed1c84ebf6d6a1"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:55f2f3a8414e14fbee03782f9fe16cca0f141d639d2b1c1a36779fa069e1db57"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:80095a1e4b93bd33261ef03b9bc86d6db649f988ea4dbcf7110d0cded8d7213d"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fa38a1b9b38ae1fad2863eff5e0d69608567453fdfc850c992e6e47eb764e846"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ff746a69ff2ef25f62b36338c615dd15954ddc3ab8e73530237dd73235e76d62"},
{file = "scikit_learn-1.0.2-cp39-cp39-win32.whl", hash = "sha256:e174242caecb11e4abf169342641778f68e1bfaba80cd18acd6bc84286b9a534"},
{file = "scikit_learn-1.0.2-cp39-cp39-win_amd64.whl", hash = "sha256:b54a62c6e318ddbfa7d22c383466d38d2ee770ebdb5ddb668d56a099f6eaf75f"},
]
scipy = [
{file = "scipy-1.9.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:0424d1bbbfa51d5ddaa16d067fd593863c9f2fb7c6840c32f8a08a8832f8e7a4"},
{file = "scipy-1.9.0-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:8f2232c9d9119ec356240255a715a289b3a33be828c3e4abac11fd052ce15b1e"},
{file = "scipy-1.9.0-cp310-cp310-macosx_12_0_universal2.macosx_10_9_x86_64.whl", hash = "sha256:e2004d2a3c397b26ca78e67c9d320153a1a9b71ae713ad33f4a3a3ab3d79cc65"},
{file = "scipy-1.9.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:45f0d6c0d6e55582d3b8f5c58ad4ca4259a02affb190f89f06c8cc02e21bba81"},
{file = "scipy-1.9.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:79dd7876614fc2869bf5d311ef33962d2066ea888bc66c80fd4fa80f8772e5a9"},
{file = "scipy-1.9.0-cp310-cp310-win_amd64.whl", hash = "sha256:10417935486b320d98536d732a58362e3d37e84add98c251e070c59a6bfe0863"},
{file = "scipy-1.9.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:adb6c438c6ef550e2bb83968e772b9690cb421f2c6073f9c2cb6af15ee538bc9"},
{file = "scipy-1.9.0-cp38-cp38-macosx_12_0_arm64.whl", hash = "sha256:8d541db2d441ef87afb60c4a2addb00c3af281633602a4967e733ef4b7050504"},
{file = "scipy-1.9.0-cp38-cp38-macosx_12_0_universal2.macosx_10_9_x86_64.whl", hash = "sha256:97a1f1e51ea30782d7baa8d0c52f72c3f9f05cb609cf1b990664231c5102bccd"},
{file = "scipy-1.9.0-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:16207622570af10f9e6a2cdc7da7a9660678852477adbcd056b6d1057a036fef"},
{file = "scipy-1.9.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bb687d245b6963673c639f318eea7e875d1ba147a67925586abed3d6f39bb7d8"},
{file = "scipy-1.9.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:73b704c5eea9be811919cae4caacf3180dd9212d9aed08477c1d2ba14900a9de"},
{file = "scipy-1.9.0-cp38-cp38-win32.whl", hash = "sha256:12005d30894e4fe7b247f7233ba0801a341f887b62e2eb99034dd6f2a8a33ad6"},
{file = "scipy-1.9.0-cp38-cp38-win_amd64.whl", hash = "sha256:fc58c3fcb8a724b703ffbc126afdca5a8353d4d5945d5c92db85617e165299e7"},
{file = "scipy-1.9.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:01c2015e132774feefe059d5354055fec6b751d7a7d70ad2cf5ce314e7426e2a"},
{file = "scipy-1.9.0-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:f7c3c578ff556333f3890c2df6c056955d53537bb176698359088108af73a58f"},
{file = "scipy-1.9.0-cp39-cp39-macosx_12_0_universal2.macosx_10_9_x86_64.whl", hash = "sha256:e2ac088ea4aa61115b96b47f5f3d94b3fa29554340b6629cd2bfe6b0521ee33b"},
{file = "scipy-1.9.0-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:5d1b9cf3771fd921f7213b4b886ab2606010343bb36259b544a816044576d69e"},
{file = "scipy-1.9.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d3a326673ac5afa9ef5613a61626b9ec15c8f7222b4ecd1ce0fd8fcba7b83c59"},
{file = "scipy-1.9.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:693b3fe2e7736ce0dbc72b4d933798eb6ca8ce51b8b934e3f547cc06f48b2afb"},
{file = "scipy-1.9.0-cp39-cp39-win32.whl", hash = "sha256:7bad16b91918bf3288089a78a4157e04892ea6475fb7a1d9bcdf32c30c8a3dba"},
{file = "scipy-1.9.0-cp39-cp39-win_amd64.whl", hash = "sha256:bd490f77f35800d5620f4d9af669e372d9a88db1f76ef219e1609cc4ecdd1a24"},
{file = "scipy-1.9.0.tar.gz", hash = "sha256:c0dfd7d2429452e7e94904c6a3af63cbaa3cf51b348bd9d35b42db7e9ad42791"},
]
seaborn = [
{file = "seaborn-0.11.2-py3-none-any.whl", hash = "sha256:85a6baa9b55f81a0623abddc4a26b334653ff4c6b18c418361de19dbba0ef283"},
{file = "seaborn-0.11.2.tar.gz", hash = "sha256:cf45e9286d40826864be0e3c066f98536982baf701a7caa386511792d61ff4f6"},
]
secretstorage = [
{file = "SecretStorage-3.3.2-py3-none-any.whl", hash = "sha256:755dc845b6ad76dcbcbc07ea3da75ae54bb1ea529eb72d15f83d26499a5df319"},
{file = "SecretStorage-3.3.2.tar.gz", hash = "sha256:0a8eb9645b320881c222e827c26f4cfcf55363e8b374a021981ef886657a912f"},
]
send2trash = [
{file = "Send2Trash-1.8.0-py3-none-any.whl", hash = "sha256:f20eaadfdb517eaca5ce077640cb261c7d2698385a6a0f072a4a5447fd49fa08"},
{file = "Send2Trash-1.8.0.tar.gz", hash = "sha256:d2c24762fd3759860a0aff155e45871447ea58d2be6bdd39b5c8f966a0c99c2d"},
]
setuptools-scm = [
{file = "setuptools_scm-6.4.2-py3-none-any.whl", hash = "sha256:acea13255093849de7ccb11af9e1fb8bde7067783450cee9ef7a93139bddf6d4"},
{file = "setuptools_scm-6.4.2.tar.gz", hash = "sha256:6833ac65c6ed9711a4d5d2266f8024cfa07c533a0e55f4c12f6eff280a5a9e30"},
]
shap = [
{file = "shap-0.40.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:8bb8b4c01bd33592412dae5246286f62efbb24ad774b63e59b8b16969b915b6d"},
{file = "shap-0.40.0-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:d2844acab55e18bcb3d691237a720301223a38805e6e43752e6717f3a8b2cc28"},
{file = "shap-0.40.0-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:e7dd3040b0ec91bc9f477a354973d231d3a6beebe2fa7a5c6a565a79ba7746e8"},
{file = "shap-0.40.0-cp36-cp36m-win32.whl", hash = "sha256:86ea1466244c7e0d0c5dd91d26a90e0b645f5c9d7066810462a921263463529b"},
{file = "shap-0.40.0-cp36-cp36m-win_amd64.whl", hash = "sha256:bbf0cfa30cd8c51f8830d3f25c3881b9949e062124cd0d0b3d8efdc7e0cf5136"},
{file = "shap-0.40.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:3d3c5ace8bd5222b455fa5650f9043146e19d80d701f95b25c4c5fb81f628547"},
{file = "shap-0.40.0-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:18b4ca36a43409b784dc76810f76aaa504c467eac17fa89ef5ee330cb460b2b7"},
{file = "shap-0.40.0-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:dbb1ec9b2c05c3939425529437c5f3cfba7a3929fed0e820fb84a42e82358cdd"},
{file = "shap-0.40.0-cp37-cp37m-win32.whl", hash = "sha256:0d12f7d86481afd000d5f144c10cadb31d52fb1f77f68659472d6f6d89f7843b"},
{file = "shap-0.40.0-cp37-cp37m-win_amd64.whl", hash = "sha256:dbd07e48fc7f4d5916f6cdd9dbb8d29b7711a265cc9beac92e7d4a4d9e738bc7"},
{file = "shap-0.40.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:399325caecc7306eb7de17ac19aa797abbf2fcda47d2bb4588d9492adb2dce65"},
{file = "shap-0.40.0-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:4ec50bd0aa24efe1add177371b8b62080484efb87c6dbcf321895c5a08cf68d6"},
{file = "shap-0.40.0-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:e2b5f2d3cac82de0c49afde6529bebb6d5b20334325640267bf25dce572175a1"},
{file = "shap-0.40.0-cp38-cp38-win32.whl", hash = "sha256:ba06256568747aaab9ad0091306550bfe826c1f195bf2cf57b405ae1de16faed"},
{file = "shap-0.40.0-cp38-cp38-win_amd64.whl", hash = "sha256:fb1b325a55fdf58061d332ed3308d44162084d4cb5f53f2c7774ce943d60b0ad"},
{file = "shap-0.40.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f282fa12ca6fc594bcadca389309d733f73fe071e29ab49cb6e51beaa8b01a1a"},
{file = "shap-0.40.0-cp39-cp39-manylinux2010_i686.whl", hash = "sha256:2e72a47407f010f845b3ed6cb4f5160f0907ec8ab97df2bca164ebcb263b4205"},
{file = "shap-0.40.0-cp39-cp39-manylinux2010_x86_64.whl", hash = "sha256:649c905f9a4629839142e1769235989fb61730eb789a70d27ec7593eb02186a7"},
{file = "shap-0.40.0-cp39-cp39-win32.whl", hash = "sha256:5c220632ba57426d450dcc8ca43c55f657fe18e18f5d223d2a4e2aa02d905047"},
{file = "shap-0.40.0-cp39-cp39-win_amd64.whl", hash = "sha256:46e7084ce021eea450306bf7434adaead53921fd32504f04d1804569839e2979"},
{file = "shap-0.40.0.tar.gz", hash = "sha256:add0a27bb4eb57f0a363c2c4265b1a1328a8c15b01c14c7d432d9cc387dd8579"},
]
six = [
{file = "six-1.16.0-py2.py3-none-any.whl", hash = "sha256:8abb2f1d86890a2dfb989f9a77cfcfd3e47c2a354b01111771326f8aa26e0254"},
{file = "six-1.16.0.tar.gz", hash = "sha256:1e61c37477a1626458e36f7b1d82aa5c9b094fa4802892072e49de9c60c4c926"},
]
slicer = [
{file = "slicer-0.0.7-py3-none-any.whl", hash = "sha256:0b94faa5251c0f23782c03f7b7eedda91d80144059645f452c4bc80fab875976"},
{file = "slicer-0.0.7.tar.gz", hash = "sha256:f5d5f7b45f98d155b9c0ba6554fa9770c6b26d5793a3e77a1030fb56910ebeec"},
]
snowballstemmer = [
{file = "snowballstemmer-2.2.0-py2.py3-none-any.whl", hash = "sha256:c8e1716e83cc398ae16824e5572ae04e0d9fc2c6b985fb0f900f5f0c96ecba1a"},
{file = "snowballstemmer-2.2.0.tar.gz", hash = "sha256:09b16deb8547d3412ad7b590689584cd0fe25ec8db3be37788be3810cbf19cb1"},
]
soupsieve = [
{file = "soupsieve-2.3.2.post1-py3-none-any.whl", hash = "sha256:3b2503d3c7084a42b1ebd08116e5f81aadfaea95863628c80a3b774a11b7c759"},
{file = "soupsieve-2.3.2.post1.tar.gz", hash = "sha256:fc53893b3da2c33de295667a0e19f078c14bf86544af307354de5fcf12a3f30d"},
]
sparse = [
{file = "sparse-0.13.0-py2.py3-none-any.whl", hash = "sha256:95ed0b649a0663b1488756ad4cf242b0a9bb2c9a25bc752a7c6ca9fbe8258966"},
{file = "sparse-0.13.0.tar.gz", hash = "sha256:685dc994aa770ee1b23f2d5392819c8429f27958771f8dceb2c4fb80210d5915"},
]
sphinx = [
{file = "Sphinx-5.1.1-py3-none-any.whl", hash = "sha256:309a8da80cb6da9f4713438e5b55861877d5d7976b69d87e336733637ea12693"},
{file = "Sphinx-5.1.1.tar.gz", hash = "sha256:ba3224a4e206e1fbdecf98a4fae4992ef9b24b85ebf7b584bb340156eaf08d89"},
]
sphinx-multiversion = []
sphinx-rtd-theme = [
{file = "sphinx_rtd_theme-1.0.0-py2.py3-none-any.whl", hash = "sha256:4d35a56f4508cfee4c4fb604373ede6feae2a306731d533f409ef5c3496fdbd8"},
{file = "sphinx_rtd_theme-1.0.0.tar.gz", hash = "sha256:eec6d497e4c2195fa0e8b2016b337532b8a699a68bcb22a512870e16925c6a5c"},
]
sphinxcontrib-applehelp = [
{file = "sphinxcontrib-applehelp-1.0.2.tar.gz", hash = "sha256:a072735ec80e7675e3f432fcae8610ecf509c5f1869d17e2eecff44389cdbc58"},
{file = "sphinxcontrib_applehelp-1.0.2-py2.py3-none-any.whl", hash = "sha256:806111e5e962be97c29ec4c1e7fe277bfd19e9652fb1a4392105b43e01af885a"},
]
sphinxcontrib-devhelp = [
{file = "sphinxcontrib-devhelp-1.0.2.tar.gz", hash = "sha256:ff7f1afa7b9642e7060379360a67e9c41e8f3121f2ce9164266f61b9f4b338e4"},
{file = "sphinxcontrib_devhelp-1.0.2-py2.py3-none-any.whl", hash = "sha256:8165223f9a335cc1af7ffe1ed31d2871f325254c0423bc0c4c7cd1c1e4734a2e"},
]
sphinxcontrib-googleanalytics = []
sphinxcontrib-htmlhelp = [
{file = "sphinxcontrib-htmlhelp-2.0.0.tar.gz", hash = "sha256:f5f8bb2d0d629f398bf47d0d69c07bc13b65f75a81ad9e2f71a63d4b7a2f6db2"},
{file = "sphinxcontrib_htmlhelp-2.0.0-py2.py3-none-any.whl", hash = "sha256:d412243dfb797ae3ec2b59eca0e52dac12e75a241bf0e4eb861e450d06c6ed07"},
]
sphinxcontrib-jsmath = [
{file = "sphinxcontrib-jsmath-1.0.1.tar.gz", hash = "sha256:a9925e4a4587247ed2191a22df5f6970656cb8ca2bd6284309578f2153e0c4b8"},
{file = "sphinxcontrib_jsmath-1.0.1-py2.py3-none-any.whl", hash = "sha256:2ec2eaebfb78f3f2078e73666b1415417a116cc848b72e5172e596c871103178"},
]
sphinxcontrib-qthelp = [
{file = "sphinxcontrib-qthelp-1.0.3.tar.gz", hash = "sha256:4c33767ee058b70dba89a6fc5c1892c0d57a54be67ddd3e7875a18d14cba5a72"},
{file = "sphinxcontrib_qthelp-1.0.3-py2.py3-none-any.whl", hash = "sha256:bd9fc24bcb748a8d51fd4ecaade681350aa63009a347a8c14e637895444dfab6"},
]
sphinxcontrib-serializinghtml = [
{file = "sphinxcontrib-serializinghtml-1.1.5.tar.gz", hash = "sha256:aa5f6de5dfdf809ef505c4895e51ef5c9eac17d0f287933eb49ec495280b6952"},
{file = "sphinxcontrib_serializinghtml-1.1.5-py2.py3-none-any.whl", hash = "sha256:352a9a00ae864471d3a7ead8d7d79f5fc0b57e8b3f95e9867eb9eb28999b92fd"},
]
stack-data = [
{file = "stack_data-0.3.0-py3-none-any.whl", hash = "sha256:aa1d52d14d09c7a9a12bb740e6bdfffe0f5e8f4f9218d85e7c73a8c37f7ae38d"},
{file = "stack_data-0.3.0.tar.gz", hash = "sha256:77bec1402dcd0987e9022326473fdbcc767304892a533ed8c29888dacb7dddbc"},
]
statsmodels = [
{file = "statsmodels-0.13.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:3e7ca5b7e678c0bb7a24f5c735d58ac104a50eb61b17c484cce0e221a095560f"},
{file = "statsmodels-0.13.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:066a75d5585378b2df972f81a90b9a3da5e567b7d4833300c1597438c1a35e29"},
{file = "statsmodels-0.13.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f15f38dfc9c5c091662cb619e12322047368c67aef449c7554d9b324a15f7a94"},
{file = "statsmodels-0.13.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5c4ccc6b4744613367e8a233bd952c8a838db8f528f9fe033bda25aa13fc7d08"},
{file = "statsmodels-0.13.2-cp310-cp310-win_amd64.whl", hash = "sha256:855b1cc2a91ab140b9bcf304b1731705805ce73223bf500b988804968554c0ed"},
{file = "statsmodels-0.13.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:b69c9af7606325095f7c40c581957bad9f28775653d41537c1ec4cd1b185ff5b"},
{file = "statsmodels-0.13.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ab31bac0f72b83bca1f217a12ec6f309a56485a50c4a705fbdd63112213d4da4"},
{file = "statsmodels-0.13.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d680b910b57fc0aa87472662cdfe09aae0e21db4bdf19ccd6420fd4dffda892"},
{file = "statsmodels-0.13.2-cp37-cp37m-win32.whl", hash = "sha256:9e9a3f661d372431850d55157d049e079493c97fc06f550d23d8c8c70805cc48"},
{file = "statsmodels-0.13.2-cp37-cp37m-win_amd64.whl", hash = "sha256:c9f6326870c095ef688f072cd476b932aff0906d60193eaa08e93ec23b29ca83"},
{file = "statsmodels-0.13.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:5bc050f25f1ba1221efef9ea01b751c60935ad787fcd4259f4ece986f2da9141"},
{file = "statsmodels-0.13.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:426b1c8ea3918d3d27dbfa38f2bee36cabf41d32163e2cbb3adfb0178b24626a"},
{file = "statsmodels-0.13.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:45b80fac4a63308b1e93fa9dc27a8598930fd5dfd77c850ca077bb850254c6d7"},
{file = "statsmodels-0.13.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:78ee69ec0e0f79f627245c65f8a495b8581c2ea19084aac63941815feb15dcf3"},
{file = "statsmodels-0.13.2-cp38-cp38-win32.whl", hash = "sha256:20483cc30e11aa072b30d307bb80470f86a23ae8fffa51439ca54509d7aa9b05"},
{file = "statsmodels-0.13.2-cp38-cp38-win_amd64.whl", hash = "sha256:bf43051a92231ccb9de95e4b6d22d3b15e499ee5ee9bff0a20e6b6ad293e34cb"},
{file = "statsmodels-0.13.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6bf0dfed5f5edb59b5922b295392cd276463b10a5e730f7e57ee4ff2d8e9a87e"},
{file = "statsmodels-0.13.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:a403b559c5586dab7ac0fc9e754c737b017c96cce0ddd66ff9094764cdaf293d"},
{file = "statsmodels-0.13.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9f23554dd025ea354ce072ba32bfaa840d2b856372e5734290e181d27a1f9e0c"},
{file = "statsmodels-0.13.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:815f4df713e3eb6f40ae175c71f2a70d32f9219b5b4d23d4e0faab1171ba93ba"},
{file = "statsmodels-0.13.2-cp39-cp39-win32.whl", hash = "sha256:461c82ab2265fa8457b96afc23ef3ca19f42eb070436e0241b57e58a38863901"},
{file = "statsmodels-0.13.2-cp39-cp39-win_amd64.whl", hash = "sha256:39daab5a8a9332c8ea83d6464d065080c9ba65f236daf6a64aa18f64ef776fad"},
{file = "statsmodels-0.13.2.tar.gz", hash = "sha256:77dc292c9939c036a476f1770f9d08976b05437daa229928da73231147cde7d4"},
]
sympy = [
{file = "sympy-1.10.1-py3-none-any.whl", hash = "sha256:df75d738930f6fe9ebe7034e59d56698f29e85f443f743e51e47df0caccc2130"},
{file = "sympy-1.10.1.tar.gz", hash = "sha256:5939eeffdf9e152172601463626c022a2c27e75cf6278de8d401d50c9d58787b"},
]
tensorboard = [
{file = "tensorboard-2.9.1-py3-none-any.whl", hash = "sha256:baa727f791776f9e5841d347127720ceed4bbd59c36b40604b95fb2ae6029276"},
]
tensorboard-data-server = [
{file = "tensorboard_data_server-0.6.1-py3-none-any.whl", hash = "sha256:809fe9887682d35c1f7d1f54f0f40f98bb1f771b14265b453ca051e2ce58fca7"},
{file = "tensorboard_data_server-0.6.1-py3-none-macosx_10_9_x86_64.whl", hash = "sha256:fa8cef9be4fcae2f2363c88176638baf2da19c5ec90addb49b1cde05c95c88ee"},
{file = "tensorboard_data_server-0.6.1-py3-none-manylinux2010_x86_64.whl", hash = "sha256:d8237580755e58eff68d1f3abefb5b1e39ae5c8b127cc40920f9c4fb33f4b98a"},
]
tensorboard-plugin-wit = [
{file = "tensorboard_plugin_wit-1.8.1-py3-none-any.whl", hash = "sha256:ff26bdd583d155aa951ee3b152b3d0cffae8005dc697f72b44a8e8c2a77a8cbe"},
]
tensorflow = [
{file = "tensorflow-2.9.1-cp310-cp310-macosx_10_14_x86_64.whl", hash = "sha256:2c77edeb4a9d542032ddac93f732e3dd3d1c4d15c8795e4e43166ed46503582d"},
{file = "tensorflow-2.9.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:add2f4e55fd7209a5cb01d815364adc9827184d74a83c2f1616c72be2e85c8b7"},
{file = "tensorflow-2.9.1-cp310-cp310-win_amd64.whl", hash = "sha256:8be7d7f6091ac5af09c19eb178bbbd51346794684851a7bd57826d2d1cfa220e"},
{file = "tensorflow-2.9.1-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:b1f7674220b19ab5d39d9313f9c60278cef56052614600cba192700c658e502c"},
{file = "tensorflow-2.9.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:488a7a964298b92278cca593b7f687abd1c8f51982a08dc6ded5efd9457bf477"},
{file = "tensorflow-2.9.1-cp37-cp37m-win_amd64.whl", hash = "sha256:abbdcfa7c14974f9bafa8f51214c037639a8991eef3dbcafc506e608a673c54c"},
{file = "tensorflow-2.9.1-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:df048d216f90c9fcbda22bdd03420b0e0e6e7ee98b8b91f221afef46b7664635"},
{file = "tensorflow-2.9.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:adc1b80ec32337ba1720fadbf64242aae06a7ee1c7aea6a1a0cffe1968c89b25"},
{file = "tensorflow-2.9.1-cp38-cp38-win_amd64.whl", hash = "sha256:a27bffb91f0aef6ba2a30f6507932d9c6801af56cfee8442767f9f3f69bb37a6"},
{file = "tensorflow-2.9.1-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:81ad045d39cf4360d5e6183cdda0990661302cca502917a12e54177989b340b9"},
{file = "tensorflow-2.9.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:212a4d97fb6dedff257703d7a2ce50b0b8ec23d955938083363aa7bc526918cc"},
{file = "tensorflow-2.9.1-cp39-cp39-win_amd64.whl", hash = "sha256:1c2c6557f6bd7e0d3f1063e27346e4f1c4353db2b7751ac7451033b335fa4b7c"},
]
tensorflow-estimator = [
{file = "tensorflow_estimator-2.9.0-py2.py3-none-any.whl", hash = "sha256:e9762bb302f51bc1eb2f35d19f0190a6a2d809d754d5def788c4328fe3746744"},
]
tensorflow-io-gcs-filesystem = [
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp310-cp310-macosx_10_14_x86_64.whl", hash = "sha256:4222a9d0c0ddeca2fd2bfd70f5ed149346f5ba12ffe65d817d8e18393341d8e2"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:5457eeef1f0f5f294225808b2290a251a2e4639ec66db9d32aa4ae62e807d7e8"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4c71cebb26ce10e6e48dc46e6fc0acef5329b01f75a5e76c7defb77175bf97f7"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp310-cp310-win_amd64.whl", hash = "sha256:1c165595c7a67668b44c7ffb9746ffb351c630940d9cca7f2b31f8adf7a36b94"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:c20e1f95b904f43ac86fdb251f222be2c3e7026e9ddbde2a3b6a456f26a83944"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:1cccdc12ec304a7ab3e6f85919ba5a77c2bf751b3d0f9e62196ee7df11a8136a"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:94645cac4449dd2ccc40327c23d0256cf4e96597e5a55116a91076e9dc96023e"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp37-cp37m-win_amd64.whl", hash = "sha256:ce0d7eaaebfcb5fdcff161af0e8a4b94d5dc346299111c08373d66058011a16d"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:e9569dadd79b2d4b28dbe5be47c378a884414a85c89eaeae6115bcba4f3cbb96"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:84a463e228cde296fc63672902a2eceac9fec5f8ae7605e9f18824db591e7f5c"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:531214e48ef64a96f565550b283e75cf0119abff14048a11a25453b47ec5b61c"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp38-cp38-win_amd64.whl", hash = "sha256:44b28c9c6a9e25774a53ec2e85ed4d0b5c4db3a7d3a4011ade94fa9ee636393c"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:09f9df13737e2b4d92b73653509281d77732ef9a90a1ebef824511ce5431eb0a"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:c71438e6459f52462b95f98ab17b20cd1a269a1efe837e4df426a0b79359f3b7"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bd285595afe03740553710ccdbd1397d69a8e48d758c731c0de1f1c5a71a9fe5"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp39-cp39-win_amd64.whl", hash = "sha256:2940b4ab6848ef5ec34dc3c140b5ae9eba0da13453da839c30ebe3461a6eb51d"},
]
termcolor = [
{file = "termcolor-1.1.0.tar.gz", hash = "sha256:1d6d69ce66211143803fbc56652b41d73b4a400a2891d7bf7a1cdf4c02de613b"},
]
terminado = [
{file = "terminado-0.15.0-py3-none-any.whl", hash = "sha256:0d5f126fbfdb5887b25ae7d9d07b0d716b1cc0ccaacc71c1f3c14d228e065197"},
{file = "terminado-0.15.0.tar.gz", hash = "sha256:ab4eeedccfcc1e6134bfee86106af90852c69d602884ea3a1e8ca6d4486e9bfe"},
]
threadpoolctl = [
{file = "threadpoolctl-3.1.0-py3-none-any.whl", hash = "sha256:8b99adda265feb6773280df41eece7b2e6561b772d21ffd52e372f999024907b"},
{file = "threadpoolctl-3.1.0.tar.gz", hash = "sha256:a335baacfaa4400ae1f0d8e3a58d6674d2f8828e3716bb2802c44955ad391380"},
]
tinycss2 = [
{file = "tinycss2-1.1.1-py3-none-any.whl", hash = "sha256:fe794ceaadfe3cf3e686b22155d0da5780dd0e273471a51846d0a02bc204fec8"},
{file = "tinycss2-1.1.1.tar.gz", hash = "sha256:b2e44dd8883c360c35dd0d1b5aad0b610e5156c2cb3b33434634e539ead9d8bf"},
]
tomli = [
{file = "tomli-2.0.1-py3-none-any.whl", hash = "sha256:939de3e7a6161af0c887ef91b7d41a53e7c5a1ca976325f429cb46ea9bc30ecc"},
{file = "tomli-2.0.1.tar.gz", hash = "sha256:de526c12914f0c550d15924c62d72abc48d6fe7364aa87328337a31007fe8a4f"},
]
torch = [
{file = "torch-1.12.1-cp310-cp310-manylinux1_x86_64.whl", hash = "sha256:9c038662db894a23e49e385df13d47b2a777ffd56d9bcd5b832593fab0a7e286"},
{file = "torch-1.12.1-cp310-cp310-manylinux2014_aarch64.whl", hash = "sha256:4e1b9c14cf13fd2ab8d769529050629a0e68a6fc5cb8e84b4a3cc1dd8c4fe541"},
{file = "torch-1.12.1-cp310-cp310-win_amd64.whl", hash = "sha256:e9c8f4a311ac29fc7e8e955cfb7733deb5dbe1bdaabf5d4af2765695824b7e0d"},
{file = "torch-1.12.1-cp310-none-macosx_10_9_x86_64.whl", hash = "sha256:976c3f997cea38ee91a0dd3c3a42322785414748d1761ef926b789dfa97c6134"},
{file = "torch-1.12.1-cp310-none-macosx_11_0_arm64.whl", hash = "sha256:68104e4715a55c4bb29a85c6a8d57d820e0757da363be1ba680fa8cc5be17b52"},
{file = "torch-1.12.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:743784ccea0dc8f2a3fe6a536bec8c4763bd82c1352f314937cb4008d4805de1"},
{file = "torch-1.12.1-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:b5dbcca369800ce99ba7ae6dee3466607a66958afca3b740690d88168752abcf"},
{file = "torch-1.12.1-cp37-cp37m-win_amd64.whl", hash = "sha256:f3b52a634e62821e747e872084ab32fbcb01b7fa7dbb7471b6218279f02a178a"},
{file = "torch-1.12.1-cp37-none-macosx_10_9_x86_64.whl", hash = "sha256:8a34a2fbbaa07c921e1b203f59d3d6e00ed379f2b384445773bd14e328a5b6c8"},
{file = "torch-1.12.1-cp37-none-macosx_11_0_arm64.whl", hash = "sha256:42f639501928caabb9d1d55ddd17f07cd694de146686c24489ab8c615c2871f2"},
{file = "torch-1.12.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:0b44601ec56f7dd44ad8afc00846051162ef9c26a8579dda0a02194327f2d55e"},
{file = "torch-1.12.1-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:cd26d8c5640c3a28c526d41ccdca14cf1cbca0d0f2e14e8263a7ac17194ab1d2"},
{file = "torch-1.12.1-cp38-cp38-win_amd64.whl", hash = "sha256:42e115dab26f60c29e298559dbec88444175528b729ae994ec4c65d56fe267dd"},
{file = "torch-1.12.1-cp38-none-macosx_10_9_x86_64.whl", hash = "sha256:a8320ba9ad87e80ca5a6a016e46ada4d1ba0c54626e135d99b2129a4541c509d"},
{file = "torch-1.12.1-cp38-none-macosx_11_0_arm64.whl", hash = "sha256:03e31c37711db2cd201e02de5826de875529e45a55631d317aadce2f1ed45aa8"},
{file = "torch-1.12.1-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:9b356aea223772cd754edb4d9ecf2a025909b8615a7668ac7d5130f86e7ec421"},
{file = "torch-1.12.1-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:6cf6f54b43c0c30335428195589bd00e764a6d27f3b9ba637aaa8c11aaf93073"},
{file = "torch-1.12.1-cp39-cp39-win_amd64.whl", hash = "sha256:f00c721f489089dc6364a01fd84906348fe02243d0af737f944fddb36003400d"},
{file = "torch-1.12.1-cp39-none-macosx_10_9_x86_64.whl", hash = "sha256:bfec2843daa654f04fda23ba823af03e7b6f7650a873cdb726752d0e3718dada"},
{file = "torch-1.12.1-cp39-none-macosx_11_0_arm64.whl", hash = "sha256:69fe2cae7c39ccadd65a123793d30e0db881f1c1927945519c5c17323131437e"},
]
tornado = [
{file = "tornado-6.2-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:20f638fd8cc85f3cbae3c732326e96addff0a15e22d80f049e00121651e82e72"},
{file = "tornado-6.2-cp37-abi3-macosx_10_9_x86_64.whl", hash = "sha256:87dcafae3e884462f90c90ecc200defe5e580a7fbbb4365eda7c7c1eb809ebc9"},
{file = "tornado-6.2-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ba09ef14ca9893954244fd872798b4ccb2367c165946ce2dd7376aebdde8e3ac"},
{file = "tornado-6.2-cp37-abi3-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b8150f721c101abdef99073bf66d3903e292d851bee51910839831caba341a75"},
{file = "tornado-6.2-cp37-abi3-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d3a2f5999215a3a06a4fc218026cd84c61b8b2b40ac5296a6db1f1451ef04c1e"},
{file = "tornado-6.2-cp37-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:5f8c52d219d4995388119af7ccaa0bcec289535747620116a58d830e7c25d8a8"},
{file = "tornado-6.2-cp37-abi3-musllinux_1_1_i686.whl", hash = "sha256:6fdfabffd8dfcb6cf887428849d30cf19a3ea34c2c248461e1f7d718ad30b66b"},
{file = "tornado-6.2-cp37-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:1d54d13ab8414ed44de07efecb97d4ef7c39f7438cf5e976ccd356bebb1b5fca"},
{file = "tornado-6.2-cp37-abi3-win32.whl", hash = "sha256:5c87076709343557ef8032934ce5f637dbb552efa7b21d08e89ae7619ed0eb23"},
{file = "tornado-6.2-cp37-abi3-win_amd64.whl", hash = "sha256:e5f923aa6a47e133d1cf87d60700889d7eae68988704e20c75fb2d65677a8e4b"},
{file = "tornado-6.2.tar.gz", hash = "sha256:9b630419bde84ec666bfd7ea0a4cb2a8a651c2d5cccdbdd1972a0c859dfc3c13"},
]
tqdm = [
{file = "tqdm-4.64.0-py2.py3-none-any.whl", hash = "sha256:74a2cdefe14d11442cedf3ba4e21a3b84ff9a2dbdc6cfae2c34addb2a14a5ea6"},
{file = "tqdm-4.64.0.tar.gz", hash = "sha256:40be55d30e200777a307a7585aee69e4eabb46b4ec6a4b4a5f2d9f11e7d5408d"},
]
traitlets = [
{file = "traitlets-5.3.0-py3-none-any.whl", hash = "sha256:65fa18961659635933100db8ca120ef6220555286949774b9cfc106f941d1c7a"},
{file = "traitlets-5.3.0.tar.gz", hash = "sha256:0bb9f1f9f017aa8ec187d8b1b2a7a6626a2a1d877116baba52a129bfa124f8e2"},
]
twine = [
{file = "twine-4.0.1-py3-none-any.whl", hash = "sha256:42026c18e394eac3e06693ee52010baa5313e4811d5a11050e7d48436cf41b9e"},
{file = "twine-4.0.1.tar.gz", hash = "sha256:96b1cf12f7ae611a4a40b6ae8e9570215daff0611828f5fe1f37a16255ab24a0"},
]
typing-extensions = [
{file = "typing_extensions-4.3.0-py3-none-any.whl", hash = "sha256:25642c956049920a5aa49edcdd6ab1e06d7e5d467fc00e0506c44ac86fbfca02"},
{file = "typing_extensions-4.3.0.tar.gz", hash = "sha256:e6d2677a32f47fc7eb2795db1dd15c1f34eff616bcaf2cfb5e997f854fa1c4a6"},
]
tzdata = [
{file = "tzdata-2022.2-py2.py3-none-any.whl", hash = "sha256:c3119520447d68ef3eb8187a55a4f44fa455f30eb1b4238fa5691ba094f2b05b"},
{file = "tzdata-2022.2.tar.gz", hash = "sha256:21f4f0d7241572efa7f7a4fdabb052e61b55dc48274e6842697ccdf5253e5451"},
]
tzlocal = [
{file = "tzlocal-4.2-py3-none-any.whl", hash = "sha256:89885494684c929d9191c57aa27502afc87a579be5cdd3225c77c463ea043745"},
{file = "tzlocal-4.2.tar.gz", hash = "sha256:ee5842fa3a795f023514ac2d801c4a81d1743bbe642e3940143326b3a00addd7"},
]
urllib3 = [
{file = "urllib3-1.26.11-py2.py3-none-any.whl", hash = "sha256:c33ccba33c819596124764c23a97d25f32b28433ba0dedeb77d873a38722c9bc"},
{file = "urllib3-1.26.11.tar.gz", hash = "sha256:ea6e8fb210b19d950fab93b60c9009226c63a28808bc8386e05301e25883ac0a"},
]
wcwidth = [
{file = "wcwidth-0.2.5-py2.py3-none-any.whl", hash = "sha256:beb4802a9cebb9144e99086eff703a642a13d6a0052920003a230f3294bbe784"},
{file = "wcwidth-0.2.5.tar.gz", hash = "sha256:c4d647b99872929fdb7bdcaa4fbe7f01413ed3d98077df798530e5b04f116c83"},
]
webencodings = [
{file = "webencodings-0.5.1-py2.py3-none-any.whl", hash = "sha256:a0af1213f3c2226497a97e2b3aa01a7e4bee4f403f95be16fc9acd2947514a78"},
{file = "webencodings-0.5.1.tar.gz", hash = "sha256:b36a1c245f2d304965eb4e0a82848379241dc04b865afcc4aab16748587e1923"},
]
werkzeug = [
{file = "Werkzeug-2.2.2-py3-none-any.whl", hash = "sha256:f979ab81f58d7318e064e99c4506445d60135ac5cd2e177a2de0089bfd4c9bd5"},
{file = "Werkzeug-2.2.2.tar.gz", hash = "sha256:7ea2d48322cc7c0f8b3a215ed73eabd7b5d75d0b50e31ab006286ccff9e00b8f"},
]
widgetsnbextension = [
{file = "widgetsnbextension-3.6.1-py2.py3-none-any.whl", hash = "sha256:954e0faefdd414e4e013f17dbc7fd86f24cf1d243a3ac85d5f0fc2c2d2b50c66"},
{file = "widgetsnbextension-3.6.1.tar.gz", hash = "sha256:9c84ae64c2893c7cbe2eaafc7505221a795c27d68938454034ac487319a75b10"},
]
wrapt = [
{file = "wrapt-1.14.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:1b376b3f4896e7930f1f772ac4b064ac12598d1c38d04907e696cc4d794b43d3"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:903500616422a40a98a5a3c4ff4ed9d0066f3b4c951fa286018ecdf0750194ef"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:5a9a0d155deafd9448baff28c08e150d9b24ff010e899311ddd63c45c2445e28"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:ddaea91abf8b0d13443f6dac52e89051a5063c7d014710dcb4d4abb2ff811a59"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:36f582d0c6bc99d5f39cd3ac2a9062e57f3cf606ade29a0a0d6b323462f4dd87"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:7ef58fb89674095bfc57c4069e95d7a31cfdc0939e2a579882ac7d55aadfd2a1"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:e2f83e18fe2f4c9e7db597e988f72712c0c3676d337d8b101f6758107c42425b"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:ee2b1b1769f6707a8a445162ea16dddf74285c3964f605877a20e38545c3c462"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:833b58d5d0b7e5b9832869f039203389ac7cbf01765639c7309fd50ef619e0b1"},
{file = "wrapt-1.14.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:80bb5c256f1415f747011dc3604b59bc1f91c6e7150bd7db03b19170ee06b320"},
{file = "wrapt-1.14.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:07f7a7d0f388028b2df1d916e94bbb40624c59b48ecc6cbc232546706fac74c2"},
{file = "wrapt-1.14.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:02b41b633c6261feff8ddd8d11c711df6842aba629fdd3da10249a53211a72c4"},
{file = "wrapt-1.14.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2fe803deacd09a233e4762a1adcea5db5d31e6be577a43352936179d14d90069"},
{file = "wrapt-1.14.1-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:257fd78c513e0fb5cdbe058c27a0624c9884e735bbd131935fd49e9fe719d310"},
{file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:4fcc4649dc762cddacd193e6b55bc02edca674067f5f98166d7713b193932b7f"},
{file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:11871514607b15cfeb87c547a49bca19fde402f32e2b1c24a632506c0a756656"},
{file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:8ad85f7f4e20964db4daadcab70b47ab05c7c1cf2a7c1e51087bfaa83831854c"},
{file = "wrapt-1.14.1-cp310-cp310-win32.whl", hash = "sha256:a9a52172be0b5aae932bef82a79ec0a0ce87288c7d132946d645eba03f0ad8a8"},
{file = "wrapt-1.14.1-cp310-cp310-win_amd64.whl", hash = "sha256:6d323e1554b3d22cfc03cd3243b5bb815a51f5249fdcbb86fda4bf62bab9e164"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:43ca3bbbe97af00f49efb06e352eae40434ca9d915906f77def219b88e85d907"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:6b1a564e6cb69922c7fe3a678b9f9a3c54e72b469875aa8018f18b4d1dd1adf3"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux2010_i686.whl", hash = "sha256:00b6d4ea20a906c0ca56d84f93065b398ab74b927a7a3dbd470f6fc503f95dc3"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux2010_x86_64.whl", hash = "sha256:a85d2b46be66a71bedde836d9e41859879cc54a2a04fad1191eb50c2066f6e9d"},
{file = "wrapt-1.14.1-cp35-cp35m-win32.whl", hash = "sha256:dbcda74c67263139358f4d188ae5faae95c30929281bc6866d00573783c422b7"},
{file = "wrapt-1.14.1-cp35-cp35m-win_amd64.whl", hash = "sha256:b21bb4c09ffabfa0e85e3a6b623e19b80e7acd709b9f91452b8297ace2a8ab00"},
{file = "wrapt-1.14.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:9e0fd32e0148dd5dea6af5fee42beb949098564cc23211a88d799e434255a1f4"},
{file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9736af4641846491aedb3c3f56b9bc5568d92b0692303b5a305301a95dfd38b1"},
{file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5b02d65b9ccf0ef6c34cba6cf5bf2aab1bb2f49c6090bafeecc9cd81ad4ea1c1"},
{file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:21ac0156c4b089b330b7666db40feee30a5d52634cc4560e1905d6529a3897ff"},
{file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:9f3e6f9e05148ff90002b884fbc2a86bd303ae847e472f44ecc06c2cd2fcdb2d"},
{file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:6e743de5e9c3d1b7185870f480587b75b1cb604832e380d64f9504a0535912d1"},
{file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:d79d7d5dc8a32b7093e81e97dad755127ff77bcc899e845f41bf71747af0c569"},
{file = "wrapt-1.14.1-cp36-cp36m-win32.whl", hash = "sha256:81b19725065dcb43df02b37e03278c011a09e49757287dca60c5aecdd5a0b8ed"},
{file = "wrapt-1.14.1-cp36-cp36m-win_amd64.whl", hash = "sha256:b014c23646a467558be7da3d6b9fa409b2c567d2110599b7cf9a0c5992b3b471"},
{file = "wrapt-1.14.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:88bd7b6bd70a5b6803c1abf6bca012f7ed963e58c68d76ee20b9d751c74a3248"},
{file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b5901a312f4d14c59918c221323068fad0540e34324925c8475263841dbdfe68"},
{file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d77c85fedff92cf788face9bfa3ebaa364448ebb1d765302e9af11bf449ca36d"},
{file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8d649d616e5c6a678b26d15ece345354f7c2286acd6db868e65fcc5ff7c24a77"},
{file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:7d2872609603cb35ca513d7404a94d6d608fc13211563571117046c9d2bcc3d7"},
{file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:ee6acae74a2b91865910eef5e7de37dc6895ad96fa23603d1d27ea69df545015"},
{file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:2b39d38039a1fdad98c87279b48bc5dce2c0ca0d73483b12cb72aa9609278e8a"},
{file = "wrapt-1.14.1-cp37-cp37m-win32.whl", hash = "sha256:60db23fa423575eeb65ea430cee741acb7c26a1365d103f7b0f6ec412b893853"},
{file = "wrapt-1.14.1-cp37-cp37m-win_amd64.whl", hash = "sha256:709fe01086a55cf79d20f741f39325018f4df051ef39fe921b1ebe780a66184c"},
{file = "wrapt-1.14.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:8c0ce1e99116d5ab21355d8ebe53d9460366704ea38ae4d9f6933188f327b456"},
{file = "wrapt-1.14.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:e3fb1677c720409d5f671e39bac6c9e0e422584e5f518bfd50aa4cbbea02433f"},
{file = "wrapt-1.14.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:642c2e7a804fcf18c222e1060df25fc210b9c58db7c91416fb055897fc27e8cc"},
{file = "wrapt-1.14.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7b7c050ae976e286906dd3f26009e117eb000fb2cf3533398c5ad9ccc86867b1"},
{file = "wrapt-1.14.1-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ef3f72c9666bba2bab70d2a8b79f2c6d2c1a42a7f7e2b0ec83bb2f9e383950af"},
{file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:01c205616a89d09827986bc4e859bcabd64f5a0662a7fe95e0d359424e0e071b"},
{file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:5a0f54ce2c092aaf439813735584b9537cad479575a09892b8352fea5e988dc0"},
{file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:2cf71233a0ed05ccdabe209c606fe0bac7379fdcf687f39b944420d2a09fdb57"},
{file = "wrapt-1.14.1-cp38-cp38-win32.whl", hash = "sha256:aa31fdcc33fef9eb2552cbcbfee7773d5a6792c137b359e82879c101e98584c5"},
{file = "wrapt-1.14.1-cp38-cp38-win_amd64.whl", hash = "sha256:d1967f46ea8f2db647c786e78d8cc7e4313dbd1b0aca360592d8027b8508e24d"},
{file = "wrapt-1.14.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:3232822c7d98d23895ccc443bbdf57c7412c5a65996c30442ebe6ed3df335383"},
{file = "wrapt-1.14.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:988635d122aaf2bdcef9e795435662bcd65b02f4f4c1ae37fbee7401c440b3a7"},
{file = "wrapt-1.14.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9cca3c2cdadb362116235fdbd411735de4328c61425b0aa9f872fd76d02c4e86"},
{file = "wrapt-1.14.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d52a25136894c63de15a35bc0bdc5adb4b0e173b9c0d07a2be9d3ca64a332735"},
{file = "wrapt-1.14.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:40e7bc81c9e2b2734ea4bc1aceb8a8f0ceaac7c5299bc5d69e37c44d9081d43b"},
{file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:b9b7a708dd92306328117d8c4b62e2194d00c365f18eff11a9b53c6f923b01e3"},
{file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:6a9a25751acb379b466ff6be78a315e2b439d4c94c1e99cb7266d40a537995d3"},
{file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:34aa51c45f28ba7f12accd624225e2b1e5a3a45206aa191f6f9aac931d9d56fe"},
{file = "wrapt-1.14.1-cp39-cp39-win32.whl", hash = "sha256:dee0ce50c6a2dd9056c20db781e9c1cfd33e77d2d569f5d1d9321c641bb903d5"},
{file = "wrapt-1.14.1-cp39-cp39-win_amd64.whl", hash = "sha256:dee60e1de1898bde3b238f18340eec6148986da0455d8ba7848d50470a7a32fb"},
{file = "wrapt-1.14.1.tar.gz", hash = "sha256:380a85cf89e0e69b7cfbe2ea9f765f004ff419f34194018a6827ac0e3edfed4d"},
]
xgboost = [
{file = "xgboost-1.6.1-py3-none-macosx_10_15_x86_64.macosx_11_0_x86_64.macosx_12_0_x86_64.whl", hash = "sha256:2b3d4ee105f8434873b40edc511330b8276bf3a8d9d42fb0319973079df30b07"},
{file = "xgboost-1.6.1-py3-none-macosx_12_0_arm64.whl", hash = "sha256:bd3e59a5490e010004106d8ea1d07aa8e048be51a0974fca6b4f00988f087ab8"},
{file = "xgboost-1.6.1-py3-none-manylinux2014_aarch64.whl", hash = "sha256:bbf16af8bf72e8761fcf69fdb5798bd5add6ecb48049198551b13c1d7abeabb5"},
{file = "xgboost-1.6.1-py3-none-manylinux2014_x86_64.whl", hash = "sha256:6207c77f611b54d9f056edede819ead03f0235615675f88030ff9fe10d359551"},
{file = "xgboost-1.6.1-py3-none-win_amd64.whl", hash = "sha256:3adcb7e4ccf774d5e0128c01e5c381303c3799910ab0f2e996160fe3cd23b7fc"},
{file = "xgboost-1.6.1.tar.gz", hash = "sha256:24072028656f3428e7b8aabf77340ece057f273e41f7f85d67ccaefb7454bb18"},
]
zipp = [
{file = "zipp-3.8.1-py3-none-any.whl", hash = "sha256:47c40d7fe183a6f21403a199b3e4192cca5774656965b0a4988ad2f8feb5f009"},
{file = "zipp-3.8.1.tar.gz", hash = "sha256:05b45f1ee8f807d0cc928485ca40a07cb491cf092ff587c0df9cb1fd154848d2"},
]
[[package]]
name = "absl-py"
version = "1.2.0"
description = "Abseil Python Common Libraries, see https://github.com/abseil/abseil-py."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "alabaster"
version = "0.7.12"
description = "A configurable sidebar-enabled Sphinx theme"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "appnope"
version = "0.1.3"
description = "Disable App Nap on macOS >= 10.9"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "argon2-cffi"
version = "21.3.0"
description = "The secure Argon2 password hashing algorithm."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
argon2-cffi-bindings = "*"
[package.extras]
dev = ["pre-commit", "cogapp", "tomli", "coverage[toml] (>=5.0.2)", "hypothesis", "pytest", "sphinx", "sphinx-notfound-page", "furo"]
docs = ["sphinx", "sphinx-notfound-page", "furo"]
tests = ["coverage[toml] (>=5.0.2)", "hypothesis", "pytest"]
[[package]]
name = "argon2-cffi-bindings"
version = "21.2.0"
description = "Low-level CFFI bindings for Argon2"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cffi = ">=1.0.1"
[package.extras]
dev = ["pytest", "cogapp", "pre-commit", "wheel"]
tests = ["pytest"]
[[package]]
name = "asttokens"
version = "2.0.8"
description = "Annotate AST trees with source code positions"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
six = "*"
[package.extras]
test = ["pytest", "astroid (<=2.5.3)"]
[[package]]
name = "astunparse"
version = "1.6.3"
description = "An AST unparser for Python"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
six = ">=1.6.1,<2.0"
[[package]]
name = "atomicwrites"
version = "1.4.1"
description = "Atomic file writes."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "attrs"
version = "22.1.0"
description = "Classes Without Boilerplate"
category = "dev"
optional = false
python-versions = ">=3.5"
[package.extras]
dev = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "mypy (>=0.900,!=0.940)", "pytest-mypy-plugins", "zope.interface", "furo", "sphinx", "sphinx-notfound-page", "pre-commit", "cloudpickle"]
docs = ["furo", "sphinx", "zope.interface", "sphinx-notfound-page"]
tests = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "mypy (>=0.900,!=0.940)", "pytest-mypy-plugins", "zope.interface", "cloudpickle"]
tests_no_zope = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "mypy (>=0.900,!=0.940)", "pytest-mypy-plugins", "cloudpickle"]
[[package]]
name = "babel"
version = "2.10.3"
description = "Internationalization utilities"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
pytz = ">=2015.7"
[[package]]
name = "backcall"
version = "0.2.0"
description = "Specifications for callback functions passed in to an API"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "backports.zoneinfo"
version = "0.2.1"
description = "Backport of the standard library zoneinfo module"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
tzdata = ["tzdata"]
[[package]]
name = "beautifulsoup4"
version = "4.11.1"
description = "Screen-scraping library"
category = "dev"
optional = false
python-versions = ">=3.6.0"
[package.dependencies]
soupsieve = ">1.2"
[package.extras]
html5lib = ["html5lib"]
lxml = ["lxml"]
[[package]]
name = "black"
version = "22.6.0"
description = "The uncompromising code formatter."
category = "dev"
optional = false
python-versions = ">=3.6.2"
[package.dependencies]
click = ">=8.0.0"
mypy-extensions = ">=0.4.3"
pathspec = ">=0.9.0"
platformdirs = ">=2"
tomli = {version = ">=1.1.0", markers = "python_full_version < \"3.11.0a7\""}
typing-extensions = {version = ">=3.10.0.0", markers = "python_version < \"3.10\""}
[package.extras]
colorama = ["colorama (>=0.4.3)"]
d = ["aiohttp (>=3.7.4)"]
jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"]
uvloop = ["uvloop (>=0.15.2)"]
[[package]]
name = "bleach"
version = "5.0.1"
description = "An easy safelist-based HTML-sanitizing tool."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
six = ">=1.9.0"
webencodings = "*"
[package.extras]
css = ["tinycss2 (>=1.1.0,<1.2)"]
dev = ["build (==0.8.0)", "flake8 (==4.0.1)", "hashin (==0.17.0)", "pip-tools (==6.6.2)", "pytest (==7.1.2)", "Sphinx (==4.3.2)", "tox (==3.25.0)", "twine (==4.0.1)", "wheel (==0.37.1)", "black (==22.3.0)", "mypy (==0.961)"]
[[package]]
name = "cachetools"
version = "5.2.0"
description = "Extensible memoizing collections and decorators"
category = "dev"
optional = false
python-versions = "~=3.7"
[[package]]
name = "causalml"
version = "0.12.3"
description = "Python Package for Uplift Modeling and Causal Inference with Machine Learning Algorithms"
category = "main"
optional = true
python-versions = ">=3.6"
[package.dependencies]
Cython = ">=0.28.0"
dill = "*"
graphviz = "*"
lightgbm = "*"
matplotlib = "*"
numpy = ">=1.18.5"
packaging = "*"
pandas = ">=0.24.1"
pydotplus = "*"
pygam = "*"
pyro-ppl = "*"
scikit-learn = ">=0.22.0"
scipy = ">=1.4.1"
seaborn = "*"
shap = "*"
statsmodels = ">=0.9.0"
torch = "*"
tqdm = "*"
xgboost = "*"
[package.extras]
tf = ["tensorflow (>=2.4.0)"]
[[package]]
name = "certifi"
version = "2022.6.15"
description = "Python package for providing Mozilla's CA Bundle."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "cffi"
version = "1.15.1"
description = "Foreign Function Interface for Python calling C code."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
pycparser = "*"
[[package]]
name = "charset-normalizer"
version = "2.1.0"
description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet."
category = "dev"
optional = false
python-versions = ">=3.6.0"
[package.extras]
unicode_backport = ["unicodedata2"]
[[package]]
name = "click"
version = "8.1.3"
description = "Composable command line interface toolkit"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
colorama = {version = "*", markers = "platform_system == \"Windows\""}
[[package]]
name = "cloudpickle"
version = "2.1.0"
description = "Extended pickling support for Python objects"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "colorama"
version = "0.4.5"
description = "Cross-platform colored terminal text."
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "commonmark"
version = "0.9.1"
description = "Python parser for the CommonMark Markdown spec"
category = "dev"
optional = false
python-versions = "*"
[package.extras]
test = ["hypothesis (==3.55.3)", "flake8 (==3.7.8)"]
[[package]]
name = "cryptography"
version = "37.0.4"
description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cffi = ">=1.12"
[package.extras]
docs = ["sphinx (>=1.6.5,!=1.8.0,!=3.1.0,!=3.1.1)", "sphinx-rtd-theme"]
docstest = ["pyenchant (>=1.6.11)", "twine (>=1.12.0)", "sphinxcontrib-spelling (>=4.0.1)"]
pep8test = ["black", "flake8", "flake8-import-order", "pep8-naming"]
sdist = ["setuptools_rust (>=0.11.4)"]
ssh = ["bcrypt (>=3.1.5)"]
test = ["pytest (>=6.2.0)", "pytest-benchmark", "pytest-cov", "pytest-subtests", "pytest-xdist", "pretend", "iso8601", "pytz", "hypothesis (>=1.11.4,!=3.79.2)"]
[[package]]
name = "cycler"
version = "0.11.0"
description = "Composable style cycles"
category = "main"
optional = true
python-versions = ">=3.6"
[[package]]
name = "cython"
version = "0.29.32"
description = "The Cython compiler for writing C extensions for the Python language."
category = "main"
optional = true
python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "debugpy"
version = "1.6.3"
description = "An implementation of the Debug Adapter Protocol for Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "decorator"
version = "5.1.1"
description = "Decorators for Humans"
category = "dev"
optional = false
python-versions = ">=3.5"
[[package]]
name = "defusedxml"
version = "0.7.1"
description = "XML bomb protection for Python stdlib modules"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "dill"
version = "0.3.5.1"
description = "serialize all of python"
category = "main"
optional = true
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*, !=3.6.*"
[package.extras]
graph = ["objgraph (>=1.7.2)"]
[[package]]
name = "docutils"
version = "0.17.1"
description = "Docutils -- Python Documentation Utilities"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "econml"
version = "0.12.0"
description = "This package contains several methods for calculating Conditional Average Treatment Effects"
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
dowhy = "*"
joblib = ">=0.13.0"
lightgbm = "*"
numba = "!=0.42.1"
numpy = "*"
pandas = "*"
scikit-learn = ">0.22.0"
scipy = ">1.4.0"
shap = ">=0.38.1,<0.40.0"
sparse = "*"
statsmodels = ">=0.10"
[package.extras]
all = ["azure-cli", "keras (<2.4)", "tensorflow (>1.10,<2.3)", "matplotlib"]
automl = ["azure-cli"]
plt = ["graphviz", "matplotlib"]
tf = ["keras (<2.4)", "tensorflow (>1.10,<2.3)"]
[[package]]
name = "entrypoints"
version = "0.4"
description = "Discover and load entry points from installed packages."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "executing"
version = "0.10.0"
description = "Get the currently executing AST node of a frame, and other information"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "fastjsonschema"
version = "2.16.1"
description = "Fastest Python implementation of JSON schema"
category = "dev"
optional = false
python-versions = "*"
[package.extras]
devel = ["colorama", "jsonschema", "json-spec", "pylint", "pytest", "pytest-benchmark", "pytest-cache", "validictory"]
[[package]]
name = "flake8"
version = "4.0.1"
description = "the modular source code checker: pep8 pyflakes and co"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
mccabe = ">=0.6.0,<0.7.0"
pycodestyle = ">=2.8.0,<2.9.0"
pyflakes = ">=2.4.0,<2.5.0"
[[package]]
name = "flaky"
version = "3.7.0"
description = "Plugin for nose or pytest that automatically reruns flaky tests."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "flatbuffers"
version = "1.12"
description = "The FlatBuffers serialization format for Python"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "fonttools"
version = "4.36.0"
description = "Tools to manipulate font files"
category = "main"
optional = true
python-versions = ">=3.7"
[package.extras]
all = ["fs (>=2.2.0,<3)", "lxml (>=4.0,<5)", "zopfli (>=0.1.4)", "lz4 (>=1.7.4.2)", "matplotlib", "sympy", "skia-pathops (>=0.5.0)", "uharfbuzz (>=0.23.0)", "brotlicffi (>=0.8.0)", "scipy", "brotli (>=1.0.1)", "munkres", "unicodedata2 (>=14.0.0)", "xattr"]
graphite = ["lz4 (>=1.7.4.2)"]
interpolatable = ["scipy", "munkres"]
lxml = ["lxml (>=4.0,<5)"]
pathops = ["skia-pathops (>=0.5.0)"]
plot = ["matplotlib"]
repacker = ["uharfbuzz (>=0.23.0)"]
symfont = ["sympy"]
type1 = ["xattr"]
ufo = ["fs (>=2.2.0,<3)"]
unicode = ["unicodedata2 (>=14.0.0)"]
woff = ["zopfli (>=0.1.4)", "brotlicffi (>=0.8.0)", "brotli (>=1.0.1)"]
[[package]]
name = "future"
version = "0.18.2"
description = "Clean single-source support for Python 3 and 2"
category = "main"
optional = true
python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "gast"
version = "0.4.0"
description = "Python AST that abstracts the underlying Python version"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "google-auth"
version = "2.10.0"
description = "Google Authentication Library"
category = "dev"
optional = false
python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*"
[package.dependencies]
cachetools = ">=2.0.0,<6.0"
pyasn1-modules = ">=0.2.1"
rsa = {version = ">=3.1.4,<5", markers = "python_version >= \"3.6\""}
six = ">=1.9.0"
[package.extras]
reauth = ["pyu2f (>=0.1.5)"]
pyopenssl = ["pyopenssl (>=20.0.0)"]
enterprise_cert = ["pyopenssl (==22.0.0)", "cryptography (==36.0.2)"]
aiohttp = ["aiohttp (>=3.6.2,<4.0.0dev)", "requests (>=2.20.0,<3.0.0dev)"]
[[package]]
name = "google-auth-oauthlib"
version = "0.4.6"
description = "Google Authentication Library"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
google-auth = ">=1.0.0"
requests-oauthlib = ">=0.7.0"
[package.extras]
tool = ["click (>=6.0.0)"]
[[package]]
name = "google-pasta"
version = "0.2.0"
description = "pasta is an AST-based Python refactoring library"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
six = "*"
[[package]]
name = "graphviz"
version = "0.20.1"
description = "Simple Python interface for Graphviz"
category = "main"
optional = true
python-versions = ">=3.7"
[package.extras]
dev = ["tox (>=3)", "flake8", "pep8-naming", "wheel", "twine"]
docs = ["sphinx (>=5)", "sphinx-autodoc-typehints", "sphinx-rtd-theme"]
test = ["pytest (>=7)", "pytest-mock (>=3)", "mock (>=4)", "pytest-cov", "coverage"]
[[package]]
name = "grpcio"
version = "1.48.0"
description = "HTTP/2-based RPC framework"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
six = ">=1.5.2"
[package.extras]
protobuf = ["grpcio-tools (>=1.48.0)"]
[[package]]
name = "h5py"
version = "3.7.0"
description = "Read and write HDF5 files from Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
numpy = ">=1.14.5"
[[package]]
name = "idna"
version = "3.3"
description = "Internationalized Domain Names in Applications (IDNA)"
category = "dev"
optional = false
python-versions = ">=3.5"
[[package]]
name = "imagesize"
version = "1.4.1"
description = "Getting image size from png/jpeg/jpeg2000/gif file"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "importlib-metadata"
version = "4.12.0"
description = "Read metadata from Python packages"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
zipp = ">=0.5"
[package.extras]
docs = ["sphinx", "jaraco.packaging (>=9)", "rst.linker (>=1.9)"]
perf = ["ipython"]
testing = ["pytest (>=6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest-cov", "pytest-enabler (>=1.3)", "packaging", "pyfakefs", "flufl.flake8", "pytest-perf (>=0.9.2)", "pytest-black (>=0.3.7)", "pytest-mypy (>=0.9.1)", "importlib-resources (>=1.3)"]
[[package]]
name = "importlib-resources"
version = "5.9.0"
description = "Read resources from Python packages"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
zipp = {version = ">=3.1.0", markers = "python_version < \"3.10\""}
[package.extras]
docs = ["sphinx", "jaraco.packaging (>=9)", "rst.linker (>=1.9)", "jaraco.tidelift (>=1.4)"]
testing = ["pytest (>=6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-black (>=0.3.7)", "pytest-mypy (>=0.9.1)"]
[[package]]
name = "iniconfig"
version = "1.1.1"
description = "iniconfig: brain-dead simple config-ini parsing"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "ipykernel"
version = "6.15.1"
description = "IPython Kernel for Jupyter"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
appnope = {version = "*", markers = "platform_system == \"Darwin\""}
debugpy = ">=1.0"
ipython = ">=7.23.1"
jupyter-client = ">=6.1.12"
matplotlib-inline = ">=0.1"
nest-asyncio = "*"
packaging = "*"
psutil = "*"
pyzmq = ">=17"
tornado = ">=6.1"
traitlets = ">=5.1.0"
[package.extras]
test = ["flaky", "ipyparallel", "pre-commit", "pytest-cov", "pytest-timeout", "pytest (>=6.0)"]
[[package]]
name = "ipython"
version = "8.4.0"
description = "IPython: Productive Interactive Computing"
category = "dev"
optional = false
python-versions = ">=3.8"
[package.dependencies]
appnope = {version = "*", markers = "sys_platform == \"darwin\""}
backcall = "*"
colorama = {version = "*", markers = "sys_platform == \"win32\""}
decorator = "*"
jedi = ">=0.16"
matplotlib-inline = "*"
pexpect = {version = ">4.3", markers = "sys_platform != \"win32\""}
pickleshare = "*"
prompt-toolkit = ">=2.0.0,<3.0.0 || >3.0.0,<3.0.1 || >3.0.1,<3.1.0"
pygments = ">=2.4.0"
stack-data = "*"
traitlets = ">=5"
[package.extras]
all = ["black", "Sphinx (>=1.3)", "ipykernel", "nbconvert", "nbformat", "ipywidgets", "notebook", "ipyparallel", "qtconsole", "pytest (<7.1)", "pytest-asyncio", "testpath", "curio", "matplotlib (!=3.2.0)", "numpy (>=1.19)", "pandas", "trio"]
black = ["black"]
doc = ["Sphinx (>=1.3)"]
kernel = ["ipykernel"]
nbconvert = ["nbconvert"]
nbformat = ["nbformat"]
notebook = ["ipywidgets", "notebook"]
parallel = ["ipyparallel"]
qtconsole = ["qtconsole"]
test = ["pytest (<7.1)", "pytest-asyncio", "testpath"]
test_extra = ["pytest (<7.1)", "pytest-asyncio", "testpath", "curio", "matplotlib (!=3.2.0)", "nbformat", "numpy (>=1.19)", "pandas", "trio"]
[[package]]
name = "ipython-genutils"
version = "0.2.0"
description = "Vestigial utilities from IPython"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "ipywidgets"
version = "8.0.1"
description = "Jupyter interactive widgets"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
ipykernel = ">=4.5.1"
ipython = ">=6.1.0"
jupyterlab-widgets = ">=3.0,<4.0"
traitlets = ">=4.3.1"
widgetsnbextension = ">=4.0,<5.0"
[package.extras]
test = ["jsonschema", "pytest (>=3.6.0)", "pytest-cov", "pytz"]
[[package]]
name = "isort"
version = "5.10.1"
description = "A Python utility / library to sort Python imports."
category = "dev"
optional = false
python-versions = ">=3.6.1,<4.0"
[package.extras]
pipfile_deprecated_finder = ["pipreqs", "requirementslib"]
requirements_deprecated_finder = ["pipreqs", "pip-api"]
colors = ["colorama (>=0.4.3,<0.5.0)"]
plugins = ["setuptools"]
[[package]]
name = "jedi"
version = "0.18.1"
description = "An autocompletion tool for Python that can be used for text editors."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
parso = ">=0.8.0,<0.9.0"
[package.extras]
qa = ["flake8 (==3.8.3)", "mypy (==0.782)"]
testing = ["Django (<3.1)", "colorama", "docopt", "pytest (<7.0.0)"]
[[package]]
name = "jeepney"
version = "0.8.0"
description = "Low-level, pure Python DBus protocol wrapper."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
trio = ["async-generator", "trio"]
test = ["async-timeout", "trio", "testpath", "pytest-asyncio (>=0.17)", "pytest-trio", "pytest"]
[[package]]
name = "jinja2"
version = "3.1.2"
description = "A very fast and expressive template engine."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
MarkupSafe = ">=2.0"
[package.extras]
i18n = ["Babel (>=2.7)"]
[[package]]
name = "joblib"
version = "1.1.0"
description = "Lightweight pipelining with Python functions"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "jsonschema"
version = "4.13.0"
description = "An implementation of JSON Schema validation for Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
attrs = ">=17.4.0"
importlib-resources = {version = ">=1.4.0", markers = "python_version < \"3.9\""}
pkgutil-resolve-name = {version = ">=1.3.10", markers = "python_version < \"3.9\""}
pyrsistent = ">=0.14.0,<0.17.0 || >0.17.0,<0.17.1 || >0.17.1,<0.17.2 || >0.17.2"
[package.extras]
format-nongpl = ["webcolors (>=1.11)", "uri-template", "rfc3986-validator (>0.1.0)", "rfc3339-validator", "jsonpointer (>1.13)", "isoduration", "idna", "fqdn"]
format = ["webcolors (>=1.11)", "uri-template", "rfc3987", "rfc3339-validator", "jsonpointer (>1.13)", "isoduration", "idna", "fqdn"]
[[package]]
name = "jupyter"
version = "1.0.0"
description = "Jupyter metapackage. Install all the Jupyter components in one go."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
ipykernel = "*"
ipywidgets = "*"
jupyter-console = "*"
nbconvert = "*"
notebook = "*"
qtconsole = "*"
[[package]]
name = "jupyter-client"
version = "7.3.4"
description = "Jupyter protocol implementation and client libraries"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
entrypoints = "*"
jupyter-core = ">=4.9.2"
nest-asyncio = ">=1.5.4"
python-dateutil = ">=2.8.2"
pyzmq = ">=23.0"
tornado = ">=6.0"
traitlets = "*"
[package.extras]
doc = ["ipykernel", "myst-parser", "sphinx-rtd-theme", "sphinx (>=1.3.6)", "sphinxcontrib-github-alt"]
test = ["codecov", "coverage", "ipykernel (>=6.5)", "ipython", "mypy", "pre-commit", "pytest", "pytest-asyncio (>=0.18)", "pytest-cov", "pytest-timeout"]
[[package]]
name = "jupyter-console"
version = "6.4.4"
description = "Jupyter terminal console"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
ipykernel = "*"
ipython = "*"
jupyter-client = ">=7.0.0"
prompt-toolkit = ">=2.0.0,<3.0.0 || >3.0.0,<3.0.1 || >3.0.1,<3.1.0"
pygments = "*"
[package.extras]
test = ["pexpect"]
[[package]]
name = "jupyter-core"
version = "4.11.1"
description = "Jupyter core package. A base package on which Jupyter projects rely."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
pywin32 = {version = ">=1.0", markers = "sys_platform == \"win32\" and platform_python_implementation != \"PyPy\""}
traitlets = "*"
[package.extras]
test = ["ipykernel", "pre-commit", "pytest", "pytest-cov", "pytest-timeout"]
[[package]]
name = "jupyterlab-pygments"
version = "0.2.2"
description = "Pygments theme using JupyterLab CSS variables"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "jupyterlab-widgets"
version = "3.0.2"
description = "Jupyter interactive widgets for JupyterLab"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "keras"
version = "2.9.0"
description = "Deep learning for humans."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "keras-preprocessing"
version = "1.1.2"
description = "Easy data preprocessing and data augmentation for deep learning models"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
numpy = ">=1.9.1"
six = ">=1.9.0"
[package.extras]
tests = ["pytest-cov", "pytest-xdist", "pytest", "keras", "tensorflow", "pillow", "pandas"]
pep8 = ["flake8"]
image = ["Pillow (>=5.2.0)", "scipy (>=0.14)"]
[[package]]
name = "keyring"
version = "23.8.2"
description = "Store and access your passwords safely."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
importlib-metadata = {version = ">=3.6", markers = "python_version < \"3.10\""}
jeepney = {version = ">=0.4.2", markers = "sys_platform == \"linux\""}
pywin32-ctypes = {version = "<0.1.0 || >0.1.0,<0.1.1 || >0.1.1", markers = "sys_platform == \"win32\""}
SecretStorage = {version = ">=3.2", markers = "sys_platform == \"linux\""}
[package.extras]
testing = ["pytest-mypy (>=0.9.1)", "pytest-black (>=0.3.7)", "pytest-enabler (>=1.3)", "pytest-cov", "flake8 (<5)", "pytest-flake8", "pytest-checkdocs (>=2.4)", "pytest (>=6)"]
docs = ["jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "jaraco.packaging (>=9)", "sphinx"]
[[package]]
name = "kiwisolver"
version = "1.4.4"
description = "A fast implementation of the Cassowary constraint solver"
category = "main"
optional = true
python-versions = ">=3.7"
[[package]]
name = "libclang"
version = "14.0.6"
description = "Clang Python Bindings, mirrored from the official LLVM repo: https://github.com/llvm/llvm-project/tree/main/clang/bindings/python, to make the installation process easier."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "lightgbm"
version = "3.3.2"
description = "LightGBM Python Package"
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
numpy = "*"
scikit-learn = "!=0.22.0"
scipy = "*"
[package.extras]
dask = ["dask[array] (>=2.0.0)", "dask[dataframe] (>=2.0.0)", "dask[distributed] (>=2.0.0)", "pandas"]
[[package]]
name = "llvmlite"
version = "0.36.0"
description = "lightweight wrapper around basic LLVM functionality"
category = "main"
optional = false
python-versions = ">=3.6,<3.10"
[[package]]
name = "markdown"
version = "3.4.1"
description = "Python implementation of Markdown."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
importlib-metadata = {version = ">=4.4", markers = "python_version < \"3.10\""}
[package.extras]
testing = ["coverage", "pyyaml"]
[[package]]
name = "markupsafe"
version = "2.1.1"
description = "Safely add untrusted strings to HTML/XML markup."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "matplotlib"
version = "3.5.3"
description = "Python plotting package"
category = "main"
optional = true
python-versions = ">=3.7"
[package.dependencies]
cycler = ">=0.10"
fonttools = ">=4.22.0"
kiwisolver = ">=1.0.1"
numpy = ">=1.17"
packaging = ">=20.0"
pillow = ">=6.2.0"
pyparsing = ">=2.2.1"
python-dateutil = ">=2.7"
setuptools_scm = ">=4,<7"
[[package]]
name = "matplotlib-inline"
version = "0.1.6"
description = "Inline Matplotlib backend for Jupyter"
category = "dev"
optional = false
python-versions = ">=3.5"
[package.dependencies]
traitlets = "*"
[[package]]
name = "mccabe"
version = "0.6.1"
description = "McCabe checker, plugin for flake8"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "mistune"
version = "2.0.4"
description = "A sane Markdown parser with useful plugins and renderers"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "mpmath"
version = "1.2.1"
description = "Python library for arbitrary-precision floating-point arithmetic"
category = "main"
optional = false
python-versions = "*"
[package.extras]
tests = ["pytest (>=4.6)"]
develop = ["wheel", "codecov", "pytest-cov", "pycodestyle", "pytest (>=4.6)"]
[[package]]
name = "mypy"
version = "0.971"
description = "Optional static typing for Python"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
mypy-extensions = ">=0.4.3"
tomli = {version = ">=1.1.0", markers = "python_version < \"3.11\""}
typing-extensions = ">=3.10"
[package.extras]
dmypy = ["psutil (>=4.0)"]
python2 = ["typed-ast (>=1.4.0,<2)"]
reports = ["lxml"]
[[package]]
name = "mypy-extensions"
version = "0.4.3"
description = "Experimental type system extensions for programs checked with the mypy typechecker."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "nbclient"
version = "0.6.6"
description = "A client library for executing notebooks. Formerly nbconvert's ExecutePreprocessor."
category = "dev"
optional = false
python-versions = ">=3.7.0"
[package.dependencies]
jupyter-client = ">=6.1.5"
nbformat = ">=5.0"
nest-asyncio = "*"
traitlets = ">=5.2.2"
[package.extras]
sphinx = ["autodoc-traits", "mock", "moto", "myst-parser", "Sphinx (>=1.7)", "sphinx-book-theme"]
test = ["black", "check-manifest", "flake8", "ipykernel", "ipython (<8.0.0)", "ipywidgets (<8.0.0)", "mypy", "pip (>=18.1)", "pre-commit", "pytest (>=4.1)", "pytest-asyncio", "pytest-cov (>=2.6.1)", "setuptools (>=60.0)", "testpath", "twine (>=1.11.0)", "xmltodict"]
[[package]]
name = "nbconvert"
version = "7.0.0rc3"
description = "Converting Jupyter Notebooks"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
beautifulsoup4 = "*"
bleach = "*"
defusedxml = "*"
importlib-metadata = {version = ">=3.6", markers = "python_version < \"3.10\""}
jinja2 = ">=3.0"
jupyter-core = ">=4.7"
jupyterlab-pygments = "*"
markupsafe = ">=2.0"
mistune = ">=2.0.2,<3"
nbclient = ">=0.5.0"
nbformat = ">=5.1"
packaging = "*"
pandocfilters = ">=1.4.1"
pygments = ">=2.4.1"
tinycss2 = "*"
traitlets = ">=5.0"
[package.extras]
all = ["ipykernel", "ipython", "ipywidgets (>=7)", "nbsphinx (>=0.2.12)", "pre-commit", "pyppeteer (>=1,<1.1)", "pytest", "pytest-cov", "pytest-dependency", "sphinx-rtd-theme", "sphinx (>=1.5.1)", "tornado (>=6.1)"]
docs = ["ipython", "nbsphinx (>=0.2.12)", "sphinx-rtd-theme", "sphinx (>=1.5.1)"]
serve = ["tornado (>=6.1)"]
test = ["ipykernel", "ipywidgets (>=7)", "pre-commit", "pyppeteer (>=1,<1.1)", "pytest", "pytest-cov", "pytest-dependency"]
webpdf = ["pyppeteer (>=1,<1.1)"]
[[package]]
name = "nbformat"
version = "5.4.0"
description = "The Jupyter Notebook format"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
fastjsonschema = "*"
jsonschema = ">=2.6"
jupyter-core = "*"
traitlets = ">=5.1"
[package.extras]
test = ["check-manifest", "testpath", "pytest", "pre-commit"]
[[package]]
name = "nbsphinx"
version = "0.8.9"
description = "Jupyter Notebook Tools for Sphinx"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
docutils = "*"
jinja2 = "*"
nbconvert = "!=5.4"
nbformat = "*"
sphinx = ">=1.8"
traitlets = ">=5"
[[package]]
name = "nest-asyncio"
version = "1.5.5"
description = "Patch asyncio to allow nested event loops"
category = "dev"
optional = false
python-versions = ">=3.5"
[[package]]
name = "networkx"
version = "2.8.5"
description = "Python package for creating and manipulating graphs and networks"
category = "main"
optional = false
python-versions = ">=3.8"
[package.extras]
default = ["numpy (>=1.19)", "scipy (>=1.8)", "matplotlib (>=3.4)", "pandas (>=1.3)"]
developer = ["pre-commit (>=2.19)", "mypy (>=0.960)"]
doc = ["sphinx (>=5)", "pydata-sphinx-theme (>=0.9)", "sphinx-gallery (>=0.10)", "numpydoc (>=1.4)", "pillow (>=9.1)", "nb2plots (>=0.6)", "texext (>=0.6.6)"]
extra = ["lxml (>=4.6)", "pygraphviz (>=1.9)", "pydot (>=1.4.2)", "sympy (>=1.10)"]
test = ["pytest (>=7.1)", "pytest-cov (>=3.0)", "codecov (>=2.1)"]
[[package]]
name = "notebook"
version = "6.4.12"
description = "A web-based notebook environment for interactive computing"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
argon2-cffi = "*"
ipykernel = "*"
ipython-genutils = "*"
jinja2 = "*"
jupyter-client = ">=5.3.4"
jupyter-core = ">=4.6.1"
nbconvert = ">=5"
nbformat = "*"
nest-asyncio = ">=1.5"
prometheus-client = "*"
pyzmq = ">=17"
Send2Trash = ">=1.8.0"
terminado = ">=0.8.3"
tornado = ">=6.1"
traitlets = ">=4.2.1"
[package.extras]
docs = ["sphinx", "nbsphinx", "sphinxcontrib-github-alt", "sphinx-rtd-theme", "myst-parser"]
json-logging = ["json-logging"]
test = ["pytest", "coverage", "requests", "testpath", "nbval", "selenium", "pytest-cov", "requests-unixsocket"]
[[package]]
name = "numba"
version = "0.53.1"
description = "compiling Python code using LLVM"
category = "main"
optional = false
python-versions = ">=3.6,<3.10"
[package.dependencies]
llvmlite = ">=0.36.0rc1,<0.37"
numpy = ">=1.15"
[[package]]
name = "numpy"
version = "1.23.2"
description = "NumPy is the fundamental package for array computing with Python."
category = "main"
optional = false
python-versions = ">=3.8"
[[package]]
name = "oauthlib"
version = "3.2.0"
description = "A generic, spec-compliant, thorough implementation of the OAuth request-signing logic"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
rsa = ["cryptography (>=3.0.0)"]
signals = ["blinker (>=1.4.0)"]
signedtoken = ["cryptography (>=3.0.0)", "pyjwt (>=2.0.0,<3)"]
[[package]]
name = "opt-einsum"
version = "3.3.0"
description = "Optimizing numpys einsum function"
category = "main"
optional = false
python-versions = ">=3.5"
[package.dependencies]
numpy = ">=1.7"
[package.extras]
docs = ["sphinx (==1.2.3)", "sphinxcontrib-napoleon", "sphinx-rtd-theme", "numpydoc"]
tests = ["pytest", "pytest-cov", "pytest-pep8"]
[[package]]
name = "packaging"
version = "21.3"
description = "Core utilities for Python packages"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
pyparsing = ">=2.0.2,<3.0.5 || >3.0.5"
[[package]]
name = "pandas"
version = "1.4.3"
description = "Powerful data structures for data analysis, time series, and statistics"
category = "main"
optional = false
python-versions = ">=3.8"
[package.dependencies]
numpy = [
{version = ">=1.18.5", markers = "platform_machine != \"aarch64\" and platform_machine != \"arm64\" and python_version < \"3.10\""},
{version = ">=1.19.2", markers = "platform_machine == \"aarch64\" and python_version < \"3.10\""},
{version = ">=1.20.0", markers = "platform_machine == \"arm64\" and python_version < \"3.10\""},
]
python-dateutil = ">=2.8.1"
pytz = ">=2020.1"
[package.extras]
test = ["hypothesis (>=5.5.3)", "pytest (>=6.0)", "pytest-xdist (>=1.31)"]
[[package]]
name = "pandoc"
version = "2.2"
description = "Pandoc Documents for Python"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
plumbum = "*"
ply = "*"
[[package]]
name = "pandocfilters"
version = "1.5.0"
description = "Utilities for writing pandoc filters in python"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "parso"
version = "0.8.3"
description = "A Python Parser"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
qa = ["flake8 (==3.8.3)", "mypy (==0.782)"]
testing = ["docopt", "pytest (<6.0.0)"]
[[package]]
name = "pastel"
version = "0.2.1"
description = "Bring colors to your terminal."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pathspec"
version = "0.9.0"
description = "Utility library for gitignore style pattern matching of file paths."
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7"
[[package]]
name = "patsy"
version = "0.5.2"
description = "A Python package for describing statistical models and for building design matrices."
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
numpy = ">=1.4"
six = "*"
[package.extras]
test = ["scipy", "pytest-cov", "pytest"]
[[package]]
name = "pexpect"
version = "4.8.0"
description = "Pexpect allows easy control of interactive console applications."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
ptyprocess = ">=0.5"
[[package]]
name = "pickleshare"
version = "0.7.5"
description = "Tiny 'shelve'-like database with concurrency support"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pillow"
version = "9.2.0"
description = "Python Imaging Library (Fork)"
category = "main"
optional = true
python-versions = ">=3.7"
[package.extras]
docs = ["furo", "olefile", "sphinx (>=2.4)", "sphinx-copybutton", "sphinx-issues (>=3.0.1)", "sphinx-removed-in", "sphinxext-opengraph"]
tests = ["check-manifest", "coverage", "defusedxml", "markdown2", "olefile", "packaging", "pyroma", "pytest", "pytest-cov", "pytest-timeout"]
[[package]]
name = "pkginfo"
version = "1.8.3"
description = "Query metadatdata from sdists / bdists / installed packages."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*"
[package.extras]
testing = ["nose", "coverage"]
[[package]]
name = "pkgutil-resolve-name"
version = "1.3.10"
description = "Resolve a name to an object."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "platformdirs"
version = "2.5.2"
description = "A small Python module for determining appropriate platform-specific dirs, e.g. a \"user data dir\"."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
docs = ["furo (>=2021.7.5b38)", "proselint (>=0.10.2)", "sphinx-autodoc-typehints (>=1.12)", "sphinx (>=4)"]
test = ["appdirs (==1.4.4)", "pytest-cov (>=2.7)", "pytest-mock (>=3.6)", "pytest (>=6)"]
[[package]]
name = "pluggy"
version = "1.0.0"
description = "plugin and hook calling mechanisms for python"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
testing = ["pytest-benchmark", "pytest"]
dev = ["tox", "pre-commit"]
[[package]]
name = "plumbum"
version = "1.7.2"
description = "Plumbum: shell combinators library"
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7"
[package.dependencies]
pywin32 = {version = "*", markers = "platform_system == \"Windows\" and platform_python_implementation != \"PyPy\""}
[package.extras]
dev = ["paramiko", "psutil", "pytest", "pytest-cov", "pytest-mock", "pytest-timeout"]
docs = ["Sphinx (>=3.0.0)", "sphinx-rtd-theme (>=0.5.0)"]
ssh = ["paramiko"]
[[package]]
name = "ply"
version = "3.11"
description = "Python Lex & Yacc"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "poethepoet"
version = "0.16.0"
description = "A task runner that works well with poetry."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
pastel = ">=0.2.1,<0.3.0"
tomli = ">=1.2.2"
[package.extras]
poetry_plugin = ["poetry (>=1.0,<2.0)"]
[[package]]
name = "progressbar2"
version = "4.0.0"
description = "A Python Progressbar library to provide visual (yet text based) progress to long running operations."
category = "main"
optional = true
python-versions = ">=3.7.0"
[package.dependencies]
python-utils = ">=3.0.0"
[package.extras]
docs = ["sphinx (>=1.8.5)"]
tests = ["flake8 (>=3.7.7)", "pytest (>=4.6.9)", "pytest-cov (>=2.6.1)", "pytest-mypy", "freezegun (>=0.3.11)", "sphinx (>=1.8.5)"]
[[package]]
name = "prometheus-client"
version = "0.14.1"
description = "Python client for the Prometheus monitoring system."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
twisted = ["twisted"]
[[package]]
name = "prompt-toolkit"
version = "3.0.30"
description = "Library for building powerful interactive command lines in Python"
category = "dev"
optional = false
python-versions = ">=3.6.2"
[package.dependencies]
wcwidth = "*"
[[package]]
name = "protobuf"
version = "3.19.4"
description = "Protocol Buffers"
category = "dev"
optional = false
python-versions = ">=3.5"
[[package]]
name = "psutil"
version = "5.9.1"
description = "Cross-platform lib for process and system monitoring in Python."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.extras]
test = ["ipaddress", "mock", "enum34", "pywin32", "wmi"]
[[package]]
name = "ptyprocess"
version = "0.7.0"
description = "Run a subprocess in a pseudo terminal"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pure-eval"
version = "0.2.2"
description = "Safely evaluate AST nodes without side effects"
category = "dev"
optional = false
python-versions = "*"
[package.extras]
tests = ["pytest"]
[[package]]
name = "py"
version = "1.11.0"
description = "library with cross-python path, ini-parsing, io, code, log facilities"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "pyasn1"
version = "0.4.8"
description = "ASN.1 types and codecs"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pyasn1-modules"
version = "0.2.8"
description = "A collection of ASN.1-based protocols modules."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
pyasn1 = ">=0.4.6,<0.5.0"
[[package]]
name = "pycodestyle"
version = "2.8.0"
description = "Python style guide checker"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "pycparser"
version = "2.21"
description = "C parser in Python"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pydata-sphinx-theme"
version = "0.9.0"
description = "Bootstrap-based Sphinx theme from the PyData community"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
beautifulsoup4 = "*"
docutils = "!=0.17.0"
packaging = "*"
sphinx = ">=4.0.2"
[package.extras]
dev = ["pydata-sphinx-theme", "nox", "pre-commit", "pyyaml"]
coverage = ["pydata-sphinx-theme", "codecov", "pytest-cov"]
test = ["pydata-sphinx-theme", "pytest"]
doc = ["sphinx-design", "xarray", "numpy", "plotly", "jupyter-sphinx", "sphinx-sitemap", "sphinxext-rediraffe", "pytest-regressions", "pytest", "pandas", "myst-parser", "numpydoc"]
[[package]]
name = "pydot"
version = "1.4.2"
description = "Python interface to Graphviz's Dot"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.dependencies]
pyparsing = ">=2.1.4"
[[package]]
name = "pydotplus"
version = "2.0.2"
description = "Python interface to Graphviz's Dot language"
category = "main"
optional = true
python-versions = "*"
[package.dependencies]
pyparsing = ">=2.0.1"
[[package]]
name = "pyflakes"
version = "2.4.0"
description = "passive checker of Python programs"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pygam"
version = "0.8.0"
description = "GAM toolkit"
category = "main"
optional = true
python-versions = "*"
[package.dependencies]
future = "*"
numpy = "*"
progressbar2 = "*"
scipy = "*"
[[package]]
name = "pygments"
version = "2.13.0"
description = "Pygments is a syntax highlighting package written in Python."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
plugins = ["importlib-metadata"]
[[package]]
name = "pygraphviz"
version = "1.9"
description = "Python interface to Graphviz"
category = "main"
optional = false
python-versions = ">=3.8"
[[package]]
name = "pyparsing"
version = "3.0.9"
description = "pyparsing module - Classes and methods to define and execute parsing grammars"
category = "main"
optional = false
python-versions = ">=3.6.8"
[package.extras]
diagrams = ["railroad-diagrams", "jinja2"]
[[package]]
name = "pyro-api"
version = "0.1.2"
description = "Generic API for dispatch to Pyro backends."
category = "main"
optional = true
python-versions = "*"
[package.extras]
test = ["pytest (>=5.0)", "flake8"]
dev = ["ipython", "sphinx-rtd-theme", "sphinx (>=2.0)"]
[[package]]
name = "pyro-ppl"
version = "1.8.1"
description = "A Python library for probabilistic modeling and inference"
category = "main"
optional = true
python-versions = ">=3.7"
[package.dependencies]
numpy = ">=1.7"
opt-einsum = ">=2.3.2"
pyro-api = ">=0.1.1"
torch = ">=1.11.0"
tqdm = ">=4.36"
[package.extras]
dev = ["jupyter (>=1.0.0)", "graphviz (>=0.8)", "matplotlib (>=1.3)", "torchvision (>=0.12.0)", "visdom (>=0.1.4)", "pandas", "pillow (==8.2.0)", "scikit-learn", "seaborn", "wget", "lap", "black (>=21.4b0)", "flake8", "isort (>=5.0)", "mypy (>=0.812)", "nbformat", "nbsphinx (>=0.3.2)", "nbstripout", "nbval", "ninja", "pypandoc", "pytest (>=5.0)", "pytest-xdist", "scipy (>=1.1)", "sphinx", "sphinx-rtd-theme", "yapf"]
extras = ["jupyter (>=1.0.0)", "graphviz (>=0.8)", "matplotlib (>=1.3)", "torchvision (>=0.12.0)", "visdom (>=0.1.4)", "pandas", "pillow (==8.2.0)", "scikit-learn", "seaborn", "wget", "lap"]
funsor = ["funsor[torch] (==0.4.3)"]
horovod = ["horovod[pytorch] (>=0.19)"]
profile = ["prettytable", "pytest-benchmark", "snakeviz"]
test = ["jupyter (>=1.0.0)", "graphviz (>=0.8)", "matplotlib (>=1.3)", "torchvision (>=0.12.0)", "visdom (>=0.1.4)", "pandas", "pillow (==8.2.0)", "scikit-learn", "seaborn", "wget", "lap", "black (>=21.4b0)", "flake8", "nbval", "pytest (>=5.0)", "pytest-cov", "scipy (>=1.1)"]
[[package]]
name = "pyrsistent"
version = "0.18.1"
description = "Persistent/Functional/Immutable data structures"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "pytest"
version = "7.1.2"
description = "pytest: simple powerful testing with Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
atomicwrites = {version = ">=1.0", markers = "sys_platform == \"win32\""}
attrs = ">=19.2.0"
colorama = {version = "*", markers = "sys_platform == \"win32\""}
iniconfig = "*"
packaging = "*"
pluggy = ">=0.12,<2.0"
py = ">=1.8.2"
tomli = ">=1.0.0"
[package.extras]
testing = ["argcomplete", "hypothesis (>=3.56)", "mock", "nose", "pygments (>=2.7.2)", "requests", "xmlschema"]
[[package]]
name = "python-dateutil"
version = "2.8.2"
description = "Extensions to the standard Python datetime module"
category = "main"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7"
[package.dependencies]
six = ">=1.5"
[[package]]
name = "python-utils"
version = "3.3.3"
description = "Python Utils is a module with some convenient utilities not included with the standard Python install"
category = "main"
optional = true
python-versions = ">3.6.0"
[package.extras]
docs = ["mock", "sphinx", "python-utils"]
loguru = ["loguru"]
tests = ["flake8", "pytest", "pytest-cov", "pytest-mypy", "pytest-asyncio", "sphinx", "types-setuptools", "loguru"]
[[package]]
name = "pytz"
version = "2022.2.1"
description = "World timezone definitions, modern and historical"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "pytz-deprecation-shim"
version = "0.1.0.post0"
description = "Shims to make deprecation of pytz easier"
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.7"
[package.dependencies]
"backports.zoneinfo" = {version = "*", markers = "python_version >= \"3.6\" and python_version < \"3.9\""}
tzdata = {version = "*", markers = "python_version >= \"3.6\""}
[[package]]
name = "pywin32"
version = "304"
description = "Python for Window Extensions"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pywin32-ctypes"
version = "0.2.0"
description = ""
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pywinpty"
version = "2.0.7"
description = "Pseudo terminal support for Windows from Python."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "pyzmq"
version = "23.2.1"
description = "Python bindings for 0MQ"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cffi = {version = "*", markers = "implementation_name == \"pypy\""}
py = {version = "*", markers = "implementation_name == \"pypy\""}
[[package]]
name = "qtconsole"
version = "5.3.1"
description = "Jupyter Qt console"
category = "dev"
optional = false
python-versions = ">= 3.7"
[package.dependencies]
ipykernel = ">=4.1"
ipython-genutils = "*"
jupyter-client = ">=4.1"
jupyter-core = "*"
pygments = "*"
pyzmq = ">=17.1"
qtpy = ">=2.0.1"
traitlets = "<5.2.1 || >5.2.1,<5.2.2 || >5.2.2"
[package.extras]
doc = ["Sphinx (>=1.3)"]
test = ["flaky", "pytest", "pytest-qt"]
[[package]]
name = "qtpy"
version = "2.2.0"
description = "Provides an abstraction layer on top of the various Qt bindings (PyQt5/6 and PySide2/6)."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
packaging = "*"
[package.extras]
test = ["pytest-qt", "pytest-cov (>=3.0.0)", "pytest (>=6,!=7.0.0,!=7.0.1)"]
[[package]]
name = "readme-renderer"
version = "36.0"
description = "readme_renderer is a library for rendering \"readme\" descriptions for Warehouse"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
bleach = ">=2.1.0"
docutils = ">=0.13.1"
Pygments = ">=2.5.1"
[package.extras]
md = ["cmarkgfm (>=0.8.0)"]
[[package]]
name = "requests"
version = "2.28.1"
description = "Python HTTP for Humans."
category = "dev"
optional = false
python-versions = ">=3.7, <4"
[package.dependencies]
certifi = ">=2017.4.17"
charset-normalizer = ">=2,<3"
idna = ">=2.5,<4"
urllib3 = ">=1.21.1,<1.27"
[package.extras]
socks = ["PySocks (>=1.5.6,!=1.5.7)"]
use_chardet_on_py3 = ["chardet (>=3.0.2,<6)"]
[[package]]
name = "requests-oauthlib"
version = "1.3.1"
description = "OAuthlib authentication support for Requests."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.dependencies]
oauthlib = ">=3.0.0"
requests = ">=2.0.0"
[package.extras]
rsa = ["oauthlib[signedtoken] (>=3.0.0)"]
[[package]]
name = "requests-toolbelt"
version = "0.9.1"
description = "A utility belt for advanced users of python-requests"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
requests = ">=2.0.1,<3.0.0"
[[package]]
name = "rfc3986"
version = "2.0.0"
description = "Validating URI References per RFC 3986"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
idna2008 = ["idna"]
[[package]]
name = "rich"
version = "12.5.1"
description = "Render rich text, tables, progress bars, syntax highlighting, markdown and more to the terminal"
category = "dev"
optional = false
python-versions = ">=3.6.3,<4.0.0"
[package.dependencies]
commonmark = ">=0.9.0,<0.10.0"
pygments = ">=2.6.0,<3.0.0"
typing-extensions = {version = ">=4.0.0,<5.0", markers = "python_version < \"3.9\""}
[package.extras]
jupyter = ["ipywidgets (>=7.5.1,<8.0.0)"]
[[package]]
name = "rpy2"
version = "3.5.3"
description = "Python interface to the R language (embedded R)"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
cffi = ">=1.10.0"
jinja2 = "*"
packaging = {version = "*", markers = "platform_system == \"Windows\""}
pytz = "*"
tzlocal = "*"
[package.extras]
all = ["numpy", "pytest", "setuptools", "pandas"]
numpy = ["pandas"]
pandas = ["numpy", "pandas"]
setup = ["setuptools"]
test = ["pytest"]
[[package]]
name = "rsa"
version = "4.9"
description = "Pure-Python RSA implementation"
category = "dev"
optional = false
python-versions = ">=3.6,<4"
[package.dependencies]
pyasn1 = ">=0.1.3"
[[package]]
name = "scikit-learn"
version = "1.0.2"
description = "A set of python modules for machine learning and data mining"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
joblib = ">=0.11"
numpy = ">=1.14.6"
scipy = ">=1.1.0"
threadpoolctl = ">=2.0.0"
[package.extras]
tests = ["pyamg (>=4.0.0)", "mypy (>=0.770)", "black (>=21.6b0)", "flake8 (>=3.8.2)", "pytest-cov (>=2.9.0)", "pytest (>=5.0.1)", "pandas (>=0.25.0)", "scikit-image (>=0.14.5)", "matplotlib (>=2.2.3)"]
examples = ["seaborn (>=0.9.0)", "pandas (>=0.25.0)", "scikit-image (>=0.14.5)", "matplotlib (>=2.2.3)"]
docs = ["sphinxext-opengraph (>=0.4.2)", "sphinx-prompt (>=1.3.0)", "Pillow (>=7.1.2)", "numpydoc (>=1.0.0)", "sphinx-gallery (>=0.7.0)", "sphinx (>=4.0.1)", "memory-profiler (>=0.57.0)", "seaborn (>=0.9.0)", "pandas (>=0.25.0)", "scikit-image (>=0.14.5)", "matplotlib (>=2.2.3)"]
benchmark = ["memory-profiler (>=0.57.0)", "pandas (>=0.25.0)", "matplotlib (>=2.2.3)"]
[[package]]
name = "scipy"
version = "1.9.0"
description = "SciPy: Scientific Library for Python"
category = "main"
optional = false
python-versions = ">=3.8,<3.12"
[package.dependencies]
numpy = ">=1.18.5,<1.25.0"
[[package]]
name = "seaborn"
version = "0.11.2"
description = "seaborn: statistical data visualization"
category = "main"
optional = true
python-versions = ">=3.6"
[package.dependencies]
matplotlib = ">=2.2"
numpy = ">=1.15"
pandas = ">=0.23"
scipy = ">=1.0"
[[package]]
name = "secretstorage"
version = "3.3.3"
description = "Python bindings to FreeDesktop.org Secret Service API"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cryptography = ">=2.0"
jeepney = ">=0.6"
[[package]]
name = "send2trash"
version = "1.8.0"
description = "Send file to trash natively under Mac OS X, Windows and Linux."
category = "dev"
optional = false
python-versions = "*"
[package.extras]
win32 = ["pywin32"]
objc = ["pyobjc-framework-cocoa"]
nativelib = ["pywin32", "pyobjc-framework-cocoa"]
[[package]]
name = "setuptools-scm"
version = "6.4.2"
description = "the blessed package to manage your versions by scm tags"
category = "main"
optional = true
python-versions = ">=3.6"
[package.dependencies]
packaging = ">=20.0"
tomli = ">=1.0.0"
[package.extras]
toml = ["setuptools (>=42)"]
test = ["virtualenv (>20)", "pytest (>=6.2)"]
[[package]]
name = "shap"
version = "0.39.0"
description = "A unified approach to explain the output of any machine learning model."
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
cloudpickle = "*"
numba = "*"
numpy = "*"
pandas = "*"
scikit-learn = "*"
scipy = "*"
slicer = "0.0.7"
tqdm = ">4.25.0"
[package.extras]
all = ["pyod", "pytest", "sphinx-rtd-theme", "sentencepiece", "transformers", "lightgbm", "torch", "numpydoc", "nbsphinx", "opencv-python", "ipython", "pytest-mpl", "matplotlib", "xgboost", "catboost", "sphinx", "pytest-cov", "pyspark", "lime"]
docs = ["matplotlib", "ipython", "numpydoc", "sphinx-rtd-theme", "sphinx", "nbsphinx"]
others = ["lime"]
plots = ["matplotlib", "ipython"]
test = ["pytest", "pytest-mpl", "pytest-cov", "xgboost", "lightgbm", "catboost", "pyspark", "pyod", "transformers", "torch", "sentencepiece", "opencv-python"]
[[package]]
name = "six"
version = "1.16.0"
description = "Python 2 and 3 compatibility utilities"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "slicer"
version = "0.0.7"
description = "A small package for big slicing."
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "snowballstemmer"
version = "2.2.0"
description = "This package provides 29 stemmers for 28 languages generated from Snowball algorithms."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "soupsieve"
version = "2.3.2.post1"
description = "A modern CSS selector implementation for Beautiful Soup."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "sparse"
version = "0.13.0"
description = "Sparse n-dimensional arrays"
category = "main"
optional = false
python-versions = ">=3.6, <4"
[package.dependencies]
numba = ">=0.49"
numpy = ">=1.17"
scipy = ">=0.19"
[package.extras]
all = ["dask", "pytest (>=3.5)", "pytest-black", "pytest-cov", "tox", "sphinx", "sphinx-rtd-theme"]
docs = ["sphinx", "sphinx-rtd-theme"]
tests = ["dask", "pytest (>=3.5)", "pytest-black", "pytest-cov"]
tox = ["dask", "pytest (>=3.5)", "pytest-black", "pytest-cov", "tox"]
[[package]]
name = "sphinx"
version = "5.1.1"
description = "Python documentation generator"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
alabaster = ">=0.7,<0.8"
babel = ">=1.3"
colorama = {version = ">=0.3.5", markers = "sys_platform == \"win32\""}
docutils = ">=0.14,<0.20"
imagesize = "*"
importlib-metadata = {version = ">=4.4", markers = "python_version < \"3.10\""}
Jinja2 = ">=2.3"
packaging = "*"
Pygments = ">=2.0"
requests = ">=2.5.0"
snowballstemmer = ">=1.1"
sphinxcontrib-applehelp = "*"
sphinxcontrib-devhelp = "*"
sphinxcontrib-htmlhelp = ">=2.0.0"
sphinxcontrib-jsmath = "*"
sphinxcontrib-qthelp = "*"
sphinxcontrib-serializinghtml = ">=1.1.5"
[package.extras]
docs = ["sphinxcontrib-websupport"]
lint = ["flake8 (>=3.5.0)", "flake8-comprehensions", "flake8-bugbear", "isort", "mypy (>=0.971)", "sphinx-lint", "docutils-stubs", "types-typed-ast", "types-requests"]
test = ["pytest (>=4.6)", "html5lib", "cython", "typed-ast"]
[[package]]
name = "sphinx-multiversion"
version = "0.2.4"
description = ""
category = "dev"
optional = false
python-versions = "*"
develop = false
[package.dependencies]
sphinx = ">=2.1"
[package.source]
type = "git"
url = "https://github.com/petergtz/sphinx-multiversion.git"
reference = "override-version-with-refname"
resolved_reference = "cd3a67e7998c5205e2456e7cb1026bd3b9f36253"
[[package]]
name = "sphinx-rtd-theme"
version = "1.0.0"
description = "Read the Docs theme for Sphinx"
category = "dev"
optional = false
python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*"
[package.dependencies]
docutils = "<0.18"
sphinx = ">=1.6"
[package.extras]
dev = ["bump2version", "sphinxcontrib-httpdomain", "transifex-client"]
[[package]]
name = "sphinxcontrib-applehelp"
version = "1.0.2"
description = "sphinxcontrib-applehelp is a sphinx extension which outputs Apple help books"
category = "dev"
optional = false
python-versions = ">=3.5"
[package.extras]
test = ["pytest"]
lint = ["docutils-stubs", "mypy", "flake8"]
[[package]]
name = "sphinxcontrib-devhelp"
version = "1.0.2"
description = "sphinxcontrib-devhelp is a sphinx extension which outputs Devhelp document."
category = "dev"
optional = false
python-versions = ">=3.5"
[package.extras]
test = ["pytest"]
lint = ["docutils-stubs", "mypy", "flake8"]
[[package]]
name = "sphinxcontrib-googleanalytics"
version = "0.2"
description = ""
category = "dev"
optional = false
python-versions = "*"
develop = false
[package.dependencies]
Sphinx = ">=0.6"
[package.source]
type = "git"
url = "https://github.com/petergtz/googleanalytics.git"
reference = "master"
resolved_reference = "f32bcb8c0fb74bf284652b9a5c154337e20058b9"
[[package]]
name = "sphinxcontrib-htmlhelp"
version = "2.0.0"
description = "sphinxcontrib-htmlhelp is a sphinx extension which renders HTML help files"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
test = ["html5lib", "pytest"]
lint = ["docutils-stubs", "mypy", "flake8"]
[[package]]
name = "sphinxcontrib-jsmath"
version = "1.0.1"
description = "A sphinx extension which renders display math in HTML via JavaScript"
category = "dev"
optional = false
python-versions = ">=3.5"
[package.extras]
test = ["mypy", "flake8", "pytest"]
[[package]]
name = "sphinxcontrib-qthelp"
version = "1.0.3"
description = "sphinxcontrib-qthelp is a sphinx extension which outputs QtHelp document."
category = "dev"
optional = false
python-versions = ">=3.5"
[package.extras]
test = ["pytest"]
lint = ["docutils-stubs", "mypy", "flake8"]
[[package]]
name = "sphinxcontrib-serializinghtml"
version = "1.1.5"
description = "sphinxcontrib-serializinghtml is a sphinx extension which outputs \"serialized\" HTML files (json and pickle)."
category = "dev"
optional = false
python-versions = ">=3.5"
[package.extras]
test = ["pytest"]
lint = ["docutils-stubs", "mypy", "flake8"]
[[package]]
name = "stack-data"
version = "0.4.0"
description = "Extract data from python stack frames and tracebacks for informative displays"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
asttokens = "*"
executing = "*"
pure-eval = "*"
[package.extras]
tests = ["cython", "littleutils", "pygments", "typeguard", "pytest"]
[[package]]
name = "statsmodels"
version = "0.13.2"
description = "Statistical computations and models for Python"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
numpy = ">=1.17"
packaging = ">=21.3"
pandas = ">=0.25"
patsy = ">=0.5.2"
scipy = ">=1.3"
[package.extras]
build = ["cython (>=0.29.26)"]
develop = ["cython (>=0.29.26)"]
docs = ["sphinx", "nbconvert", "jupyter-client", "ipykernel", "matplotlib", "nbformat", "numpydoc", "pandas-datareader"]
[[package]]
name = "sympy"
version = "1.10.1"
description = "Computer algebra system (CAS) in Python"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
mpmath = ">=0.19"
[[package]]
name = "tensorboard"
version = "2.9.1"
description = "TensorBoard lets you watch Tensors Flow"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
absl-py = ">=0.4"
google-auth = ">=1.6.3,<3"
google-auth-oauthlib = ">=0.4.1,<0.5"
grpcio = ">=1.24.3"
markdown = ">=2.6.8"
numpy = ">=1.12.0"
protobuf = ">=3.9.2,<3.20"
requests = ">=2.21.0,<3"
tensorboard-data-server = ">=0.6.0,<0.7.0"
tensorboard-plugin-wit = ">=1.6.0"
werkzeug = ">=1.0.1"
[[package]]
name = "tensorboard-data-server"
version = "0.6.1"
description = "Fast data loading for TensorBoard"
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "tensorboard-plugin-wit"
version = "1.8.1"
description = "What-If Tool TensorBoard plugin."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "tensorflow"
version = "2.9.1"
description = "TensorFlow is an open source machine learning framework for everyone."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
absl-py = ">=1.0.0"
astunparse = ">=1.6.0"
flatbuffers = ">=1.12,<2"
gast = ">=0.2.1,<=0.4.0"
google-pasta = ">=0.1.1"
grpcio = ">=1.24.3,<2.0"
h5py = ">=2.9.0"
keras = ">=2.9.0rc0,<2.10.0"
keras-preprocessing = ">=1.1.1"
libclang = ">=13.0.0"
numpy = ">=1.20"
opt-einsum = ">=2.3.2"
packaging = "*"
protobuf = ">=3.9.2,<3.20"
six = ">=1.12.0"
tensorboard = ">=2.9,<2.10"
tensorflow-estimator = ">=2.9.0rc0,<2.10.0"
tensorflow-io-gcs-filesystem = ">=0.23.1"
termcolor = ">=1.1.0"
typing-extensions = ">=3.6.6"
wrapt = ">=1.11.0"
[[package]]
name = "tensorflow-estimator"
version = "2.9.0"
description = "TensorFlow Estimator."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "tensorflow-io-gcs-filesystem"
version = "0.26.0"
description = "TensorFlow IO"
category = "dev"
optional = false
python-versions = ">=3.7, <3.11"
[package.extras]
tensorflow = ["tensorflow (>=2.9.0,<2.10.0)"]
tensorflow-aarch64 = ["tensorflow-aarch64 (>=2.9.0,<2.10.0)"]
tensorflow-cpu = ["tensorflow-cpu (>=2.9.0,<2.10.0)"]
tensorflow-gpu = ["tensorflow-gpu (>=2.9.0,<2.10.0)"]
tensorflow-rocm = ["tensorflow-rocm (>=2.9.0,<2.10.0)"]
[[package]]
name = "termcolor"
version = "1.1.0"
description = "ANSII Color formatting for output in terminal."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "terminado"
version = "0.15.0"
description = "Tornado websocket backend for the Xterm.js Javascript terminal emulator library."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
ptyprocess = {version = "*", markers = "os_name != \"nt\""}
pywinpty = {version = ">=1.1.0", markers = "os_name == \"nt\""}
tornado = ">=6.1.0"
[package.extras]
test = ["pre-commit", "pytest-timeout", "pytest (>=6.0)"]
[[package]]
name = "threadpoolctl"
version = "3.1.0"
description = "threadpoolctl"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "tinycss2"
version = "1.1.1"
description = "A tiny CSS parser"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
webencodings = ">=0.4"
[package.extras]
test = ["coverage", "pytest-isort", "pytest-flake8", "pytest-cov", "pytest"]
doc = ["sphinx-rtd-theme", "sphinx"]
[[package]]
name = "tomli"
version = "2.0.1"
description = "A lil' TOML parser"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "torch"
version = "1.12.1"
description = "Tensors and Dynamic neural networks in Python with strong GPU acceleration"
category = "main"
optional = true
python-versions = ">=3.7.0"
[package.dependencies]
typing-extensions = "*"
[[package]]
name = "tornado"
version = "6.2"
description = "Tornado is a Python web framework and asynchronous networking library, originally developed at FriendFeed."
category = "dev"
optional = false
python-versions = ">= 3.7"
[[package]]
name = "tqdm"
version = "4.64.0"
description = "Fast, Extensible Progress Meter"
category = "main"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,>=2.7"
[package.dependencies]
colorama = {version = "*", markers = "platform_system == \"Windows\""}
[package.extras]
dev = ["py-make (>=0.1.0)", "twine", "wheel"]
notebook = ["ipywidgets (>=6)"]
slack = ["slack-sdk"]
telegram = ["requests"]
[[package]]
name = "traitlets"
version = "5.3.0"
description = ""
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
test = ["pre-commit", "pytest"]
[[package]]
name = "twine"
version = "4.0.1"
description = "Collection of utilities for publishing packages on PyPI"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
importlib-metadata = ">=3.6"
keyring = ">=15.1"
pkginfo = ">=1.8.1"
readme-renderer = ">=35.0"
requests = ">=2.20"
requests-toolbelt = ">=0.8.0,<0.9.0 || >0.9.0"
rfc3986 = ">=1.4.0"
rich = ">=12.0.0"
urllib3 = ">=1.26.0"
[[package]]
name = "typing-extensions"
version = "4.3.0"
description = "Backported and Experimental Type Hints for Python 3.7+"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "tzdata"
version = "2022.2"
description = "Provider of IANA time zone data"
category = "dev"
optional = false
python-versions = ">=2"
[[package]]
name = "tzlocal"
version = "4.2"
description = "tzinfo object for the local timezone"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
"backports.zoneinfo" = {version = "*", markers = "python_version < \"3.9\""}
pytz-deprecation-shim = "*"
tzdata = {version = "*", markers = "platform_system == \"Windows\""}
[package.extras]
devenv = ["black", "pyroma", "pytest-cov", "zest.releaser"]
test = ["pytest-mock (>=3.3)", "pytest (>=4.3)"]
[[package]]
name = "urllib3"
version = "1.26.11"
description = "HTTP library with thread-safe connection pooling, file post, and more."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*, <4"
[package.extras]
brotli = ["brotlicffi (>=0.8.0)", "brotli (>=1.0.9)", "brotlipy (>=0.6.0)"]
secure = ["pyOpenSSL (>=0.14)", "cryptography (>=1.3.4)", "idna (>=2.0.0)", "certifi", "ipaddress"]
socks = ["PySocks (>=1.5.6,!=1.5.7,<2.0)"]
[[package]]
name = "wcwidth"
version = "0.2.5"
description = "Measures the displayed width of unicode strings in a terminal"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "webencodings"
version = "0.5.1"
description = "Character encoding aliases for legacy web content"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "werkzeug"
version = "2.2.2"
description = "The comprehensive WSGI web application library."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
MarkupSafe = ">=2.1.1"
[package.extras]
watchdog = ["watchdog"]
[[package]]
name = "widgetsnbextension"
version = "4.0.2"
description = "Jupyter interactive widgets for Jupyter Notebook"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "wrapt"
version = "1.14.1"
description = "Module for decorators, wrappers and monkey patching."
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7"
[[package]]
name = "xgboost"
version = "1.6.1"
description = "XGBoost Python Package"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
numpy = "*"
scipy = "*"
[package.extras]
dask = ["dask", "pandas", "distributed"]
datatable = ["datatable"]
pandas = ["pandas"]
plotting = ["graphviz", "matplotlib"]
scikit-learn = ["scikit-learn"]
[[package]]
name = "zipp"
version = "3.8.1"
description = "Backport of pathlib-compatible object wrapper for zip files"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
docs = ["sphinx", "jaraco.packaging (>=9)", "rst.linker (>=1.9)", "jaraco.tidelift (>=1.4)"]
testing = ["pytest (>=6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest-cov", "pytest-enabler (>=1.3)", "jaraco.itertools", "func-timeout", "pytest-black (>=0.3.7)", "pytest-mypy (>=0.9.1)"]
[extras]
causalml = ["causalml", "llvmlite"]
plotting = ["matplotlib"]
[metadata]
lock-version = "1.1"
python-versions = ">=3.8,<3.10"
content-hash = "6ad71879b47478d1d68f2b9c421c2ef8ffb6c027a78df9eebca593b26f5e1046"
[metadata.files]
absl-py = [
{file = "absl-py-1.2.0.tar.gz", hash = "sha256:f568809938c49abbda89826223c992b630afd23c638160ad7840cfe347710d97"},
{file = "absl_py-1.2.0-py3-none-any.whl", hash = "sha256:5d15f85b8cc859c6245bc9886ba664460ed96a6fee895416caa37d669ee74a9a"},
]
alabaster = [
{file = "alabaster-0.7.12-py2.py3-none-any.whl", hash = "sha256:446438bdcca0e05bd45ea2de1668c1d9b032e1a9154c2c259092d77031ddd359"},
{file = "alabaster-0.7.12.tar.gz", hash = "sha256:a661d72d58e6ea8a57f7a86e37d86716863ee5e92788398526d58b26a4e4dc02"},
]
appnope = [
{file = "appnope-0.1.3-py2.py3-none-any.whl", hash = "sha256:265a455292d0bd8a72453494fa24df5a11eb18373a60c7c0430889f22548605e"},
{file = "appnope-0.1.3.tar.gz", hash = "sha256:02bd91c4de869fbb1e1c50aafc4098827a7a54ab2f39d9dcba6c9547ed920e24"},
]
argon2-cffi = [
{file = "argon2-cffi-21.3.0.tar.gz", hash = "sha256:d384164d944190a7dd7ef22c6aa3ff197da12962bd04b17f64d4e93d934dba5b"},
{file = "argon2_cffi-21.3.0-py3-none-any.whl", hash = "sha256:8c976986f2c5c0e5000919e6de187906cfd81fb1c72bf9d88c01177e77da7f80"},
]
argon2-cffi-bindings = [
{file = "argon2-cffi-bindings-21.2.0.tar.gz", hash = "sha256:bb89ceffa6c791807d1305ceb77dbfacc5aa499891d2c55661c6459651fc39e3"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-macosx_10_9_x86_64.whl", hash = "sha256:ccb949252cb2ab3a08c02024acb77cfb179492d5701c7cbdbfd776124d4d2367"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9524464572e12979364b7d600abf96181d3541da11e23ddf565a32e70bd4dc0d"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b746dba803a79238e925d9046a63aa26bf86ab2a2fe74ce6b009a1c3f5c8f2ae"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:58ed19212051f49a523abb1dbe954337dc82d947fb6e5a0da60f7c8471a8476c"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:bd46088725ef7f58b5a1ef7ca06647ebaf0eb4baff7d1d0d177c6cc8744abd86"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_i686.whl", hash = "sha256:8cd69c07dd875537a824deec19f978e0f2078fdda07fd5c42ac29668dda5f40f"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:f1152ac548bd5b8bcecfb0b0371f082037e47128653df2e8ba6e914d384f3c3e"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-win32.whl", hash = "sha256:603ca0aba86b1349b147cab91ae970c63118a0f30444d4bc80355937c950c082"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-win_amd64.whl", hash = "sha256:b2ef1c30440dbbcba7a5dc3e319408b59676e2e039e2ae11a8775ecf482b192f"},
{file = "argon2_cffi_bindings-21.2.0-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:e415e3f62c8d124ee16018e491a009937f8cf7ebf5eb430ffc5de21b900dad93"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:3e385d1c39c520c08b53d63300c3ecc28622f076f4c2b0e6d7e796e9f6502194"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2c3e3cc67fdb7d82c4718f19b4e7a87123caf8a93fde7e23cf66ac0337d3cb3f"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6a22ad9800121b71099d0fb0a65323810a15f2e292f2ba450810a7316e128ee5"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f9f8b450ed0547e3d473fdc8612083fd08dd2120d6ac8f73828df9b7d45bb351"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:93f9bf70084f97245ba10ee36575f0c3f1e7d7724d67d8e5b08e61787c320ed7"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:3b9ef65804859d335dc6b31582cad2c5166f0c3e7975f324d9ffaa34ee7e6583"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d4966ef5848d820776f5f562a7d45fdd70c2f330c961d0d745b784034bd9f48d"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:20ef543a89dee4db46a1a6e206cd015360e5a75822f76df533845c3cbaf72670"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ed2937d286e2ad0cc79a7087d3c272832865f779430e0cc2b4f3718d3159b0cb"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:5e00316dabdaea0b2dd82d141cc66889ced0cdcbfa599e8b471cf22c620c329a"},
]
asttokens = [
{file = "asttokens-2.0.8-py2.py3-none-any.whl", hash = "sha256:e3305297c744ae53ffa032c45dc347286165e4ffce6875dc662b205db0623d86"},
{file = "asttokens-2.0.8.tar.gz", hash = "sha256:c61e16246ecfb2cde2958406b4c8ebc043c9e6d73aaa83c941673b35e5d3a76b"},
]
astunparse = [
{file = "astunparse-1.6.3-py2.py3-none-any.whl", hash = "sha256:c2652417f2c8b5bb325c885ae329bdf3f86424075c4fd1a128674bc6fba4b8e8"},
{file = "astunparse-1.6.3.tar.gz", hash = "sha256:5ad93a8456f0d084c3456d059fd9a92cce667963232cbf763eac3bc5b7940872"},
]
atomicwrites = [
{file = "atomicwrites-1.4.1.tar.gz", hash = "sha256:81b2c9071a49367a7f770170e5eec8cb66567cfbbc8c73d20ce5ca4a8d71cf11"},
]
attrs = [
{file = "attrs-22.1.0-py2.py3-none-any.whl", hash = "sha256:86efa402f67bf2df34f51a335487cf46b1ec130d02b8d39fd248abfd30da551c"},
{file = "attrs-22.1.0.tar.gz", hash = "sha256:29adc2665447e5191d0e7c568fde78b21f9672d344281d0c6e1ab085429b22b6"},
]
babel = [
{file = "Babel-2.10.3-py3-none-any.whl", hash = "sha256:ff56f4892c1c4bf0d814575ea23471c230d544203c7748e8c68f0089478d48eb"},
{file = "Babel-2.10.3.tar.gz", hash = "sha256:7614553711ee97490f732126dc077f8d0ae084ebc6a96e23db1482afabdb2c51"},
]
backcall = [
{file = "backcall-0.2.0-py2.py3-none-any.whl", hash = "sha256:fbbce6a29f263178a1f7915c1940bde0ec2b2a967566fe1c65c1dfb7422bd255"},
{file = "backcall-0.2.0.tar.gz", hash = "sha256:5cbdbf27be5e7cfadb448baf0aa95508f91f2bbc6c6437cd9cd06e2a4c215e1e"},
]
"backports.zoneinfo" = [
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-macosx_10_14_x86_64.whl", hash = "sha256:da6013fd84a690242c310d77ddb8441a559e9cb3d3d59ebac9aca1a57b2e18bc"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:89a48c0d158a3cc3f654da4c2de1ceba85263fafb861b98b59040a5086259722"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:1c5742112073a563c81f786e77514969acb58649bcdf6cdf0b4ed31a348d4546"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-win32.whl", hash = "sha256:e8236383a20872c0cdf5a62b554b27538db7fa1bbec52429d8d106effbaeca08"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-win_amd64.whl", hash = "sha256:8439c030a11780786a2002261569bdf362264f605dfa4d65090b64b05c9f79a7"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:f04e857b59d9d1ccc39ce2da1021d196e47234873820cbeaad210724b1ee28ac"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:17746bd546106fa389c51dbea67c8b7c8f0d14b5526a579ca6ccf5ed72c526cf"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:5c144945a7752ca544b4b78c8c41544cdfaf9786f25fe5ffb10e838e19a27570"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-win32.whl", hash = "sha256:e55b384612d93be96506932a786bbcde5a2db7a9e6a4bb4bffe8b733f5b9036b"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-win_amd64.whl", hash = "sha256:a76b38c52400b762e48131494ba26be363491ac4f9a04c1b7e92483d169f6582"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:8961c0f32cd0336fb8e8ead11a1f8cd99ec07145ec2931122faaac1c8f7fd987"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-manylinux1_i686.whl", hash = "sha256:e81b76cace8eda1fca50e345242ba977f9be6ae3945af8d46326d776b4cf78d1"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:7b0a64cda4145548fed9efc10322770f929b944ce5cee6c0dfe0c87bf4c0c8c9"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-win32.whl", hash = "sha256:1b13e654a55cd45672cb54ed12148cd33628f672548f373963b0bff67b217328"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-win_amd64.whl", hash = "sha256:4a0f800587060bf8880f954dbef70de6c11bbe59c673c3d818921f042f9954a6"},
{file = "backports.zoneinfo-0.2.1.tar.gz", hash = "sha256:fadbfe37f74051d024037f223b8e001611eac868b5c5b06144ef4d8b799862f2"},
]
beautifulsoup4 = [
{file = "beautifulsoup4-4.11.1-py3-none-any.whl", hash = "sha256:58d5c3d29f5a36ffeb94f02f0d786cd53014cf9b3b3951d42e0080d8a9498d30"},
{file = "beautifulsoup4-4.11.1.tar.gz", hash = "sha256:ad9aa55b65ef2808eb405f46cf74df7fcb7044d5cbc26487f96eb2ef2e436693"},
]
black = [
{file = "black-22.6.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:f586c26118bc6e714ec58c09df0157fe2d9ee195c764f630eb0d8e7ccce72e69"},
{file = "black-22.6.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:b270a168d69edb8b7ed32c193ef10fd27844e5c60852039599f9184460ce0807"},
{file = "black-22.6.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:6797f58943fceb1c461fb572edbe828d811e719c24e03375fd25170ada53825e"},
{file = "black-22.6.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c85928b9d5f83b23cee7d0efcb310172412fbf7cb9d9ce963bd67fd141781def"},
{file = "black-22.6.0-cp310-cp310-win_amd64.whl", hash = "sha256:f6fe02afde060bbeef044af7996f335fbe90b039ccf3f5eb8f16df8b20f77666"},
{file = "black-22.6.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:cfaf3895a9634e882bf9d2363fed5af8888802d670f58b279b0bece00e9a872d"},
{file = "black-22.6.0-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:94783f636bca89f11eb5d50437e8e17fbc6a929a628d82304c80fa9cd945f256"},
{file = "black-22.6.0-cp36-cp36m-win_amd64.whl", hash = "sha256:2ea29072e954a4d55a2ff58971b83365eba5d3d357352a07a7a4df0d95f51c78"},
{file = "black-22.6.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:e439798f819d49ba1c0bd9664427a05aab79bfba777a6db94fd4e56fae0cb849"},
{file = "black-22.6.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:187d96c5e713f441a5829e77120c269b6514418f4513a390b0499b0987f2ff1c"},
{file = "black-22.6.0-cp37-cp37m-win_amd64.whl", hash = "sha256:074458dc2f6e0d3dab7928d4417bb6957bb834434516f21514138437accdbe90"},
{file = "black-22.6.0-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:a218d7e5856f91d20f04e931b6f16d15356db1c846ee55f01bac297a705ca24f"},
{file = "black-22.6.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:568ac3c465b1c8b34b61cd7a4e349e93f91abf0f9371eda1cf87194663ab684e"},
{file = "black-22.6.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:6c1734ab264b8f7929cef8ae5f900b85d579e6cbfde09d7387da8f04771b51c6"},
{file = "black-22.6.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c9a3ac16efe9ec7d7381ddebcc022119794872abce99475345c5a61aa18c45ad"},
{file = "black-22.6.0-cp38-cp38-win_amd64.whl", hash = "sha256:b9fd45787ba8aa3f5e0a0a98920c1012c884622c6c920dbe98dbd05bc7c70fbf"},
{file = "black-22.6.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:7ba9be198ecca5031cd78745780d65a3f75a34b2ff9be5837045dce55db83d1c"},
{file = "black-22.6.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:a3db5b6409b96d9bd543323b23ef32a1a2b06416d525d27e0f67e74f1446c8f2"},
{file = "black-22.6.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:560558527e52ce8afba936fcce93a7411ab40c7d5fe8c2463e279e843c0328ee"},
{file = "black-22.6.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b154e6bbde1e79ea3260c4b40c0b7b3109ffcdf7bc4ebf8859169a6af72cd70b"},
{file = "black-22.6.0-cp39-cp39-win_amd64.whl", hash = "sha256:4af5bc0e1f96be5ae9bd7aaec219c901a94d6caa2484c21983d043371c733fc4"},
{file = "black-22.6.0-py3-none-any.whl", hash = "sha256:ac609cf8ef5e7115ddd07d85d988d074ed00e10fbc3445aee393e70164a2219c"},
{file = "black-22.6.0.tar.gz", hash = "sha256:6c6d39e28aed379aec40da1c65434c77d75e65bb59a1e1c283de545fb4e7c6c9"},
]
bleach = [
{file = "bleach-5.0.1-py3-none-any.whl", hash = "sha256:085f7f33c15bd408dd9b17a4ad77c577db66d76203e5984b1bd59baeee948b2a"},
{file = "bleach-5.0.1.tar.gz", hash = "sha256:0d03255c47eb9bd2f26aa9bb7f2107732e7e8fe195ca2f64709fcf3b0a4a085c"},
]
cachetools = [
{file = "cachetools-5.2.0-py3-none-any.whl", hash = "sha256:f9f17d2aec496a9aa6b76f53e3b614c965223c061982d434d160f930c698a9db"},
{file = "cachetools-5.2.0.tar.gz", hash = "sha256:6a94c6402995a99c3970cc7e4884bb60b4a8639938157eeed436098bf9831757"},
]
causalml = [
{file = "causalml-0.12.3.tar.gz", hash = "sha256:4d85afa0590e4c60818311c2be9a5be64b8c6964b134c77654b738df551454fa"},
]
certifi = [
{file = "certifi-2022.6.15-py3-none-any.whl", hash = "sha256:fe86415d55e84719d75f8b69414f6438ac3547d2078ab91b67e779ef69378412"},
{file = "certifi-2022.6.15.tar.gz", hash = "sha256:84c85a9078b11105f04f3036a9482ae10e4621616db313fe045dd24743a0820d"},
]
cffi = [
{file = "cffi-1.15.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:a66d3508133af6e8548451b25058d5812812ec3798c886bf38ed24a98216fab2"},
{file = "cffi-1.15.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:470c103ae716238bbe698d67ad020e1db9d9dba34fa5a899b5e21577e6d52ed2"},
{file = "cffi-1.15.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:9ad5db27f9cabae298d151c85cf2bad1d359a1b9c686a275df03385758e2f914"},
{file = "cffi-1.15.1-cp27-cp27m-win32.whl", hash = "sha256:b3bbeb01c2b273cca1e1e0c5df57f12dce9a4dd331b4fa1635b8bec26350bde3"},
{file = "cffi-1.15.1-cp27-cp27m-win_amd64.whl", hash = "sha256:e00b098126fd45523dd056d2efba6c5a63b71ffe9f2bbe1a4fe1716e1d0c331e"},
{file = "cffi-1.15.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:d61f4695e6c866a23a21acab0509af1cdfd2c013cf256bbf5b6b5e2695827162"},
{file = "cffi-1.15.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:ed9cb427ba5504c1dc15ede7d516b84757c3e3d7868ccc85121d9310d27eed0b"},
{file = "cffi-1.15.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:39d39875251ca8f612b6f33e6b1195af86d1b3e60086068be9cc053aa4376e21"},
{file = "cffi-1.15.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:285d29981935eb726a4399badae8f0ffdff4f5050eaa6d0cfc3f64b857b77185"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3eb6971dcff08619f8d91607cfc726518b6fa2a9eba42856be181c6d0d9515fd"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:21157295583fe8943475029ed5abdcf71eb3911894724e360acff1d61c1d54bc"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5635bd9cb9731e6d4a1132a498dd34f764034a8ce60cef4f5319c0541159392f"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2012c72d854c2d03e45d06ae57f40d78e5770d252f195b93f581acf3ba44496e"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dd86c085fae2efd48ac91dd7ccffcfc0571387fe1193d33b6394db7ef31fe2a4"},
{file = "cffi-1.15.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:fa6693661a4c91757f4412306191b6dc88c1703f780c8234035eac011922bc01"},
{file = "cffi-1.15.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:59c0b02d0a6c384d453fece7566d1c7e6b7bae4fc5874ef2ef46d56776d61c9e"},
{file = "cffi-1.15.1-cp310-cp310-win32.whl", hash = "sha256:cba9d6b9a7d64d4bd46167096fc9d2f835e25d7e4c121fb2ddfc6528fb0413b2"},
{file = "cffi-1.15.1-cp310-cp310-win_amd64.whl", hash = "sha256:ce4bcc037df4fc5e3d184794f27bdaab018943698f4ca31630bc7f84a7b69c6d"},
{file = "cffi-1.15.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3d08afd128ddaa624a48cf2b859afef385b720bb4b43df214f85616922e6a5ac"},
{file = "cffi-1.15.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:3799aecf2e17cf585d977b780ce79ff0dc9b78d799fc694221ce814c2c19db83"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a591fe9e525846e4d154205572a029f653ada1a78b93697f3b5a8f1f2bc055b9"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3548db281cd7d2561c9ad9984681c95f7b0e38881201e157833a2342c30d5e8c"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:91fc98adde3d7881af9b59ed0294046f3806221863722ba7d8d120c575314325"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:94411f22c3985acaec6f83c6df553f2dbe17b698cc7f8ae751ff2237d96b9e3c"},
{file = "cffi-1.15.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:03425bdae262c76aad70202debd780501fabeaca237cdfddc008987c0e0f59ef"},
{file = "cffi-1.15.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:cc4d65aeeaa04136a12677d3dd0b1c0c94dc43abac5860ab33cceb42b801c1e8"},
{file = "cffi-1.15.1-cp311-cp311-win32.whl", hash = "sha256:a0f100c8912c114ff53e1202d0078b425bee3649ae34d7b070e9697f93c5d52d"},
{file = "cffi-1.15.1-cp311-cp311-win_amd64.whl", hash = "sha256:04ed324bda3cda42b9b695d51bb7d54b680b9719cfab04227cdd1e04e5de3104"},
{file = "cffi-1.15.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:50a74364d85fd319352182ef59c5c790484a336f6db772c1a9231f1c3ed0cbd7"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e263d77ee3dd201c3a142934a086a4450861778baaeeb45db4591ef65550b0a6"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:cec7d9412a9102bdc577382c3929b337320c4c4c4849f2c5cdd14d7368c5562d"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4289fc34b2f5316fbb762d75362931e351941fa95fa18789191b33fc4cf9504a"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:173379135477dc8cac4bc58f45db08ab45d228b3363adb7af79436135d028405"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:6975a3fac6bc83c4a65c9f9fcab9e47019a11d3d2cf7f3c0d03431bf145a941e"},
{file = "cffi-1.15.1-cp36-cp36m-win32.whl", hash = "sha256:2470043b93ff09bf8fb1d46d1cb756ce6132c54826661a32d4e4d132e1977adf"},
{file = "cffi-1.15.1-cp36-cp36m-win_amd64.whl", hash = "sha256:30d78fbc8ebf9c92c9b7823ee18eb92f2e6ef79b45ac84db507f52fbe3ec4497"},
{file = "cffi-1.15.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:198caafb44239b60e252492445da556afafc7d1e3ab7a1fb3f0584ef6d742375"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5ef34d190326c3b1f822a5b7a45f6c4535e2f47ed06fec77d3d799c450b2651e"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8102eaf27e1e448db915d08afa8b41d6c7ca7a04b7d73af6514df10a3e74bd82"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5df2768244d19ab7f60546d0c7c63ce1581f7af8b5de3eb3004b9b6fc8a9f84b"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a8c4917bd7ad33e8eb21e9a5bbba979b49d9a97acb3a803092cbc1133e20343c"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0e2642fe3142e4cc4af0799748233ad6da94c62a8bec3a6648bf8ee68b1c7426"},
{file = "cffi-1.15.1-cp37-cp37m-win32.whl", hash = "sha256:e229a521186c75c8ad9490854fd8bbdd9a0c9aa3a524326b55be83b54d4e0ad9"},
{file = "cffi-1.15.1-cp37-cp37m-win_amd64.whl", hash = "sha256:a0b71b1b8fbf2b96e41c4d990244165e2c9be83d54962a9a1d118fd8657d2045"},
{file = "cffi-1.15.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:320dab6e7cb2eacdf0e658569d2575c4dad258c0fcc794f46215e1e39f90f2c3"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1e74c6b51a9ed6589199c787bf5f9875612ca4a8a0785fb2d4a84429badaf22a"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a5c84c68147988265e60416b57fc83425a78058853509c1b0629c180094904a5"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3b926aa83d1edb5aa5b427b4053dc420ec295a08e40911296b9eb1b6170f6cca"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:87c450779d0914f2861b8526e035c5e6da0a3199d8f1add1a665e1cbc6fc6d02"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4f2c9f67e9821cad2e5f480bc8d83b8742896f1242dba247911072d4fa94c192"},
{file = "cffi-1.15.1-cp38-cp38-win32.whl", hash = "sha256:8b7ee99e510d7b66cdb6c593f21c043c248537a32e0bedf02e01e9553a172314"},
{file = "cffi-1.15.1-cp38-cp38-win_amd64.whl", hash = "sha256:00a9ed42e88df81ffae7a8ab6d9356b371399b91dbdf0c3cb1e84c03a13aceb5"},
{file = "cffi-1.15.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:54a2db7b78338edd780e7ef7f9f6c442500fb0d41a5a4ea24fff1c929d5af585"},
{file = "cffi-1.15.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:fcd131dd944808b5bdb38e6f5b53013c5aa4f334c5cad0c72742f6eba4b73db0"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7473e861101c9e72452f9bf8acb984947aa1661a7704553a9f6e4baa5ba64415"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6c9a799e985904922a4d207a94eae35c78ebae90e128f0c4e521ce339396be9d"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3bcde07039e586f91b45c88f8583ea7cf7a0770df3a1649627bf598332cb6984"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:33ab79603146aace82c2427da5ca6e58f2b3f2fb5da893ceac0c42218a40be35"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d598b938678ebf3c67377cdd45e09d431369c3b1a5b331058c338e201f12b27"},
{file = "cffi-1.15.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:db0fbb9c62743ce59a9ff687eb5f4afbe77e5e8403d6697f7446e5f609976f76"},
{file = "cffi-1.15.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:98d85c6a2bef81588d9227dde12db8a7f47f639f4a17c9ae08e773aa9c697bf3"},
{file = "cffi-1.15.1-cp39-cp39-win32.whl", hash = "sha256:40f4774f5a9d4f5e344f31a32b5096977b5d48560c5592e2f3d2c4374bd543ee"},
{file = "cffi-1.15.1-cp39-cp39-win_amd64.whl", hash = "sha256:70df4e3b545a17496c9b3f41f5115e69a4f2e77e94e1d2a8e1070bc0c38c8a3c"},
{file = "cffi-1.15.1.tar.gz", hash = "sha256:d400bfb9a37b1351253cb402671cea7e89bdecc294e8016a707f6d1d8ac934f9"},
]
charset-normalizer = [
{file = "charset-normalizer-2.1.0.tar.gz", hash = "sha256:575e708016ff3a5e3681541cb9d79312c416835686d054a23accb873b254f413"},
{file = "charset_normalizer-2.1.0-py3-none-any.whl", hash = "sha256:5189b6f22b01957427f35b6a08d9a0bc45b46d3788ef5a92e978433c7a35f8a5"},
]
click = [
{file = "click-8.1.3-py3-none-any.whl", hash = "sha256:bb4d8133cb15a609f44e8213d9b391b0809795062913b383c62be0ee95b1db48"},
{file = "click-8.1.3.tar.gz", hash = "sha256:7682dc8afb30297001674575ea00d1814d808d6a36af415a82bd481d37ba7b8e"},
]
cloudpickle = [
{file = "cloudpickle-2.1.0-py3-none-any.whl", hash = "sha256:b5c434f75c34624eedad3a14f2be5ac3b5384774d5b0e3caf905c21479e6c4b1"},
{file = "cloudpickle-2.1.0.tar.gz", hash = "sha256:bb233e876a58491d9590a676f93c7a5473a08f747d5ab9df7f9ce564b3e7938e"},
]
colorama = [
{file = "colorama-0.4.5-py2.py3-none-any.whl", hash = "sha256:854bf444933e37f5824ae7bfc1e98d5bce2ebe4160d46b5edf346a89358e99da"},
{file = "colorama-0.4.5.tar.gz", hash = "sha256:e6c6b4334fc50988a639d9b98aa429a0b57da6e17b9a44f0451f930b6967b7a4"},
]
commonmark = [
{file = "commonmark-0.9.1-py2.py3-none-any.whl", hash = "sha256:da2f38c92590f83de410ba1a3cbceafbc74fee9def35f9251ba9a971d6d66fd9"},
{file = "commonmark-0.9.1.tar.gz", hash = "sha256:452f9dc859be7f06631ddcb328b6919c67984aca654e5fefb3914d54691aed60"},
]
cryptography = [
{file = "cryptography-37.0.4-cp36-abi3-macosx_10_10_universal2.whl", hash = "sha256:549153378611c0cca1042f20fd9c5030d37a72f634c9326e225c9f666d472884"},
{file = "cryptography-37.0.4-cp36-abi3-macosx_10_10_x86_64.whl", hash = "sha256:a958c52505c8adf0d3822703078580d2c0456dd1d27fabfb6f76fe63d2971cd6"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:f721d1885ecae9078c3f6bbe8a88bc0786b6e749bf32ccec1ef2b18929a05046"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:3d41b965b3380f10e4611dbae366f6dc3cefc7c9ac4e8842a806b9672ae9add5"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:80f49023dd13ba35f7c34072fa17f604d2f19bf0989f292cedf7ab5770b87a0b"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f2dcb0b3b63afb6df7fd94ec6fbddac81b5492513f7b0436210d390c14d46ee8"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_24_x86_64.whl", hash = "sha256:b7f8dd0d4c1f21759695c05a5ec8536c12f31611541f8904083f3dc582604280"},
{file = "cryptography-37.0.4-cp36-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:30788e070800fec9bbcf9faa71ea6d8068f5136f60029759fd8c3efec3c9dcb3"},
{file = "cryptography-37.0.4-cp36-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:190f82f3e87033821828f60787cfa42bff98404483577b591429ed99bed39d59"},
{file = "cryptography-37.0.4-cp36-abi3-win32.whl", hash = "sha256:b62439d7cd1222f3da897e9a9fe53bbf5c104fff4d60893ad1355d4c14a24157"},
{file = "cryptography-37.0.4-cp36-abi3-win_amd64.whl", hash = "sha256:f7a6de3e98771e183645181b3627e2563dcde3ce94a9e42a3f427d2255190327"},
{file = "cryptography-37.0.4-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6bc95ed67b6741b2607298f9ea4932ff157e570ef456ef7ff0ef4884a134cc4b"},
{file = "cryptography-37.0.4-pp37-pypy37_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:f8c0a6e9e1dd3eb0414ba320f85da6b0dcbd543126e30fcc546e7372a7fbf3b9"},
{file = "cryptography-37.0.4-pp38-pypy38_pp73-macosx_10_10_x86_64.whl", hash = "sha256:e007f052ed10cc316df59bc90fbb7ff7950d7e2919c9757fd42a2b8ecf8a5f67"},
{file = "cryptography-37.0.4-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7bc997818309f56c0038a33b8da5c0bfbb3f1f067f315f9abd6fc07ad359398d"},
{file = "cryptography-37.0.4-pp38-pypy38_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:d204833f3c8a33bbe11eda63a54b1aad7aa7456ed769a982f21ec599ba5fa282"},
{file = "cryptography-37.0.4-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:75976c217f10d48a8b5a8de3d70c454c249e4b91851f6838a4e48b8f41eb71aa"},
{file = "cryptography-37.0.4-pp39-pypy39_pp73-macosx_10_10_x86_64.whl", hash = "sha256:7099a8d55cd49b737ffc99c17de504f2257e3787e02abe6d1a6d136574873441"},
{file = "cryptography-37.0.4-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2be53f9f5505673eeda5f2736bea736c40f051a739bfae2f92d18aed1eb54596"},
{file = "cryptography-37.0.4-pp39-pypy39_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:91ce48d35f4e3d3f1d83e29ef4a9267246e6a3be51864a5b7d2247d5086fa99a"},
{file = "cryptography-37.0.4-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:4c590ec31550a724ef893c50f9a97a0c14e9c851c85621c5650d699a7b88f7ab"},
{file = "cryptography-37.0.4.tar.gz", hash = "sha256:63f9c17c0e2474ccbebc9302ce2f07b55b3b3fcb211ded18a42d5764f5c10a82"},
]
cycler = [
{file = "cycler-0.11.0-py3-none-any.whl", hash = "sha256:3a27e95f763a428a739d2add979fa7494c912a32c17c4c38c4d5f082cad165a3"},
{file = "cycler-0.11.0.tar.gz", hash = "sha256:9c87405839a19696e837b3b818fed3f5f69f16f1eec1a1ad77e043dcea9c772f"},
]
cython = [
{file = "Cython-0.29.32-cp27-cp27m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:39afb4679b8c6bf7ccb15b24025568f4f9b4d7f9bf3cbd981021f542acecd75b"},
{file = "Cython-0.29.32-cp27-cp27m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:dbee03b8d42dca924e6aa057b836a064c769ddfd2a4c2919e65da2c8a362d528"},
{file = "Cython-0.29.32-cp27-cp27mu-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:5ba622326f2862f9c1f99ca8d47ade49871241920a352c917e16861e25b0e5c3"},
{file = "Cython-0.29.32-cp27-cp27mu-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:e6ffa08aa1c111a1ebcbd1cf4afaaec120bc0bbdec3f2545f8bb7d3e8e77a1cd"},
{file = "Cython-0.29.32-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:97335b2cd4acebf30d14e2855d882de83ad838491a09be2011745579ac975833"},
{file = "Cython-0.29.32-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:06be83490c906b6429b4389e13487a26254ccaad2eef6f3d4ee21d8d3a4aaa2b"},
{file = "Cython-0.29.32-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:eefd2b9a5f38ded8d859fe96cc28d7d06e098dc3f677e7adbafda4dcdd4a461c"},
{file = "Cython-0.29.32-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:5514f3b4122cb22317122a48e175a7194e18e1803ca555c4c959d7dfe68eaf98"},
{file = "Cython-0.29.32-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:656dc5ff1d269de4d11ee8542f2ffd15ab466c447c1f10e5b8aba6f561967276"},
{file = "Cython-0.29.32-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:cdf10af3e2e3279dc09fdc5f95deaa624850a53913f30350ceee824dc14fc1a6"},
{file = "Cython-0.29.32-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:3875c2b2ea752816a4d7ae59d45bb546e7c4c79093c83e3ba7f4d9051dd02928"},
{file = "Cython-0.29.32-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:79e3bab19cf1b021b613567c22eb18b76c0c547b9bc3903881a07bfd9e7e64cf"},
{file = "Cython-0.29.32-cp35-cp35m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b0595aee62809ba353cebc5c7978e0e443760c3e882e2c7672c73ffe46383673"},
{file = "Cython-0.29.32-cp35-cp35m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:0ea8267fc373a2c5064ad77d8ff7bf0ea8b88f7407098ff51829381f8ec1d5d9"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:c8e8025f496b5acb6ba95da2fb3e9dacffc97d9a92711aacfdd42f9c5927e094"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:afbce249133a830f121b917f8c9404a44f2950e0e4f5d1e68f043da4c2e9f457"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:513e9707407608ac0d306c8b09d55a28be23ea4152cbd356ceaec0f32ef08d65"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e83228e0994497900af954adcac27f64c9a57cd70a9ec768ab0cb2c01fd15cf1"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:ea1dcc07bfb37367b639415333cfbfe4a93c3be340edf1db10964bc27d42ed64"},
{file = "Cython-0.29.32-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:8669cadeb26d9a58a5e6b8ce34d2c8986cc3b5c0bfa77eda6ceb471596cb2ec3"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:ed087eeb88a8cf96c60fb76c5c3b5fb87188adee5e179f89ec9ad9a43c0c54b3"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:3f85eb2343d20d91a4ea9cf14e5748092b376a64b7e07fc224e85b2753e9070b"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:63b79d9e1f7c4d1f498ab1322156a0d7dc1b6004bf981a8abda3f66800e140cd"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e1958e0227a4a6a2c06fd6e35b7469de50adf174102454db397cec6e1403cce3"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:856d2fec682b3f31583719cb6925c6cdbb9aa30f03122bcc45c65c8b6f515754"},
{file = "Cython-0.29.32-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:479690d2892ca56d34812fe6ab8f58e4b2e0129140f3d94518f15993c40553da"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:67fdd2f652f8d4840042e2d2d91e15636ba2bcdcd92e7e5ffbc68e6ef633a754"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:4a4b03ab483271f69221c3210f7cde0dcc456749ecf8243b95bc7a701e5677e0"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:40eff7aa26e91cf108fd740ffd4daf49f39b2fdffadabc7292b4b7dc5df879f0"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:0bbc27abdf6aebfa1bce34cd92bd403070356f28b0ecb3198ff8a182791d58b9"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:cddc47ec746a08603037731f5d10aebf770ced08666100bd2cdcaf06a85d4d1b"},
{file = "Cython-0.29.32-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:eca3065a1279456e81c615211d025ea11bfe4e19f0c5650b859868ca04b3fcbd"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:d968ffc403d92addf20b68924d95428d523436adfd25cf505d427ed7ba3bee8b"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:f3fd44cc362eee8ae569025f070d56208908916794b6ab21e139cea56470a2b3"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:b6da3063c5c476f5311fd76854abae6c315f1513ef7d7904deed2e774623bbb9"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:061e25151c38f2361bc790d3bcf7f9d9828a0b6a4d5afa56fbed3bd33fb2373a"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:f9944013588a3543fca795fffb0a070a31a243aa4f2d212f118aa95e69485831"},
{file = "Cython-0.29.32-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:07d173d3289415bb496e72cb0ddd609961be08fe2968c39094d5712ffb78672b"},
{file = "Cython-0.29.32-py2.py3-none-any.whl", hash = "sha256:eeb475eb6f0ccf6c039035eb4f0f928eb53ead88777e0a760eccb140ad90930b"},
{file = "Cython-0.29.32.tar.gz", hash = "sha256:8733cf4758b79304f2a4e39ebfac5e92341bce47bcceb26c1254398b2f8c1af7"},
]
debugpy = [
{file = "debugpy-1.6.3-cp310-cp310-macosx_10_15_x86_64.whl", hash = "sha256:c4b2bd5c245eeb49824bf7e539f95fb17f9a756186e51c3e513e32999d8846f3"},
{file = "debugpy-1.6.3-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:b8deaeb779699350deeed835322730a3efec170b88927debc9ba07a1a38e2585"},
{file = "debugpy-1.6.3-cp310-cp310-win32.whl", hash = "sha256:fc233a0160f3b117b20216f1169e7211b83235e3cd6749bcdd8dbb72177030c7"},
{file = "debugpy-1.6.3-cp310-cp310-win_amd64.whl", hash = "sha256:dda8652520eae3945833e061cbe2993ad94a0b545aebd62e4e6b80ee616c76b2"},
{file = "debugpy-1.6.3-cp37-cp37m-macosx_10_15_x86_64.whl", hash = "sha256:d5c814596a170a0a58fa6fad74947e30bfd7e192a5d2d7bd6a12156c2899e13a"},
{file = "debugpy-1.6.3-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:c4cd6f37e3c168080d61d698390dfe2cd9e74ebf80b448069822a15dadcda57d"},
{file = "debugpy-1.6.3-cp37-cp37m-win32.whl", hash = "sha256:3c9f985944a30cfc9ae4306ac6a27b9c31dba72ca943214dad4a0ab3840f6161"},
{file = "debugpy-1.6.3-cp37-cp37m-win_amd64.whl", hash = "sha256:5ad571a36cec137ae6ed951d0ff75b5e092e9af6683da084753231150cbc5b25"},
{file = "debugpy-1.6.3-cp38-cp38-macosx_10_15_x86_64.whl", hash = "sha256:adcfea5ea06d55d505375995e150c06445e2b20cd12885bcae566148c076636b"},
{file = "debugpy-1.6.3-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:daadab4403427abd090eccb38d8901afd8b393e01fd243048fab3f1d7132abb4"},
{file = "debugpy-1.6.3-cp38-cp38-win32.whl", hash = "sha256:6efc30325b68e451118b795eff6fe8488253ca3958251d5158106d9c87581bc6"},
{file = "debugpy-1.6.3-cp38-cp38-win_amd64.whl", hash = "sha256:86d784b72c5411c833af1cd45b83d80c252b77c3bfdb43db17c441d772f4c734"},
{file = "debugpy-1.6.3-cp39-cp39-macosx_10_15_x86_64.whl", hash = "sha256:4e255982552b0edfe3a6264438dbd62d404baa6556a81a88f9420d3ed79b06ae"},
{file = "debugpy-1.6.3-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:cca23cb6161ac89698d629d892520327dd1be9321c0960e610bbcb807232b45d"},
{file = "debugpy-1.6.3-cp39-cp39-win32.whl", hash = "sha256:7c302095a81be0d5c19f6529b600bac971440db3e226dce85347cc27e6a61908"},
{file = "debugpy-1.6.3-cp39-cp39-win_amd64.whl", hash = "sha256:34d2cdd3a7c87302ba5322b86e79c32c2115be396f3f09ca13306d8a04fe0f16"},
{file = "debugpy-1.6.3-py2.py3-none-any.whl", hash = "sha256:84c39940a0cac410bf6aa4db00ba174f973eef521fbe9dd058e26bcabad89c4f"},
{file = "debugpy-1.6.3.zip", hash = "sha256:e8922090514a890eec99cfb991bab872dd2e353ebb793164d5f01c362b9a40bf"},
]
decorator = [
{file = "decorator-5.1.1-py3-none-any.whl", hash = "sha256:b8c3f85900b9dc423225913c5aace94729fe1fa9763b38939a95226f02d37186"},
{file = "decorator-5.1.1.tar.gz", hash = "sha256:637996211036b6385ef91435e4fae22989472f9d571faba8927ba8253acbc330"},
]
defusedxml = [
{file = "defusedxml-0.7.1-py2.py3-none-any.whl", hash = "sha256:a352e7e428770286cc899e2542b6cdaedb2b4953ff269a210103ec58f6198a61"},
{file = "defusedxml-0.7.1.tar.gz", hash = "sha256:1bb3032db185915b62d7c6209c5a8792be6a32ab2fedacc84e01b52c51aa3e69"},
]
dill = [
{file = "dill-0.3.5.1-py2.py3-none-any.whl", hash = "sha256:33501d03270bbe410c72639b350e941882a8b0fd55357580fbc873fba0c59302"},
{file = "dill-0.3.5.1.tar.gz", hash = "sha256:d75e41f3eff1eee599d738e76ba8f4ad98ea229db8b085318aa2b3333a208c86"},
]
docutils = [
{file = "docutils-0.17.1-py2.py3-none-any.whl", hash = "sha256:cf316c8370a737a022b72b56874f6602acf974a37a9fba42ec2876387549fc61"},
{file = "docutils-0.17.1.tar.gz", hash = "sha256:686577d2e4c32380bb50cbb22f575ed742d58168cee37e99117a854bcd88f125"},
]
econml = [
{file = "econml-0.12.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:c8c6d2c34f98eb506d4ac48f705f6d959ccdfdeada9a0cc555afcba3233a4d0a"},
{file = "econml-0.12.0-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:c9493076c801110ebd9e15a7d162e768d33445367272a8b4f88c70898efa7a8d"},
{file = "econml-0.12.0-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:5c5c278771bf405ddf0790fc72149af9d05aec095e00c5ad47e08ab9e8da47d9"},
{file = "econml-0.12.0-cp36-cp36m-win32.whl", hash = "sha256:0477610ae12923a98bc23bdff4f9f5be12c692704d5ec9ebea8f5f4c3010b9ff"},
{file = "econml-0.12.0-cp36-cp36m-win_amd64.whl", hash = "sha256:2cea261af4315d3ec4cb6a91ab6b4aa664b3d3c6f8d235c8778351fb259d69c2"},
{file = "econml-0.12.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:3962e0f497c58724269ebf74a295dd8e2f2494e6cff00f3fc582b13b73d9dda8"},
{file = "econml-0.12.0-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:70ce227be0b91fd463d7a48ab4b58aa562793efa779f74254a38a03c42bf6d9e"},
{file = "econml-0.12.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:80f78f25b259991872e66665ca64cf108fc405b6d6cf20ebbff30fae392b3258"},
{file = "econml-0.12.0-cp37-cp37m-win32.whl", hash = "sha256:1e780e2e4dbabcff65c2ed488b8751203a6f7bfe0c9c8fd12a0a930b273dbafe"},
{file = "econml-0.12.0-cp37-cp37m-win_amd64.whl", hash = "sha256:7248c62daa5b8662e1368fe3926a9a7e5ab88c1ee29ae492b7362a9124151071"},
{file = "econml-0.12.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:aacdb2c96ea2377f7c4f14fbf3fb6ded5edfccf17decc706b5147564129938b4"},
{file = "econml-0.12.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:f87a21636872e32e627093711c65d1333be590f0aef0fa2477717442ada57dfb"},
{file = "econml-0.12.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:1763951ca85b3f34072d0f44e31cebb18e91a281fc94212db9b08124f0efe206"},
{file = "econml-0.12.0-cp38-cp38-win32.whl", hash = "sha256:2e4339e0dcbbc280ac95c4ac446026853f77cd22de9a4c7421b933e717ce8626"},
{file = "econml-0.12.0-cp38-cp38-win_amd64.whl", hash = "sha256:473377685874bf9725a4c6fdd571a858d942abd8a2f4664050da426322d322c7"},
{file = "econml-0.12.0.tar.gz", hash = "sha256:4a8196d6c785e9e0e42b2fa35dd058eb37166c75dcdb90ada953feea6fbb854b"},
]
entrypoints = [
{file = "entrypoints-0.4-py3-none-any.whl", hash = "sha256:f174b5ff827504fd3cd97cc3f8649f3693f51538c7e4bdf3ef002c8429d42f9f"},
{file = "entrypoints-0.4.tar.gz", hash = "sha256:b706eddaa9218a19ebcd67b56818f05bb27589b1ca9e8d797b74affad4ccacd4"},
]
executing = [
{file = "executing-0.10.0-py2.py3-none-any.whl", hash = "sha256:9c745f80cda11eb22b62cbecf21156491a794eb56ab06f9d286a44e62822b24e"},
{file = "executing-0.10.0.tar.gz", hash = "sha256:d1cd87c2e371e9966261410c5b3769d6df2f9e4a79a83eebd2662dd3388f9833"},
]
fastjsonschema = [
{file = "fastjsonschema-2.16.1-py3-none-any.whl", hash = "sha256:2f7158c4de792555753d6c2277d6a2af2d406dfd97aeca21d17173561ede4fe6"},
{file = "fastjsonschema-2.16.1.tar.gz", hash = "sha256:d6fa3ffbe719768d70e298b9fb847484e2bdfdb7241ed052b8d57a9294a8c334"},
]
flake8 = [
{file = "flake8-4.0.1-py2.py3-none-any.whl", hash = "sha256:479b1304f72536a55948cb40a32dce8bb0ffe3501e26eaf292c7e60eb5e0428d"},
{file = "flake8-4.0.1.tar.gz", hash = "sha256:806e034dda44114815e23c16ef92f95c91e4c71100ff52813adf7132a6ad870d"},
]
flaky = [
{file = "flaky-3.7.0-py2.py3-none-any.whl", hash = "sha256:d6eda73cab5ae7364504b7c44670f70abed9e75f77dd116352f662817592ec9c"},
{file = "flaky-3.7.0.tar.gz", hash = "sha256:3ad100780721a1911f57a165809b7ea265a7863305acb66708220820caf8aa0d"},
]
flatbuffers = [
{file = "flatbuffers-1.12-py2.py3-none-any.whl", hash = "sha256:9e9ef47fa92625c4721036e7c4124182668dc6021d9e7c73704edd395648deb9"},
{file = "flatbuffers-1.12.tar.gz", hash = "sha256:63bb9a722d5e373701913e226135b28a6f6ac200d5cc7b4d919fa38d73b44610"},
]
fonttools = [
{file = "fonttools-4.36.0-py3-none-any.whl", hash = "sha256:cb91ef8d5a435d90aeb3ab814b2548c6b515df5bc13b4c5adaa23778f2f79823"},
{file = "fonttools-4.36.0.zip", hash = "sha256:e637d2fe06bddabbfc488e02ef32d04d561e3c71e9ba11abc7782ea753ceb218"},
]
future = [
{file = "future-0.18.2.tar.gz", hash = "sha256:b1bead90b70cf6ec3f0710ae53a525360fa360d306a86583adc6bf83a4db537d"},
]
gast = [
{file = "gast-0.4.0-py3-none-any.whl", hash = "sha256:b7adcdd5adbebf1adf17378da5ba3f543684dbec47b1cda1f3997e573cd542c4"},
{file = "gast-0.4.0.tar.gz", hash = "sha256:40feb7b8b8434785585ab224d1568b857edb18297e5a3047f1ba012bc83b42c1"},
]
google-auth = [
{file = "google-auth-2.10.0.tar.gz", hash = "sha256:7904dbd44b745c7323fef29565adee2fe7ff48473e2d94443aced40b0404a395"},
{file = "google_auth-2.10.0-py2.py3-none-any.whl", hash = "sha256:1deba4a54f95ef67b4139eaf5c20eaa7047215eec9f6a2344599b8596db8863b"},
]
google-auth-oauthlib = [
{file = "google-auth-oauthlib-0.4.6.tar.gz", hash = "sha256:a90a072f6993f2c327067bf65270046384cda5a8ecb20b94ea9a687f1f233a7a"},
{file = "google_auth_oauthlib-0.4.6-py2.py3-none-any.whl", hash = "sha256:3f2a6e802eebbb6fb736a370fbf3b055edcb6b52878bf2f26330b5e041316c73"},
]
google-pasta = [
{file = "google-pasta-0.2.0.tar.gz", hash = "sha256:c9f2c8dfc8f96d0d5808299920721be30c9eec37f2389f28904f454565c8a16e"},
{file = "google_pasta-0.2.0-py2-none-any.whl", hash = "sha256:4612951da876b1a10fe3960d7226f0c7682cf901e16ac06e473b267a5afa8954"},
{file = "google_pasta-0.2.0-py3-none-any.whl", hash = "sha256:b32482794a366b5366a32c92a9a9201b107821889935a02b3e51f6b432ea84ed"},
]
graphviz = [
{file = "graphviz-0.20.1-py3-none-any.whl", hash = "sha256:587c58a223b51611c0cf461132da386edd896a029524ca61a1462b880bf97977"},
{file = "graphviz-0.20.1.zip", hash = "sha256:8c58f14adaa3b947daf26c19bc1e98c4e0702cdc31cf99153e6f06904d492bf8"},
]
grpcio = [
{file = "grpcio-1.48.0-cp310-cp310-linux_armv7l.whl", hash = "sha256:4a049a032144641ed5d073535c0dc69eb6029187cc729a66946c86dcc8eec3a1"},
{file = "grpcio-1.48.0-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:f8bc76f5cd95f5476e5285fe5d3704a9332586a569fbbccef551b0b6f7a270f9"},
{file = "grpcio-1.48.0-cp310-cp310-manylinux_2_17_aarch64.whl", hash = "sha256:448d397fe88e9fef8170f019b86abdc4d554ae311aaf4dbff1532fde227d3308"},
{file = "grpcio-1.48.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8f9b6b6f7c83869d2316c5d13f953381881a16741275a34ec5ed5762f11b206e"},
{file = "grpcio-1.48.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5bd8541c4b6b43c9024496d30b4a12346325d3a17a1f3c80ad8924caed1e35c3"},
{file = "grpcio-1.48.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:877d33aeba05ae0b9e81761a694914ed33613f655c35f6bbcf4ebbcb984e0167"},
{file = "grpcio-1.48.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:cd01a8201fd8ab2ce496f7e65975da1f1e629eac8eea84ead0fd77e32e4350cd"},
{file = "grpcio-1.48.0-cp310-cp310-win32.whl", hash = "sha256:0388da923dff58ba7f711233e41c2b749b5817b8e0f137a107672d9c15a1009c"},
{file = "grpcio-1.48.0-cp310-cp310-win_amd64.whl", hash = "sha256:8dcffdb8921fd88857ae350fd579277a5f9315351e89ed9094ef28927a46d40d"},
{file = "grpcio-1.48.0-cp36-cp36m-linux_armv7l.whl", hash = "sha256:2138c50331232f56178c2b36dcfa6ad67aad705fe410955f3b2a53d722191b89"},
{file = "grpcio-1.48.0-cp36-cp36m-macosx_10_10_x86_64.whl", hash = "sha256:af2d80f142da2a6af45204a5ca2374e2747af07a99de54a1164111e169a761ff"},
{file = "grpcio-1.48.0-cp36-cp36m-manylinux_2_17_aarch64.whl", hash = "sha256:59284bd4cdf47c147c26d91aca693765318d524328f6ece2a1a0b85a12a362af"},
{file = "grpcio-1.48.0-cp36-cp36m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cc3ebfe356c0c6750379cd194bf2b7e5d1d2f29db1832358f05a73e9290db98c"},
{file = "grpcio-1.48.0-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dc2619a31339e1c53731f54761f1a2cb865d3421f690e00ef3e92f90d2a0c5ae"},
{file = "grpcio-1.48.0-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:7df637405de328a54c1c8c08a3206f974c7a577730f90644af4c3400b7bfde2d"},
{file = "grpcio-1.48.0-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:9e73b95969a579798bfbeb85d376695cce5172357fb52e450467ceb8e7365152"},
{file = "grpcio-1.48.0-cp36-cp36m-win32.whl", hash = "sha256:059e9d58b5aba7fb9eabe3a4d2ac49e1dcbc2b54b0f166f6475e40b7f4435343"},
{file = "grpcio-1.48.0-cp36-cp36m-win_amd64.whl", hash = "sha256:7cebcf645170f0c82ef71769544f9ac4515993a4d367f5900aba2eb4ecd2a32f"},
{file = "grpcio-1.48.0-cp37-cp37m-linux_armv7l.whl", hash = "sha256:8af3a8845df35b838104d6fb1ae7f4969d248cf037fa2794916d31e917346f72"},
{file = "grpcio-1.48.0-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:a1ef40975ec9ced6c17ce7fbec9825823da782fa606f0b92392646ff3886f198"},
{file = "grpcio-1.48.0-cp37-cp37m-manylinux_2_17_aarch64.whl", hash = "sha256:7cccbf6db31f2a78e1909047ff69620f94a4e6e53251858e9502fbbff5714b48"},
{file = "grpcio-1.48.0-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1f3f142579f58def64c0850f0bb0eb1b425ae885f5669dda5b73ade64ad2b753"},
{file = "grpcio-1.48.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:656c6f6f7b815bca3054780b8cdfa1e4e37cd36c887a48558d00c2cf85f31697"},
{file = "grpcio-1.48.0-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:cba4538e8a2ef123ea570e7b1d62162e158963c2471e35d79eb9690c971a10c0"},
{file = "grpcio-1.48.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:9daa67820fafceec6194ed1686c1783816e62d6756ff301ba93e682948836846"},
{file = "grpcio-1.48.0-cp37-cp37m-win32.whl", hash = "sha256:7ec264a7fb413e0c804a7a48a6f7d7212742955a60724c44d793da35a8f30873"},
{file = "grpcio-1.48.0-cp37-cp37m-win_amd64.whl", hash = "sha256:a2b1b33b92359388b8164807313dcbb3317101b038a5d54342982560329d958f"},
{file = "grpcio-1.48.0-cp38-cp38-linux_armv7l.whl", hash = "sha256:7b820696a5ce7b98f459f234698cb323f89b355373789188efa126d7f47a2a92"},
{file = "grpcio-1.48.0-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:e4dfae66ebc165c46c5b7048eb554472ee72fbaab2c2c2da7f9b1621c81e077c"},
{file = "grpcio-1.48.0-cp38-cp38-manylinux_2_17_aarch64.whl", hash = "sha256:f7115038edce33b494e0138b0bd31a2eb6595d45e2eed23be46bc32886feb741"},
{file = "grpcio-1.48.0-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b4e996282238943ca114628255be61980e38b25f73a08ae2ffd02b63eaf70d3a"},
{file = "grpcio-1.48.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:13dad31f5155fa555d393511cc8108c41b1b5b54dc4c24c27d4694ddd7a78fad"},
{file = "grpcio-1.48.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:c84b9d90b2641963de98b35bb7a2a51f78119fe5bd00ef27246ba9f4f0835e36"},
{file = "grpcio-1.48.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:41b65166779d7dafac4c98380ac19f690f1c5fe18083a71d370df87b24dd30ff"},
{file = "grpcio-1.48.0-cp38-cp38-win32.whl", hash = "sha256:b890e5f5fbc21cb994894f73ecb2faaa66697d8debcb228a5adb0622b9bec3b2"},
{file = "grpcio-1.48.0-cp38-cp38-win_amd64.whl", hash = "sha256:5fe3af539d2f50891ed93aed3064ffbcc38bf848aa3f7ed1fbedcce139c57302"},
{file = "grpcio-1.48.0-cp39-cp39-linux_armv7l.whl", hash = "sha256:a4ed57f4e3d91259551e6765782b22d9e8b8178fec43ebf8e1b2c392c4ced37b"},
{file = "grpcio-1.48.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:60843d8184e171886dd7a93d6672e2ef0b08dfd4f88da7421c10b46b6e031ac4"},
{file = "grpcio-1.48.0-cp39-cp39-manylinux_2_17_aarch64.whl", hash = "sha256:0ecba22f25ccde2442be7e7dd7fa746905d628f03312b4a0c9961f0d99771f53"},
{file = "grpcio-1.48.0-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:34f5917f0c49a04633dc12d483c8aee6f6d9f69133b700214d3703f72a72f501"},
{file = "grpcio-1.48.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f4c4ad8ad7e2cf3a272cbc96734d56635e6543939022f17e0c4487f7d2a45bf9"},
{file = "grpcio-1.48.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:111fb2f5f4a069f331ae23106145fd16dd4e1112ca223858a922068614dac6d2"},
{file = "grpcio-1.48.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:beb0573daa49889efcfea0a6e995b4f39d481aa1b94e1257617406ef417b56a6"},
{file = "grpcio-1.48.0-cp39-cp39-win32.whl", hash = "sha256:ce70254a082cb767217b2fdee374cc79199d338d46140753438cd6d67c609b2f"},
{file = "grpcio-1.48.0-cp39-cp39-win_amd64.whl", hash = "sha256:ae3fd135666448058fe277d93c10e0f18345fbcbb015c4642de2fa3db6f0c205"},
{file = "grpcio-1.48.0.tar.gz", hash = "sha256:eaf4bb73819863440727195411ab3b5c304f6663625e66f348e91ebe0a039306"},
]
h5py = [
{file = "h5py-3.7.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d77af42cb751ad6cc44f11bae73075a07429a5cf2094dfde2b1e716e059b3911"},
{file = "h5py-3.7.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:63beb8b7b47d0896c50de6efb9a1eaa81dbe211f3767e7dd7db159cea51ba37a"},
{file = "h5py-3.7.0-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:04e2e1e2fc51b8873e972a08d2f89625ef999b1f2d276199011af57bb9fc7851"},
{file = "h5py-3.7.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f73307c876af49aa869ec5df1818e9bb0bdcfcf8a5ba773cc45a4fba5a286a5c"},
{file = "h5py-3.7.0-cp310-cp310-win_amd64.whl", hash = "sha256:f514b24cacdd983e61f8d371edac8c1b780c279d0acb8485639e97339c866073"},
{file = "h5py-3.7.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:43fed4d13743cf02798a9a03a360a88e589d81285e72b83f47d37bb64ed44881"},
{file = "h5py-3.7.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:c038399ce09a58ff8d89ec3e62f00aa7cb82d14f34e24735b920e2a811a3a426"},
{file = "h5py-3.7.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:03d64fb86bb86b978928bad923b64419a23e836499ec6363e305ad28afd9d287"},
{file = "h5py-3.7.0-cp37-cp37m-win_amd64.whl", hash = "sha256:e5b7820b75f9519499d76cc708e27242ccfdd9dfb511d6deb98701961d0445aa"},
{file = "h5py-3.7.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:a9351d729ea754db36d175098361b920573fdad334125f86ac1dd3a083355e20"},
{file = "h5py-3.7.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:6776d896fb90c5938de8acb925e057e2f9f28755f67ec3edcbc8344832616c38"},
{file = "h5py-3.7.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:0a047fddbe6951bce40e9cde63373c838a978c5e05a011a682db9ba6334b8e85"},
{file = "h5py-3.7.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0798a9c0ff45f17d0192e4d7114d734cac9f8b2b2c76dd1d923c4d0923f27bb6"},
{file = "h5py-3.7.0-cp38-cp38-win_amd64.whl", hash = "sha256:0d8de8cb619fc597da7cf8cdcbf3b7ff8c5f6db836568afc7dc16d21f59b2b49"},
{file = "h5py-3.7.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f084bbe816907dfe59006756f8f2d16d352faff2d107f4ffeb1d8de126fc5dc7"},
{file = "h5py-3.7.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:1fcb11a2dc8eb7ddcae08afd8fae02ba10467753a857fa07a404d700a93f3d53"},
{file = "h5py-3.7.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:ed43e2cc4f511756fd664fb45d6b66c3cbed4e3bd0f70e29c37809b2ae013c44"},
{file = "h5py-3.7.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9e7535df5ee3dc3e5d1f408fdfc0b33b46bc9b34db82743c82cd674d8239b9ad"},
{file = "h5py-3.7.0-cp39-cp39-win_amd64.whl", hash = "sha256:9e2ad2aa000f5b1e73b5dfe22f358ca46bf1a2b6ca394d9659874d7fc251731a"},
{file = "h5py-3.7.0.tar.gz", hash = "sha256:3fcf37884383c5da64846ab510190720027dca0768def34dd8dcb659dbe5cbf3"},
]
idna = [
{file = "idna-3.3-py3-none-any.whl", hash = "sha256:84d9dd047ffa80596e0f246e2eab0b391788b0503584e8945f2368256d2735ff"},
{file = "idna-3.3.tar.gz", hash = "sha256:9d643ff0a55b762d5cdb124b8eaa99c66322e2157b69160bc32796e824360e6d"},
]
imagesize = [
{file = "imagesize-1.4.1-py2.py3-none-any.whl", hash = "sha256:0d8d18d08f840c19d0ee7ca1fd82490fdc3729b7ac93f49870406ddde8ef8d8b"},
{file = "imagesize-1.4.1.tar.gz", hash = "sha256:69150444affb9cb0d5cc5a92b3676f0b2fb7cd9ae39e947a5e11a36b4497cd4a"},
]
importlib-metadata = [
{file = "importlib_metadata-4.12.0-py3-none-any.whl", hash = "sha256:7401a975809ea1fdc658c3aa4f78cc2195a0e019c5cbc4c06122884e9ae80c23"},
{file = "importlib_metadata-4.12.0.tar.gz", hash = "sha256:637245b8bab2b6502fcbc752cc4b7a6f6243bb02b31c5c26156ad103d3d45670"},
]
importlib-resources = [
{file = "importlib_resources-5.9.0-py3-none-any.whl", hash = "sha256:f78a8df21a79bcc30cfd400bdc38f314333de7c0fb619763f6b9dabab8268bb7"},
{file = "importlib_resources-5.9.0.tar.gz", hash = "sha256:5481e97fb45af8dcf2f798952625591c58fe599d0735d86b10f54de086a61681"},
]
iniconfig = [
{file = "iniconfig-1.1.1-py2.py3-none-any.whl", hash = "sha256:011e24c64b7f47f6ebd835bb12a743f2fbe9a26d4cecaa7f53bc4f35ee9da8b3"},
{file = "iniconfig-1.1.1.tar.gz", hash = "sha256:bc3af051d7d14b2ee5ef9969666def0cd1a000e121eaea580d4a313df4b37f32"},
]
ipykernel = [
{file = "ipykernel-6.15.1-py3-none-any.whl", hash = "sha256:d8969c5b23b0e453a23166da5a669c954db399789293fcb03fec5cb25367e43c"},
{file = "ipykernel-6.15.1.tar.gz", hash = "sha256:37acc3254caa8a0dafcddddc8dc863a60ad1b46487b68aee361d9a15bda98112"},
]
ipython = [
{file = "ipython-8.4.0-py3-none-any.whl", hash = "sha256:7ca74052a38fa25fe9bedf52da0be7d3fdd2fb027c3b778ea78dfe8c212937d1"},
{file = "ipython-8.4.0.tar.gz", hash = "sha256:f2db3a10254241d9b447232cec8b424847f338d9d36f9a577a6192c332a46abd"},
]
ipython-genutils = [
{file = "ipython_genutils-0.2.0-py2.py3-none-any.whl", hash = "sha256:72dd37233799e619666c9f639a9da83c34013a73e8bbc79a7a6348d93c61fab8"},
{file = "ipython_genutils-0.2.0.tar.gz", hash = "sha256:eb2e116e75ecef9d4d228fdc66af54269afa26ab4463042e33785b887c628ba8"},
]
ipywidgets = [
{file = "ipywidgets-8.0.1-py3-none-any.whl", hash = "sha256:fc0744df3a964ecfd68a6d2debe547fe89db252b8d7bb3db5740aba72edb0e6c"},
{file = "ipywidgets-8.0.1.tar.gz", hash = "sha256:1a296094203309e834f2781a275214d255ac5d266bbfa602f9f6915e1806614c"},
]
isort = [
{file = "isort-5.10.1-py3-none-any.whl", hash = "sha256:6f62d78e2f89b4500b080fe3a81690850cd254227f27f75c3a0c491a1f351ba7"},
{file = "isort-5.10.1.tar.gz", hash = "sha256:e8443a5e7a020e9d7f97f1d7d9cd17c88bcb3bc7e218bf9cf5095fe550be2951"},
]
jedi = [
{file = "jedi-0.18.1-py2.py3-none-any.whl", hash = "sha256:637c9635fcf47945ceb91cd7f320234a7be540ded6f3e99a50cb6febdfd1ba8d"},
{file = "jedi-0.18.1.tar.gz", hash = "sha256:74137626a64a99c8eb6ae5832d99b3bdd7d29a3850fe2aa80a4126b2a7d949ab"},
]
jeepney = [
{file = "jeepney-0.8.0-py3-none-any.whl", hash = "sha256:c0a454ad016ca575060802ee4d590dd912e35c122fa04e70306de3d076cce755"},
{file = "jeepney-0.8.0.tar.gz", hash = "sha256:5efe48d255973902f6badc3ce55e2aa6c5c3b3bc642059ef3a91247bcfcc5806"},
]
jinja2 = [
{file = "Jinja2-3.1.2-py3-none-any.whl", hash = "sha256:6088930bfe239f0e6710546ab9c19c9ef35e29792895fed6e6e31a023a182a61"},
{file = "Jinja2-3.1.2.tar.gz", hash = "sha256:31351a702a408a9e7595a8fc6150fc3f43bb6bf7e319770cbc0db9df9437e852"},
]
joblib = [
{file = "joblib-1.1.0-py2.py3-none-any.whl", hash = "sha256:f21f109b3c7ff9d95f8387f752d0d9c34a02aa2f7060c2135f465da0e5160ff6"},
{file = "joblib-1.1.0.tar.gz", hash = "sha256:4158fcecd13733f8be669be0683b96ebdbbd38d23559f54dca7205aea1bf1e35"},
]
jsonschema = [
{file = "jsonschema-4.13.0-py3-none-any.whl", hash = "sha256:870a61bb45050b81103faf6a4be00a0a906e06636ffcf0b84f5a2e51faf901ff"},
{file = "jsonschema-4.13.0.tar.gz", hash = "sha256:3776512df4f53f74e6e28fe35717b5b223c1756875486984a31bc9165e7fc920"},
]
jupyter = [
{file = "jupyter-1.0.0-py2.py3-none-any.whl", hash = "sha256:5b290f93b98ffbc21c0c7e749f054b3267782166d72fa5e3ed1ed4eaf34a2b78"},
{file = "jupyter-1.0.0.tar.gz", hash = "sha256:d9dc4b3318f310e34c82951ea5d6683f67bed7def4b259fafbfe4f1beb1d8e5f"},
{file = "jupyter-1.0.0.zip", hash = "sha256:3e1f86076bbb7c8c207829390305a2b1fe836d471ed54be66a3b8c41e7f46cc7"},
]
jupyter-client = [
{file = "jupyter_client-7.3.4-py3-none-any.whl", hash = "sha256:17d74b0d0a7b24f1c8c527b24fcf4607c56bee542ffe8e3418e50b21e514b621"},
{file = "jupyter_client-7.3.4.tar.gz", hash = "sha256:aa9a6c32054b290374f95f73bb0cae91455c58dfb84f65c8591912b8f65e6d56"},
]
jupyter-console = [
{file = "jupyter_console-6.4.4-py3-none-any.whl", hash = "sha256:756df7f4f60c986e7bc0172e4493d3830a7e6e75c08750bbe59c0a5403ad6dee"},
{file = "jupyter_console-6.4.4.tar.gz", hash = "sha256:172f5335e31d600df61613a97b7f0352f2c8250bbd1092ef2d658f77249f89fb"},
]
jupyter-core = [
{file = "jupyter_core-4.11.1-py3-none-any.whl", hash = "sha256:715e22bb6cc7db3718fddfac1f69f1c7e899ca00e42bdfd4bf3705452b9fd84a"},
{file = "jupyter_core-4.11.1.tar.gz", hash = "sha256:2e5f244d44894c4154d06aeae3419dd7f1b0ef4494dc5584929b398c61cfd314"},
]
jupyterlab-pygments = [
{file = "jupyterlab_pygments-0.2.2-py2.py3-none-any.whl", hash = "sha256:2405800db07c9f770863bcf8049a529c3dd4d3e28536638bd7c1c01d2748309f"},
{file = "jupyterlab_pygments-0.2.2.tar.gz", hash = "sha256:7405d7fde60819d905a9fa8ce89e4cd830e318cdad22a0030f7a901da705585d"},
]
jupyterlab-widgets = [
{file = "jupyterlab_widgets-3.0.2-py3-none-any.whl", hash = "sha256:98303a281f4004670cdcea2ef4aecba19c580adc297664c593f967025625c8c5"},
{file = "jupyterlab_widgets-3.0.2.tar.gz", hash = "sha256:47ab54cd165aa0cb3bcef1232d77471580cd2c36bbe2153fc5ba31e26ad87320"},
]
keras = [
{file = "keras-2.9.0-py2.py3-none-any.whl", hash = "sha256:55911256f89cfc9343c9fbe4b61ec45a2d33d89729cbe1ab9dcacf8b07b8b6ab"},
]
keras-preprocessing = [
{file = "Keras_Preprocessing-1.1.2-py2.py3-none-any.whl", hash = "sha256:7b82029b130ff61cc99b55f3bd27427df4838576838c5b2f65940e4fcec99a7b"},
{file = "Keras_Preprocessing-1.1.2.tar.gz", hash = "sha256:add82567c50c8bc648c14195bf544a5ce7c1f76761536956c3d2978970179ef3"},
]
keyring = [
{file = "keyring-23.8.2-py3-none-any.whl", hash = "sha256:10d2a8639663fe2090705a00b8c47c687cacdf97598ea9c11456679fa974473a"},
{file = "keyring-23.8.2.tar.gz", hash = "sha256:0d9973f8891850f1ade5f26aafd06bb16865fbbae3fc56b0defb6a14a2624003"},
]
kiwisolver = [
{file = "kiwisolver-1.4.4-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:2f5e60fabb7343a836360c4f0919b8cd0d6dbf08ad2ca6b9cf90bf0c76a3c4f6"},
{file = "kiwisolver-1.4.4-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:10ee06759482c78bdb864f4109886dff7b8a56529bc1609d4f1112b93fe6423c"},
{file = "kiwisolver-1.4.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c79ebe8f3676a4c6630fd3f777f3cfecf9289666c84e775a67d1d358578dc2e3"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:abbe9fa13da955feb8202e215c4018f4bb57469b1b78c7a4c5c7b93001699938"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:7577c1987baa3adc4b3c62c33bd1118c3ef5c8ddef36f0f2c950ae0b199e100d"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f8ad8285b01b0d4695102546b342b493b3ccc6781fc28c8c6a1bb63e95d22f09"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:8ed58b8acf29798b036d347791141767ccf65eee7f26bde03a71c944449e53de"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a68b62a02953b9841730db7797422f983935aeefceb1679f0fc85cbfbd311c32"},
{file = "kiwisolver-1.4.4-cp310-cp310-win32.whl", hash = "sha256:e92a513161077b53447160b9bd8f522edfbed4bd9759e4c18ab05d7ef7e49408"},
{file = "kiwisolver-1.4.4-cp310-cp310-win_amd64.whl", hash = "sha256:3fe20f63c9ecee44560d0e7f116b3a747a5d7203376abeea292ab3152334d004"},
{file = "kiwisolver-1.4.4-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:62ac9cc684da4cf1778d07a89bf5f81b35834cb96ca523d3a7fb32509380cbf6"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:41dae968a94b1ef1897cb322b39360a0812661dba7c682aa45098eb8e193dbdf"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:02f79693ec433cb4b5f51694e8477ae83b3205768a6fb48ffba60549080e295b"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d0611a0a2a518464c05ddd5a3a1a0e856ccc10e67079bb17f265ad19ab3c7597"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:db5283d90da4174865d520e7366801a93777201e91e79bacbac6e6927cbceede"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:1041feb4cda8708ce73bb4dcb9ce1ccf49d553bf87c3954bdfa46f0c3f77252c"},
{file = "kiwisolver-1.4.4-cp37-cp37m-win32.whl", hash = "sha256:a553dadda40fef6bfa1456dc4be49b113aa92c2a9a9e8711e955618cd69622e3"},
{file = "kiwisolver-1.4.4-cp37-cp37m-win_amd64.whl", hash = "sha256:03baab2d6b4a54ddbb43bba1a3a2d1627e82d205c5cf8f4c924dc49284b87166"},
{file = "kiwisolver-1.4.4-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:841293b17ad704d70c578f1f0013c890e219952169ce8a24ebc063eecf775454"},
{file = "kiwisolver-1.4.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:f4f270de01dd3e129a72efad823da90cc4d6aafb64c410c9033aba70db9f1ff0"},
{file = "kiwisolver-1.4.4-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:f9f39e2f049db33a908319cf46624a569b36983c7c78318e9726a4cb8923b26c"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c97528e64cb9ebeff9701e7938653a9951922f2a38bd847787d4a8e498cc83ae"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1d1573129aa0fd901076e2bfb4275a35f5b7aa60fbfb984499d661ec950320b0"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ad881edc7ccb9d65b0224f4e4d05a1e85cf62d73aab798943df6d48ab0cd79a1"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b428ef021242344340460fa4c9185d0b1f66fbdbfecc6c63eff4b7c29fad429d"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:2e407cb4bd5a13984a6c2c0fe1845e4e41e96f183e5e5cd4d77a857d9693494c"},
{file = "kiwisolver-1.4.4-cp38-cp38-win32.whl", hash = "sha256:75facbe9606748f43428fc91a43edb46c7ff68889b91fa31f53b58894503a191"},
{file = "kiwisolver-1.4.4-cp38-cp38-win_amd64.whl", hash = "sha256:5bce61af018b0cb2055e0e72e7d65290d822d3feee430b7b8203d8a855e78766"},
{file = "kiwisolver-1.4.4-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:8c808594c88a025d4e322d5bb549282c93c8e1ba71b790f539567932722d7bd8"},
{file = "kiwisolver-1.4.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f0a71d85ecdd570ded8ac3d1c0f480842f49a40beb423bb8014539a9f32a5897"},
{file = "kiwisolver-1.4.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:b533558eae785e33e8c148a8d9921692a9fe5aa516efbdff8606e7d87b9d5824"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:efda5fc8cc1c61e4f639b8067d118e742b812c930f708e6667a5ce0d13499e29"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:7c43e1e1206cd421cd92e6b3280d4385d41d7166b3ed577ac20444b6995a445f"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bc8d3bd6c72b2dd9decf16ce70e20abcb3274ba01b4e1c96031e0c4067d1e7cd"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4ea39b0ccc4f5d803e3337dd46bcce60b702be4d86fd0b3d7531ef10fd99a1ac"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:968f44fdbf6dd757d12920d63b566eeb4d5b395fd2d00d29d7ef00a00582aac9"},
{file = "kiwisolver-1.4.4-cp39-cp39-win32.whl", hash = "sha256:da7e547706e69e45d95e116e6939488d62174e033b763ab1496b4c29b76fabea"},
{file = "kiwisolver-1.4.4-cp39-cp39-win_amd64.whl", hash = "sha256:ba59c92039ec0a66103b1d5fe588fa546373587a7d68f5c96f743c3396afc04b"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:91672bacaa030f92fc2f43b620d7b337fd9a5af28b0d6ed3f77afc43c4a64b5a"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:787518a6789009c159453da4d6b683f468ef7a65bbde796bcea803ccf191058d"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:da152d8cdcab0e56e4f45eb08b9aea6455845ec83172092f09b0e077ece2cf7a"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:ecb1fa0db7bf4cff9dac752abb19505a233c7f16684c5826d1f11ebd9472b871"},
{file = "kiwisolver-1.4.4.tar.gz", hash = "sha256:d41997519fcba4a1e46eb4a2fe31bc12f0ff957b2b81bac28db24744f333e955"},
]
libclang = [
{file = "libclang-14.0.6-py2.py3-none-macosx_10_9_x86_64.whl", hash = "sha256:8791cf3c3b087c373a6d61e9199da7a541da922c9ddcfed1122090586b996d6e"},
{file = "libclang-14.0.6-py2.py3-none-macosx_11_0_arm64.whl", hash = "sha256:7b06fc76bd1e67c8b04b5719bf2ac5d6a323b289b245dfa9e468561d99538188"},
{file = "libclang-14.0.6-py2.py3-none-manylinux1_x86_64.whl", hash = "sha256:e429853939423f276a25140b0b702442d7da9a09e001c05e48df888336947614"},
{file = "libclang-14.0.6-py2.py3-none-manylinux2010_x86_64.whl", hash = "sha256:206d2789e4450a37d054e63b70451a6fc1873466397443fa13de2b3d4adb2796"},
{file = "libclang-14.0.6-py2.py3-none-manylinux2014_aarch64.whl", hash = "sha256:e2add1703129b2abe066fb1890afa880870a89fd6ab4ec5d2a7a8dc8d271677e"},
{file = "libclang-14.0.6-py2.py3-none-manylinux2014_armv7l.whl", hash = "sha256:5dd3c6fca1b007d308a4114afa8e4e9d32f32b2572520701d45fcc626ac5cd6c"},
{file = "libclang-14.0.6-py2.py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:cfb0e892ebb5dff6bd498ab5778adb8581f26a00fd8347b3c76c989fe2fd04f7"},
{file = "libclang-14.0.6-py2.py3-none-win_amd64.whl", hash = "sha256:ea03c12675151837660cdd5dce65bd89320896ac3421efef43a36678f113ce95"},
{file = "libclang-14.0.6-py2.py3-none-win_arm64.whl", hash = "sha256:2e4303e04517fcd11173cb2e51a7070eed71e16ef45d4e26a82c5e881cac3d27"},
{file = "libclang-14.0.6.tar.gz", hash = "sha256:9052a8284d8846984f6fa826b1d7460a66d3b23a486d782633b42b6e3b418789"},
]
lightgbm = [
{file = "lightgbm-3.3.2-py3-none-macosx_10_14_x86_64.macosx_10_15_x86_64.macosx_11_0_x86_64.whl", hash = "sha256:2e94bd1b3ab29d173102c9c1d80db2e27ad7e43b8ff5a74c5cb7984b37d19f45"},
{file = "lightgbm-3.3.2-py3-none-manylinux1_x86_64.whl", hash = "sha256:f4cba3b4f29336ad7e801cb32d9b948ea4cc5300dda650b78bcdfe36b3e2c4b2"},
{file = "lightgbm-3.3.2-py3-none-manylinux2014_aarch64.whl", hash = "sha256:8e788c56853316fc5d35db726d81bd002c721038c856853952287f68082e0158"},
{file = "lightgbm-3.3.2-py3-none-win_amd64.whl", hash = "sha256:e4f1529cad416066964f9af0efad208787861e9f2181b7f9ee7fc9bacc082d4f"},
{file = "lightgbm-3.3.2.tar.gz", hash = "sha256:5d25d16e77c844c297ece2044df57651139bc3c8ad8c4108916374267ac68b64"},
]
llvmlite = [
{file = "llvmlite-0.36.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:cc0f9b9644b4ab0e4a5edb17f1531d791630c88858220d3cc688d6edf10da100"},
{file = "llvmlite-0.36.0-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:f7918dbac02b1ebbfd7302ad8e8307d7877ab57d782d5f04b70ff9696b53c21b"},
{file = "llvmlite-0.36.0-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:7768658646c418b9b3beccb7044277a608bc8c62b82a85e73c7e5c065e4157c2"},
{file = "llvmlite-0.36.0-cp36-cp36m-win32.whl", hash = "sha256:05f807209a360d39526d98141b6f281b9c7c771c77a4d1fc22002440642c8de2"},
{file = "llvmlite-0.36.0-cp36-cp36m-win_amd64.whl", hash = "sha256:d1fdd63c371626c25ad834e1c6297eb76cf2f093a40dbb401a87b6476ab4e34e"},
{file = "llvmlite-0.36.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:7c4e7066447305d5095d0b0a9cae7b835d2f0fde143456b3124110eab0856426"},
{file = "llvmlite-0.36.0-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:9dad7e4bb042492914292aea3f4172eca84db731f9478250240955aedba95e08"},
{file = "llvmlite-0.36.0-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:1ce5bc0a638d874a08d4222be0a7e48e5df305d094c2ff8dec525ef32b581551"},
{file = "llvmlite-0.36.0-cp37-cp37m-win32.whl", hash = "sha256:dbedff0f6d417b374253a6bab39aa4b5364f1caab30c06ba8726904776fcf1cb"},
{file = "llvmlite-0.36.0-cp37-cp37m-win_amd64.whl", hash = "sha256:3b17fc4b0dd17bd29d7297d054e2915fad535889907c3f65232ee21f483447c5"},
{file = "llvmlite-0.36.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:b3a77e46e6053e2a86e607e87b97651dda81e619febb914824a927bff4e88737"},
{file = "llvmlite-0.36.0-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:048a7c117641c9be87b90005684e64a6f33ea0897ebab1df8a01214a10d6e79a"},
{file = "llvmlite-0.36.0-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:7db4b0eef93125af1c4092c64a3c73c7dc904101117ef53f8d78a1a499b8d5f4"},
{file = "llvmlite-0.36.0-cp38-cp38-win32.whl", hash = "sha256:50b1828bde514b31431b2bba1aa20b387f5625b81ad6e12fede430a04645e47a"},
{file = "llvmlite-0.36.0-cp38-cp38-win_amd64.whl", hash = "sha256:f608bae781b2d343e15e080c546468c5a6f35f57f0446923ea198dd21f23757e"},
{file = "llvmlite-0.36.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6a3abc8a8889aeb06bf9c4a7e5df5bc7bb1aa0aedd91a599813809abeec80b5a"},
{file = "llvmlite-0.36.0-cp39-cp39-manylinux2010_i686.whl", hash = "sha256:705f0323d931684428bb3451549603299bb5e17dd60fb979d67c3807de0debc1"},
{file = "llvmlite-0.36.0-cp39-cp39-manylinux2010_x86_64.whl", hash = "sha256:5a6548b4899facb182145147185e9166c69826fb424895f227e6b7cf924a8da1"},
{file = "llvmlite-0.36.0-cp39-cp39-win32.whl", hash = "sha256:ff52fb9c2be66b95b0e67d56fce11038397e5be1ea410ee53f5f1175fdbb107a"},
{file = "llvmlite-0.36.0-cp39-cp39-win_amd64.whl", hash = "sha256:1dee416ea49fd338c74ec15c0c013e5273b0961528169af06ff90772614f7f6c"},
{file = "llvmlite-0.36.0.tar.gz", hash = "sha256:765128fdf5f149ed0b889ffbe2b05eb1717f8e20a5c87fa2b4018fbcce0fcfc9"},
]
markdown = [
{file = "Markdown-3.4.1-py3-none-any.whl", hash = "sha256:08fb8465cffd03d10b9dd34a5c3fea908e20391a2a90b88d66362cb05beed186"},
{file = "Markdown-3.4.1.tar.gz", hash = "sha256:3b809086bb6efad416156e00a0da66fe47618a5d6918dd688f53f40c8e4cfeff"},
]
markupsafe = [
{file = "MarkupSafe-2.1.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:86b1f75c4e7c2ac2ccdaec2b9022845dbb81880ca318bb7a0a01fbf7813e3812"},
{file = "MarkupSafe-2.1.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:f121a1420d4e173a5d96e47e9a0c0dcff965afdf1626d28de1460815f7c4ee7a"},
{file = "MarkupSafe-2.1.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a49907dd8420c5685cfa064a1335b6754b74541bbb3706c259c02ed65b644b3e"},
{file = "MarkupSafe-2.1.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:10c1bfff05d95783da83491be968e8fe789263689c02724e0c691933c52994f5"},
{file = "MarkupSafe-2.1.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b7bd98b796e2b6553da7225aeb61f447f80a1ca64f41d83612e6139ca5213aa4"},
{file = "MarkupSafe-2.1.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:b09bf97215625a311f669476f44b8b318b075847b49316d3e28c08e41a7a573f"},
{file = "MarkupSafe-2.1.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:694deca8d702d5db21ec83983ce0bb4b26a578e71fbdbd4fdcd387daa90e4d5e"},
{file = "MarkupSafe-2.1.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:efc1913fd2ca4f334418481c7e595c00aad186563bbc1ec76067848c7ca0a933"},
{file = "MarkupSafe-2.1.1-cp310-cp310-win32.whl", hash = "sha256:4a33dea2b688b3190ee12bd7cfa29d39c9ed176bda40bfa11099a3ce5d3a7ac6"},
{file = "MarkupSafe-2.1.1-cp310-cp310-win_amd64.whl", hash = "sha256:dda30ba7e87fbbb7eab1ec9f58678558fd9a6b8b853530e176eabd064da81417"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:671cd1187ed5e62818414afe79ed29da836dde67166a9fac6d435873c44fdd02"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3799351e2336dc91ea70b034983ee71cf2f9533cdff7c14c90ea126bfd95d65a"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e72591e9ecd94d7feb70c1cbd7be7b3ebea3f548870aa91e2732960fa4d57a37"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6fbf47b5d3728c6aea2abb0589b5d30459e369baa772e0f37a0320185e87c980"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:d5ee4f386140395a2c818d149221149c54849dfcfcb9f1debfe07a8b8bd63f9a"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:bcb3ed405ed3222f9904899563d6fc492ff75cce56cba05e32eff40e6acbeaa3"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:e1c0b87e09fa55a220f058d1d49d3fb8df88fbfab58558f1198e08c1e1de842a"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-win32.whl", hash = "sha256:8dc1c72a69aa7e082593c4a203dcf94ddb74bb5c8a731e4e1eb68d031e8498ff"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-win_amd64.whl", hash = "sha256:97a68e6ada378df82bc9f16b800ab77cbf4b2fada0081794318520138c088e4a"},
{file = "MarkupSafe-2.1.1-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:e8c843bbcda3a2f1e3c2ab25913c80a3c5376cd00c6e8c4a86a89a28c8dc5452"},
{file = "MarkupSafe-2.1.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:0212a68688482dc52b2d45013df70d169f542b7394fc744c02a57374a4207003"},
{file = "MarkupSafe-2.1.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8e576a51ad59e4bfaac456023a78f6b5e6e7651dcd383bcc3e18d06f9b55d6d1"},
{file = "MarkupSafe-2.1.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4b9fe39a2ccc108a4accc2676e77da025ce383c108593d65cc909add5c3bd601"},
{file = "MarkupSafe-2.1.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:96e37a3dc86e80bf81758c152fe66dbf60ed5eca3d26305edf01892257049925"},
{file = "MarkupSafe-2.1.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:6d0072fea50feec76a4c418096652f2c3238eaa014b2f94aeb1d56a66b41403f"},
{file = "MarkupSafe-2.1.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:089cf3dbf0cd6c100f02945abeb18484bd1ee57a079aefd52cffd17fba910b88"},
{file = "MarkupSafe-2.1.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:6a074d34ee7a5ce3effbc526b7083ec9731bb3cbf921bbe1d3005d4d2bdb3a63"},
{file = "MarkupSafe-2.1.1-cp38-cp38-win32.whl", hash = "sha256:421be9fbf0ffe9ffd7a378aafebbf6f4602d564d34be190fc19a193232fd12b1"},
{file = "MarkupSafe-2.1.1-cp38-cp38-win_amd64.whl", hash = "sha256:fc7b548b17d238737688817ab67deebb30e8073c95749d55538ed473130ec0c7"},
{file = "MarkupSafe-2.1.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:e04e26803c9c3851c931eac40c695602c6295b8d432cbe78609649ad9bd2da8a"},
{file = "MarkupSafe-2.1.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b87db4360013327109564f0e591bd2a3b318547bcef31b468a92ee504d07ae4f"},
{file = "MarkupSafe-2.1.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:99a2a507ed3ac881b975a2976d59f38c19386d128e7a9a18b7df6fff1fd4c1d6"},
{file = "MarkupSafe-2.1.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:56442863ed2b06d19c37f94d999035e15ee982988920e12a5b4ba29b62ad1f77"},
{file = "MarkupSafe-2.1.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3ce11ee3f23f79dbd06fb3d63e2f6af7b12db1d46932fe7bd8afa259a5996603"},
{file = "MarkupSafe-2.1.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:33b74d289bd2f5e527beadcaa3f401e0df0a89927c1559c8566c066fa4248ab7"},
{file = "MarkupSafe-2.1.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:43093fb83d8343aac0b1baa75516da6092f58f41200907ef92448ecab8825135"},
{file = "MarkupSafe-2.1.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:8e3dcf21f367459434c18e71b2a9532d96547aef8a871872a5bd69a715c15f96"},
{file = "MarkupSafe-2.1.1-cp39-cp39-win32.whl", hash = "sha256:d4306c36ca495956b6d568d276ac11fdd9c30a36f1b6eb928070dc5360b22e1c"},
{file = "MarkupSafe-2.1.1-cp39-cp39-win_amd64.whl", hash = "sha256:46d00d6cfecdde84d40e572d63735ef81423ad31184100411e6e3388d405e247"},
{file = "MarkupSafe-2.1.1.tar.gz", hash = "sha256:7f91197cc9e48f989d12e4e6fbc46495c446636dfc81b9ccf50bb0ec74b91d4b"},
]
matplotlib = [
{file = "matplotlib-3.5.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:a206a1b762b39398efea838f528b3a6d60cdb26fe9d58b48265787e29cd1d693"},
{file = "matplotlib-3.5.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:cd45a6f3e93a780185f70f05cf2a383daed13c3489233faad83e81720f7ede24"},
{file = "matplotlib-3.5.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:d62880e1f60e5a30a2a8484432bcb3a5056969dc97258d7326ad465feb7ae069"},
{file = "matplotlib-3.5.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9ab29589cef03bc88acfa3a1490359000c18186fc30374d8aa77d33cc4a51a4a"},
{file = "matplotlib-3.5.3-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2886cc009f40e2984c083687251821f305d811d38e3df8ded414265e4583f0c5"},
{file = "matplotlib-3.5.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c995f7d9568f18b5db131ab124c64e51b6820a92d10246d4f2b3f3a66698a15b"},
{file = "matplotlib-3.5.3-cp310-cp310-win32.whl", hash = "sha256:6bb93a0492d68461bd458eba878f52fdc8ac7bdb6c4acdfe43dba684787838c2"},
{file = "matplotlib-3.5.3-cp310-cp310-win_amd64.whl", hash = "sha256:2e6d184ebe291b9e8f7e78bbab7987d269c38ea3e062eace1fe7d898042ef804"},
{file = "matplotlib-3.5.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:6ea6aef5c4338e58d8d376068e28f80a24f54e69f09479d1c90b7172bad9f25b"},
{file = "matplotlib-3.5.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:839d47b8ead7ad9669aaacdbc03f29656dc21f0d41a6fea2d473d856c39c8b1c"},
{file = "matplotlib-3.5.3-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:3b4fa56159dc3c7f9250df88f653f085068bcd32dcd38e479bba58909254af7f"},
{file = "matplotlib-3.5.3-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:94ff86af56a3869a4ae26a9637a849effd7643858a1a04dd5ee50e9ab75069a7"},
{file = "matplotlib-3.5.3-cp37-cp37m-win32.whl", hash = "sha256:35a8ad4dddebd51f94c5d24bec689ec0ec66173bf614374a1244c6241c1595e0"},
{file = "matplotlib-3.5.3-cp37-cp37m-win_amd64.whl", hash = "sha256:43e9d3fa077bf0cc95ded13d331d2156f9973dce17c6f0c8b49ccd57af94dbd9"},
{file = "matplotlib-3.5.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:22227c976ad4dc8c5a5057540421f0d8708c6560744ad2ad638d48e2984e1dbc"},
{file = "matplotlib-3.5.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:bf618a825deb6205f015df6dfe6167a5d9b351203b03fab82043ae1d30f16511"},
{file = "matplotlib-3.5.3-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:9befa5954cdbc085e37d974ff6053da269474177921dd61facdad8023c4aeb51"},
{file = "matplotlib-3.5.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f3840c280ebc87a48488a46f760ea1c0c0c83fcf7abbe2e6baf99d033fd35fd8"},
{file = "matplotlib-3.5.3-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:dacddf5bfcec60e3f26ec5c0ae3d0274853a258b6c3fc5ef2f06a8eb23e042be"},
{file = "matplotlib-3.5.3-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:b428076a55fb1c084c76cb93e68006f27d247169f056412607c5c88828d08f88"},
{file = "matplotlib-3.5.3-cp38-cp38-win32.whl", hash = "sha256:874df7505ba820e0400e7091199decf3ff1fde0583652120c50cd60d5820ca9a"},
{file = "matplotlib-3.5.3-cp38-cp38-win_amd64.whl", hash = "sha256:b28de401d928890187c589036857a270a032961411934bdac4cf12dde3d43094"},
{file = "matplotlib-3.5.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:3211ba82b9f1518d346f6309df137b50c3dc4421b4ed4815d1d7eadc617f45a1"},
{file = "matplotlib-3.5.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6fe807e8a22620b4cd95cfbc795ba310dc80151d43b037257250faf0bfcd82bc"},
{file = "matplotlib-3.5.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:5c096363b206a3caf43773abebdbb5a23ea13faef71d701b21a9c27fdcef72f4"},
{file = "matplotlib-3.5.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0bcdfcb0f976e1bac6721d7d457c17be23cf7501f977b6a38f9d38a3762841f7"},
{file = "matplotlib-3.5.3-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:1e64ac9be9da6bfff0a732e62116484b93b02a0b4d4b19934fb4f8e7ad26ad6a"},
{file = "matplotlib-3.5.3-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:73dd93dc35c85dece610cca8358003bf0760d7986f70b223e2306b4ea6d1406b"},
{file = "matplotlib-3.5.3-cp39-cp39-win32.whl", hash = "sha256:879c7e5fce4939c6aa04581dfe08d57eb6102a71f2e202e3314d5fbc072fd5a0"},
{file = "matplotlib-3.5.3-cp39-cp39-win_amd64.whl", hash = "sha256:ab8d26f07fe64f6f6736d635cce7bfd7f625320490ed5bfc347f2cdb4fae0e56"},
{file = "matplotlib-3.5.3-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:99482b83ebf4eb6d5fc6813d7aacdefdd480f0d9c0b52dcf9f1cc3b2c4b3361a"},
{file = "matplotlib-3.5.3-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:f814504e459c68118bf2246a530ed953ebd18213dc20e3da524174d84ed010b2"},
{file = "matplotlib-3.5.3-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:57f1b4e69f438a99bb64d7f2c340db1b096b41ebaa515cf61ea72624279220ce"},
{file = "matplotlib-3.5.3-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:d2484b350bf3d32cae43f85dcfc89b3ed7bd2bcd781ef351f93eb6fb2cc483f9"},
{file = "matplotlib-3.5.3.tar.gz", hash = "sha256:339cac48b80ddbc8bfd05daae0a3a73414651a8596904c2a881cfd1edb65f26c"},
]
matplotlib-inline = [
{file = "matplotlib-inline-0.1.6.tar.gz", hash = "sha256:f887e5f10ba98e8d2b150ddcf4702c1e5f8b3a20005eb0f74bfdbd360ee6f304"},
{file = "matplotlib_inline-0.1.6-py3-none-any.whl", hash = "sha256:f1f41aab5328aa5aaea9b16d083b128102f8712542f819fe7e6a420ff581b311"},
]
mccabe = [
{file = "mccabe-0.6.1-py2.py3-none-any.whl", hash = "sha256:ab8a6258860da4b6677da4bd2fe5dc2c659cff31b3ee4f7f5d64e79735b80d42"},
{file = "mccabe-0.6.1.tar.gz", hash = "sha256:dd8d182285a0fe56bace7f45b5e7d1a6ebcbf524e8f3bd87eb0f125271b8831f"},
]
mistune = [
{file = "mistune-2.0.4-py2.py3-none-any.whl", hash = "sha256:182cc5ee6f8ed1b807de6b7bb50155df7b66495412836b9a74c8fbdfc75fe36d"},
{file = "mistune-2.0.4.tar.gz", hash = "sha256:9ee0a66053e2267aba772c71e06891fa8f1af6d4b01d5e84e267b4570d4d9808"},
]
mpmath = [
{file = "mpmath-1.2.1-py3-none-any.whl", hash = "sha256:604bc21bd22d2322a177c73bdb573994ef76e62edd595d17e00aff24b0667e5c"},
{file = "mpmath-1.2.1.tar.gz", hash = "sha256:79ffb45cf9f4b101a807595bcb3e72e0396202e0b1d25d689134b48c4216a81a"},
]
mypy = [
{file = "mypy-0.971-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:f2899a3cbd394da157194f913a931edfd4be5f274a88041c9dc2d9cdcb1c315c"},
{file = "mypy-0.971-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:98e02d56ebe93981c41211c05adb630d1d26c14195d04d95e49cd97dbc046dc5"},
{file = "mypy-0.971-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:19830b7dba7d5356d3e26e2427a2ec91c994cd92d983142cbd025ebe81d69cf3"},
{file = "mypy-0.971-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:02ef476f6dcb86e6f502ae39a16b93285fef97e7f1ff22932b657d1ef1f28655"},
{file = "mypy-0.971-cp310-cp310-win_amd64.whl", hash = "sha256:25c5750ba5609a0c7550b73a33deb314ecfb559c350bb050b655505e8aed4103"},
{file = "mypy-0.971-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:d3348e7eb2eea2472db611486846742d5d52d1290576de99d59edeb7cd4a42ca"},
{file = "mypy-0.971-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:3fa7a477b9900be9b7dd4bab30a12759e5abe9586574ceb944bc29cddf8f0417"},
{file = "mypy-0.971-cp36-cp36m-win_amd64.whl", hash = "sha256:2ad53cf9c3adc43cf3bea0a7d01a2f2e86db9fe7596dfecb4496a5dda63cbb09"},
{file = "mypy-0.971-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:855048b6feb6dfe09d3353466004490b1872887150c5bb5caad7838b57328cc8"},
{file = "mypy-0.971-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:23488a14a83bca6e54402c2e6435467a4138785df93ec85aeff64c6170077fb0"},
{file = "mypy-0.971-cp37-cp37m-win_amd64.whl", hash = "sha256:4b21e5b1a70dfb972490035128f305c39bc4bc253f34e96a4adf9127cf943eb2"},
{file = "mypy-0.971-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:9796a2ba7b4b538649caa5cecd398d873f4022ed2333ffde58eaf604c4d2cb27"},
{file = "mypy-0.971-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:5a361d92635ad4ada1b1b2d3630fc2f53f2127d51cf2def9db83cba32e47c856"},
{file = "mypy-0.971-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:b793b899f7cf563b1e7044a5c97361196b938e92f0a4343a5d27966a53d2ec71"},
{file = "mypy-0.971-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:d1ea5d12c8e2d266b5fb8c7a5d2e9c0219fedfeb493b7ed60cd350322384ac27"},
{file = "mypy-0.971-cp38-cp38-win_amd64.whl", hash = "sha256:23c7ff43fff4b0df93a186581885c8512bc50fc4d4910e0f838e35d6bb6b5e58"},
{file = "mypy-0.971-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:1f7656b69974a6933e987ee8ffb951d836272d6c0f81d727f1d0e2696074d9e6"},
{file = "mypy-0.971-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:d2022bfadb7a5c2ef410d6a7c9763188afdb7f3533f22a0a32be10d571ee4bbe"},
{file = "mypy-0.971-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:ef943c72a786b0f8d90fd76e9b39ce81fb7171172daf84bf43eaf937e9f220a9"},
{file = "mypy-0.971-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:d744f72eb39f69312bc6c2abf8ff6656973120e2eb3f3ec4f758ed47e414a4bf"},
{file = "mypy-0.971-cp39-cp39-win_amd64.whl", hash = "sha256:77a514ea15d3007d33a9e2157b0ba9c267496acf12a7f2b9b9f8446337aac5b0"},
{file = "mypy-0.971-py3-none-any.whl", hash = "sha256:0d054ef16b071149917085f51f89555a576e2618d5d9dd70bd6eea6410af3ac9"},
{file = "mypy-0.971.tar.gz", hash = "sha256:40b0f21484238269ae6a57200c807d80debc6459d444c0489a102d7c6a75fa56"},
]
mypy-extensions = [
{file = "mypy_extensions-0.4.3-py2.py3-none-any.whl", hash = "sha256:090fedd75945a69ae91ce1303b5824f428daf5a028d2f6ab8a299250a846f15d"},
{file = "mypy_extensions-0.4.3.tar.gz", hash = "sha256:2d82818f5bb3e369420cb3c4060a7970edba416647068eb4c5343488a6c604a8"},
]
nbclient = [
{file = "nbclient-0.6.6-py3-none-any.whl", hash = "sha256:09bae4ea2df79fa6bc50aeb8278d8b79d2036792824337fa6eee834afae17312"},
{file = "nbclient-0.6.6.tar.gz", hash = "sha256:0df76a7961d99a681b4796c74a1f2553b9f998851acc01896dce064ad19a9027"},
]
nbconvert = [
{file = "nbconvert-7.0.0rc3-py3-none-any.whl", hash = "sha256:6774a0bf293d76fa2e886255812d953b750059330c3d7305ad271c02590f1957"},
{file = "nbconvert-7.0.0rc3.tar.gz", hash = "sha256:efb9aae47dad2eae02dd9e7d2cc8add6b7e8f15c6548c0de3363f6d2f8a39146"},
]
nbformat = [
{file = "nbformat-5.4.0-py3-none-any.whl", hash = "sha256:0d6072aaec95dddc39735c144ee8bbc6589c383fb462e4058abc855348152dad"},
{file = "nbformat-5.4.0.tar.gz", hash = "sha256:44ba5ca6acb80c5d5a500f1e5b83ede8cbe364d5a495c4c8cf60aaf1ba656501"},
]
nbsphinx = [
{file = "nbsphinx-0.8.9-py3-none-any.whl", hash = "sha256:a7d743762249ee6bac3350a91eb3717a6e1c75f239f2c2a85491f9aca5a63be1"},
{file = "nbsphinx-0.8.9.tar.gz", hash = "sha256:4ade86b2a41f8f41efd3ea99dae84c3368fe8ba3f837d50c8815ce9424c5994f"},
]
nest-asyncio = [
{file = "nest_asyncio-1.5.5-py3-none-any.whl", hash = "sha256:b98e3ec1b246135e4642eceffa5a6c23a3ab12c82ff816a92c612d68205813b2"},
{file = "nest_asyncio-1.5.5.tar.gz", hash = "sha256:e442291cd942698be619823a17a86a5759eabe1f8613084790de189fe9e16d65"},
]
networkx = [
{file = "networkx-2.8.5-py3-none-any.whl", hash = "sha256:a762f4b385692d9c3a6f2912d058d76d29a827deaedf9e63ed14d397b8030687"},
{file = "networkx-2.8.5.tar.gz", hash = "sha256:15a7b81a360791c458c55a417418ea136c13378cfdc06a2dcdc12bd2f9cf09c1"},
]
notebook = [
{file = "notebook-6.4.12-py3-none-any.whl", hash = "sha256:8c07a3bb7640e371f8a609bdbb2366a1976c6a2589da8ef917f761a61e3ad8b1"},
{file = "notebook-6.4.12.tar.gz", hash = "sha256:6268c9ec9048cff7a45405c990c29ac9ca40b0bc3ec29263d218c5e01f2b4e86"},
]
numba = [
{file = "numba-0.53.1-cp36-cp36m-macosx_10_14_x86_64.whl", hash = "sha256:b23de6b6837c132087d06b8b92d343edb54b885873b824a037967fbd5272ebb7"},
{file = "numba-0.53.1-cp36-cp36m-manylinux2014_i686.whl", hash = "sha256:6545b9e9b0c112b81de7f88a3c787469a357eeff8211e90b8f45ee243d521cc2"},
{file = "numba-0.53.1-cp36-cp36m-manylinux2014_x86_64.whl", hash = "sha256:8fa5c963a43855050a868106a87cd614f3c3f459951c8fc468aec263ef80d063"},
{file = "numba-0.53.1-cp36-cp36m-win32.whl", hash = "sha256:aaa6ebf56afb0b6752607b9f3bf39e99b0efe3c1fa6849698373925ee6838fd7"},
{file = "numba-0.53.1-cp36-cp36m-win_amd64.whl", hash = "sha256:b08b3df38aab769df79ed948d70f0a54a3cdda49d58af65369235c204ec5d0f3"},
{file = "numba-0.53.1-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:bf5c463b62d013e3f709cc8277adf2f4f4d8cc6757293e29c6db121b77e6b760"},
{file = "numba-0.53.1-cp37-cp37m-manylinux2014_i686.whl", hash = "sha256:74df02e73155f669e60dcff07c4eef4a03dbf5b388594db74142ab40914fe4f5"},
{file = "numba-0.53.1-cp37-cp37m-manylinux2014_x86_64.whl", hash = "sha256:5165709bf62f28667e10b9afe6df0ce1037722adab92d620f59cb8bbb8104641"},
{file = "numba-0.53.1-cp37-cp37m-win32.whl", hash = "sha256:2e96958ed2ca7e6d967b2ce29c8da0ca47117e1de28e7c30b2c8c57386506fa5"},
{file = "numba-0.53.1-cp37-cp37m-win_amd64.whl", hash = "sha256:276f9d1674fe08d95872d81b97267c6b39dd830f05eb992608cbede50fcf48a9"},
{file = "numba-0.53.1-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:4c4c8d102512ae472af52c76ad9522da718c392cb59f4cd6785d711fa5051a2a"},
{file = "numba-0.53.1-cp38-cp38-manylinux2014_i686.whl", hash = "sha256:691adbeac17dbdf6ed7c759e9e33a522351f07d2065fe926b264b6b2c15fd89b"},
{file = "numba-0.53.1-cp38-cp38-manylinux2014_x86_64.whl", hash = "sha256:94aab3e0e9e8754116325ce026e1b29ae72443c706a3104cf7f3368dc3012912"},
{file = "numba-0.53.1-cp38-cp38-win32.whl", hash = "sha256:aabeec89bb3e3162136eea492cea7ee8882ddcda2201f05caecdece192c40896"},
{file = "numba-0.53.1-cp38-cp38-win_amd64.whl", hash = "sha256:1895ebd256819ff22256cd6fe24aa8f7470b18acc73e7917e8e93c9ac7f565dc"},
{file = "numba-0.53.1-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:224d197a46a9e602a16780d87636e199e2cdef528caef084a4d8fd8909c2455c"},
{file = "numba-0.53.1-cp39-cp39-manylinux2014_i686.whl", hash = "sha256:aba7acb247a09d7f12bd17a8e28bbb04e8adef9fc20ca29835d03b7894e1b49f"},
{file = "numba-0.53.1-cp39-cp39-manylinux2014_x86_64.whl", hash = "sha256:bd126f1f49da6fc4b3169cf1d96f1c3b3f84a7badd11fe22da344b923a00e744"},
{file = "numba-0.53.1-cp39-cp39-win32.whl", hash = "sha256:0ef9d1f347b251282ae46e5a5033600aa2d0dfa1ee8c16cb8137b8cd6f79e221"},
{file = "numba-0.53.1-cp39-cp39-win_amd64.whl", hash = "sha256:17146885cbe4e89c9d4abd4fcb8886dee06d4591943dc4343500c36ce2fcfa69"},
{file = "numba-0.53.1.tar.gz", hash = "sha256:9cd4e5216acdc66c4e9dab2dfd22ddb5bef151185c070d4a3cd8e78638aff5b0"},
]
numpy = [
{file = "numpy-1.23.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e603ca1fb47b913942f3e660a15e55a9ebca906857edfea476ae5f0fe9b457d5"},
{file = "numpy-1.23.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:633679a472934b1c20a12ed0c9a6c9eb167fbb4cb89031939bfd03dd9dbc62b8"},
{file = "numpy-1.23.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:17e5226674f6ea79e14e3b91bfbc153fdf3ac13f5cc54ee7bc8fdbe820a32da0"},
{file = "numpy-1.23.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bdc02c0235b261925102b1bd586579b7158e9d0d07ecb61148a1799214a4afd5"},
{file = "numpy-1.23.2-cp310-cp310-win32.whl", hash = "sha256:df28dda02c9328e122661f399f7655cdcbcf22ea42daa3650a26bce08a187450"},
{file = "numpy-1.23.2-cp310-cp310-win_amd64.whl", hash = "sha256:8ebf7e194b89bc66b78475bd3624d92980fca4e5bb86dda08d677d786fefc414"},
{file = "numpy-1.23.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:dc76bca1ca98f4b122114435f83f1fcf3c0fe48e4e6f660e07996abf2f53903c"},
{file = "numpy-1.23.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:ecfdd68d334a6b97472ed032b5b37a30d8217c097acfff15e8452c710e775524"},
{file = "numpy-1.23.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5593f67e66dea4e237f5af998d31a43e447786b2154ba1ad833676c788f37cde"},
{file = "numpy-1.23.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ac987b35df8c2a2eab495ee206658117e9ce867acf3ccb376a19e83070e69418"},
{file = "numpy-1.23.2-cp311-cp311-win32.whl", hash = "sha256:d98addfd3c8728ee8b2c49126f3c44c703e2b005d4a95998e2167af176a9e722"},
{file = "numpy-1.23.2-cp311-cp311-win_amd64.whl", hash = "sha256:8ecb818231afe5f0f568c81f12ce50f2b828ff2b27487520d85eb44c71313b9e"},
{file = "numpy-1.23.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:909c56c4d4341ec8315291a105169d8aae732cfb4c250fbc375a1efb7a844f8f"},
{file = "numpy-1.23.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:8247f01c4721479e482cc2f9f7d973f3f47810cbc8c65e38fd1bbd3141cc9842"},
{file = "numpy-1.23.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b8b97a8a87cadcd3f94659b4ef6ec056261fa1e1c3317f4193ac231d4df70215"},
{file = "numpy-1.23.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bd5b7ccae24e3d8501ee5563e82febc1771e73bd268eef82a1e8d2b4d556ae66"},
{file = "numpy-1.23.2-cp38-cp38-win32.whl", hash = "sha256:9b83d48e464f393d46e8dd8171687394d39bc5abfe2978896b77dc2604e8635d"},
{file = "numpy-1.23.2-cp38-cp38-win_amd64.whl", hash = "sha256:dec198619b7dbd6db58603cd256e092bcadef22a796f778bf87f8592b468441d"},
{file = "numpy-1.23.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:4f41f5bf20d9a521f8cab3a34557cd77b6f205ab2116651f12959714494268b0"},
{file = "numpy-1.23.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:806cc25d5c43e240db709875e947076b2826f47c2c340a5a2f36da5bb10c58d6"},
{file = "numpy-1.23.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8f9d84a24889ebb4c641a9b99e54adb8cab50972f0166a3abc14c3b93163f074"},
{file = "numpy-1.23.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c403c81bb8ffb1c993d0165a11493fd4bf1353d258f6997b3ee288b0a48fce77"},
{file = "numpy-1.23.2-cp39-cp39-win32.whl", hash = "sha256:cf8c6aed12a935abf2e290860af8e77b26a042eb7f2582ff83dc7ed5f963340c"},
{file = "numpy-1.23.2-cp39-cp39-win_amd64.whl", hash = "sha256:5e28cd64624dc2354a349152599e55308eb6ca95a13ce6a7d5679ebff2962913"},
{file = "numpy-1.23.2-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:806970e69106556d1dd200e26647e9bee5e2b3f1814f9da104a943e8d548ca38"},
{file = "numpy-1.23.2-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2bd879d3ca4b6f39b7770829f73278b7c5e248c91d538aab1e506c628353e47f"},
{file = "numpy-1.23.2-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:be6b350dfbc7f708d9d853663772a9310783ea58f6035eec649fb9c4371b5389"},
{file = "numpy-1.23.2.tar.gz", hash = "sha256:b78d00e48261fbbd04aa0d7427cf78d18401ee0abd89c7559bbf422e5b1c7d01"},
]
oauthlib = [
{file = "oauthlib-3.2.0-py3-none-any.whl", hash = "sha256:6db33440354787f9b7f3a6dbd4febf5d0f93758354060e802f6c06cb493022fe"},
{file = "oauthlib-3.2.0.tar.gz", hash = "sha256:23a8208d75b902797ea29fd31fa80a15ed9dc2c6c16fe73f5d346f83f6fa27a2"},
]
opt-einsum = [
{file = "opt_einsum-3.3.0-py3-none-any.whl", hash = "sha256:2455e59e3947d3c275477df7f5205b30635e266fe6dc300e3d9f9646bfcea147"},
{file = "opt_einsum-3.3.0.tar.gz", hash = "sha256:59f6475f77bbc37dcf7cd748519c0ec60722e91e63ca114e68821c0c54a46549"},
]
packaging = [
{file = "packaging-21.3-py3-none-any.whl", hash = "sha256:ef103e05f519cdc783ae24ea4e2e0f508a9c99b2d4969652eed6a2e1ea5bd522"},
{file = "packaging-21.3.tar.gz", hash = "sha256:dd47c42927d89ab911e606518907cc2d3a1f38bbd026385970643f9c5b8ecfeb"},
]
pandas = [
{file = "pandas-1.4.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:d51674ed8e2551ef7773820ef5dab9322be0828629f2cbf8d1fc31a0c4fed640"},
{file = "pandas-1.4.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:16ad23db55efcc93fa878f7837267973b61ea85d244fc5ff0ccbcfa5638706c5"},
{file = "pandas-1.4.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:958a0588149190c22cdebbc0797e01972950c927a11a900fe6c2296f207b1d6f"},
{file = "pandas-1.4.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e48fbb64165cda451c06a0f9e4c7a16b534fcabd32546d531b3c240ce2844112"},
{file = "pandas-1.4.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6f803320c9da732cc79210d7e8cc5c8019aad512589c910c66529eb1b1818230"},
{file = "pandas-1.4.3-cp310-cp310-win_amd64.whl", hash = "sha256:2893e923472a5e090c2d5e8db83e8f907364ec048572084c7d10ef93546be6d1"},
{file = "pandas-1.4.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:24ea75f47bbd5574675dae21d51779a4948715416413b30614c1e8b480909f81"},
{file = "pandas-1.4.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:d5ebc990bd34f4ac3c73a2724c2dcc9ee7bf1ce6cf08e87bb25c6ad33507e318"},
{file = "pandas-1.4.3-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:d6c0106415ff1a10c326c49bc5dd9ea8b9897a6ca0c8688eb9c30ddec49535ef"},
{file = "pandas-1.4.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:78b00429161ccb0da252229bcda8010b445c4bf924e721265bec5a6e96a92e92"},
{file = "pandas-1.4.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6dfbf16b1ea4f4d0ee11084d9c026340514d1d30270eaa82a9f1297b6c8ecbf0"},
{file = "pandas-1.4.3-cp38-cp38-win32.whl", hash = "sha256:48350592665ea3cbcd07efc8c12ff12d89be09cd47231c7925e3b8afada9d50d"},
{file = "pandas-1.4.3-cp38-cp38-win_amd64.whl", hash = "sha256:605d572126eb4ab2eadf5c59d5d69f0608df2bf7bcad5c5880a47a20a0699e3e"},
{file = "pandas-1.4.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:a3924692160e3d847e18702bb048dc38e0e13411d2b503fecb1adf0fcf950ba4"},
{file = "pandas-1.4.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:07238a58d7cbc8a004855ade7b75bbd22c0db4b0ffccc721556bab8a095515f6"},
{file = "pandas-1.4.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:755679c49460bd0d2f837ab99f0a26948e68fa0718b7e42afbabd074d945bf84"},
{file = "pandas-1.4.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:41fc406e374590a3d492325b889a2686b31e7a7780bec83db2512988550dadbf"},
{file = "pandas-1.4.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1d9382f72a4f0e93909feece6fef5500e838ce1c355a581b3d8f259839f2ea76"},
{file = "pandas-1.4.3-cp39-cp39-win32.whl", hash = "sha256:0daf876dba6c622154b2e6741f29e87161f844e64f84801554f879d27ba63c0d"},
{file = "pandas-1.4.3-cp39-cp39-win_amd64.whl", hash = "sha256:721a3dd2f06ef942f83a819c0f3f6a648b2830b191a72bbe9451bcd49c3bd42e"},
{file = "pandas-1.4.3.tar.gz", hash = "sha256:2ff7788468e75917574f080cd4681b27e1a7bf36461fe968b49a87b5a54d007c"},
]
pandoc = [
{file = "pandoc-2.2.tar.gz", hash = "sha256:d063ee252f2761010f16cf3a14912ad9245187c24c56f4b1ad9696e104fe6e1e"},
]
pandocfilters = [
{file = "pandocfilters-1.5.0-py2.py3-none-any.whl", hash = "sha256:33aae3f25fd1a026079f5d27bdd52496f0e0803b3469282162bafdcbdf6ef14f"},
{file = "pandocfilters-1.5.0.tar.gz", hash = "sha256:0b679503337d233b4339a817bfc8c50064e2eff681314376a47cb582305a7a38"},
]
parso = [
{file = "parso-0.8.3-py2.py3-none-any.whl", hash = "sha256:c001d4636cd3aecdaf33cbb40aebb59b094be2a74c556778ef5576c175e19e75"},
{file = "parso-0.8.3.tar.gz", hash = "sha256:8c07be290bb59f03588915921e29e8a50002acaf2cdc5fa0e0114f91709fafa0"},
]
pastel = [
{file = "pastel-0.2.1-py2.py3-none-any.whl", hash = "sha256:4349225fcdf6c2bb34d483e523475de5bb04a5c10ef711263452cb37d7dd4364"},
{file = "pastel-0.2.1.tar.gz", hash = "sha256:e6581ac04e973cac858828c6202c1e1e81fee1dc7de7683f3e1ffe0bfd8a573d"},
]
pathspec = [
{file = "pathspec-0.9.0-py2.py3-none-any.whl", hash = "sha256:7d15c4ddb0b5c802d161efc417ec1a2558ea2653c2e8ad9c19098201dc1c993a"},
{file = "pathspec-0.9.0.tar.gz", hash = "sha256:e564499435a2673d586f6b2130bb5b95f04a3ba06f81b8f895b651a3c76aabb1"},
]
patsy = [
{file = "patsy-0.5.2-py2.py3-none-any.whl", hash = "sha256:cc80955ae8c13a7e7c4051eda7b277c8f909f50bc7d73e124bc38e2ee3d95041"},
{file = "patsy-0.5.2.tar.gz", hash = "sha256:5053de7804676aba62783dbb0f23a2b3d74e35e5bfa238b88b7cbf148a38b69d"},
]
pexpect = [
{file = "pexpect-4.8.0-py2.py3-none-any.whl", hash = "sha256:0b48a55dcb3c05f3329815901ea4fc1537514d6ba867a152b581d69ae3710937"},
{file = "pexpect-4.8.0.tar.gz", hash = "sha256:fc65a43959d153d0114afe13997d439c22823a27cefceb5ff35c2178c6784c0c"},
]
pickleshare = [
{file = "pickleshare-0.7.5-py2.py3-none-any.whl", hash = "sha256:9649af414d74d4df115d5d718f82acb59c9d418196b7b4290ed47a12ce62df56"},
{file = "pickleshare-0.7.5.tar.gz", hash = "sha256:87683d47965c1da65cdacaf31c8441d12b8044cdec9aca500cd78fc2c683afca"},
]
pillow = [
{file = "Pillow-9.2.0-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:a9c9bc489f8ab30906d7a85afac4b4944a572a7432e00698a7239f44a44e6efb"},
{file = "Pillow-9.2.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:510cef4a3f401c246cfd8227b300828715dd055463cdca6176c2e4036df8bd4f"},
{file = "Pillow-9.2.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7888310f6214f19ab2b6df90f3f06afa3df7ef7355fc025e78a3044737fab1f5"},
{file = "Pillow-9.2.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:831e648102c82f152e14c1a0938689dbb22480c548c8d4b8b248b3e50967b88c"},
{file = "Pillow-9.2.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1cc1d2451e8a3b4bfdb9caf745b58e6c7a77d2e469159b0d527a4554d73694d1"},
{file = "Pillow-9.2.0-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:136659638f61a251e8ed3b331fc6ccd124590eeff539de57c5f80ef3a9594e58"},
{file = "Pillow-9.2.0-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:6e8c66f70fb539301e064f6478d7453e820d8a2c631da948a23384865cd95544"},
{file = "Pillow-9.2.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:37ff6b522a26d0538b753f0b4e8e164fdada12db6c6f00f62145d732d8a3152e"},
{file = "Pillow-9.2.0-cp310-cp310-win32.whl", hash = "sha256:c79698d4cd9318d9481d89a77e2d3fcaeff5486be641e60a4b49f3d2ecca4e28"},
{file = "Pillow-9.2.0-cp310-cp310-win_amd64.whl", hash = "sha256:254164c57bab4b459f14c64e93df11eff5ded575192c294a0c49270f22c5d93d"},
{file = "Pillow-9.2.0-cp311-cp311-macosx_10_10_universal2.whl", hash = "sha256:408673ed75594933714482501fe97e055a42996087eeca7e5d06e33218d05aa8"},
{file = "Pillow-9.2.0-cp311-cp311-macosx_11_0_universal2.whl", hash = "sha256:727dd1389bc5cb9827cbd1f9d40d2c2a1a0c9b32dd2261db522d22a604a6eec9"},
{file = "Pillow-9.2.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:50dff9cc21826d2977ef2d2a205504034e3a4563ca6f5db739b0d1026658e004"},
{file = "Pillow-9.2.0-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cb6259196a589123d755380b65127ddc60f4c64b21fc3bb46ce3a6ea663659b0"},
{file = "Pillow-9.2.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7b0554af24df2bf96618dac71ddada02420f946be943b181108cac55a7a2dcd4"},
{file = "Pillow-9.2.0-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:15928f824870535c85dbf949c09d6ae7d3d6ac2d6efec80f3227f73eefba741c"},
{file = "Pillow-9.2.0-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:bdd0de2d64688ecae88dd8935012c4a72681e5df632af903a1dca8c5e7aa871a"},
{file = "Pillow-9.2.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:d5b87da55a08acb586bad5c3aa3b86505f559b84f39035b233d5bf844b0834b1"},
{file = "Pillow-9.2.0-cp311-cp311-win32.whl", hash = "sha256:b6d5e92df2b77665e07ddb2e4dbd6d644b78e4c0d2e9272a852627cdba0d75cf"},
{file = "Pillow-9.2.0-cp311-cp311-win_amd64.whl", hash = "sha256:6bf088c1ce160f50ea40764f825ec9b72ed9da25346216b91361eef8ad1b8f8c"},
{file = "Pillow-9.2.0-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:2c58b24e3a63efd22554c676d81b0e57f80e0a7d3a5874a7e14ce90ec40d3069"},
{file = "Pillow-9.2.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:eef7592281f7c174d3d6cbfbb7ee5984a671fcd77e3fc78e973d492e9bf0eb3f"},
{file = "Pillow-9.2.0-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:dcd7b9c7139dc8258d164b55696ecd16c04607f1cc33ba7af86613881ffe4ac8"},
{file = "Pillow-9.2.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a138441e95562b3c078746a22f8fca8ff1c22c014f856278bdbdd89ca36cff1b"},
{file = "Pillow-9.2.0-cp37-cp37m-manylinux_2_28_aarch64.whl", hash = "sha256:93689632949aff41199090eff5474f3990b6823404e45d66a5d44304e9cdc467"},
{file = "Pillow-9.2.0-cp37-cp37m-manylinux_2_28_x86_64.whl", hash = "sha256:f3fac744f9b540148fa7715a435d2283b71f68bfb6d4aae24482a890aed18b59"},
{file = "Pillow-9.2.0-cp37-cp37m-win32.whl", hash = "sha256:fa768eff5f9f958270b081bb33581b4b569faabf8774726b283edb06617101dc"},
{file = "Pillow-9.2.0-cp37-cp37m-win_amd64.whl", hash = "sha256:69bd1a15d7ba3694631e00df8de65a8cb031911ca11f44929c97fe05eb9b6c1d"},
{file = "Pillow-9.2.0-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:030e3460861488e249731c3e7ab59b07c7853838ff3b8e16aac9561bb345da14"},
{file = "Pillow-9.2.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:74a04183e6e64930b667d321524e3c5361094bb4af9083db5c301db64cd341f3"},
{file = "Pillow-9.2.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2d33a11f601213dcd5718109c09a52c2a1c893e7461f0be2d6febc2879ec2402"},
{file = "Pillow-9.2.0-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1fd6f5e3c0e4697fa7eb45b6e93996299f3feee73a3175fa451f49a74d092b9f"},
{file = "Pillow-9.2.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a647c0d4478b995c5e54615a2e5360ccedd2f85e70ab57fbe817ca613d5e63b8"},
{file = "Pillow-9.2.0-cp38-cp38-manylinux_2_28_aarch64.whl", hash = "sha256:4134d3f1ba5f15027ff5c04296f13328fecd46921424084516bdb1b2548e66ff"},
{file = "Pillow-9.2.0-cp38-cp38-manylinux_2_28_x86_64.whl", hash = "sha256:bc431b065722a5ad1dfb4df354fb9333b7a582a5ee39a90e6ffff688d72f27a1"},
{file = "Pillow-9.2.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:1536ad017a9f789430fb6b8be8bf99d2f214c76502becc196c6f2d9a75b01b76"},
{file = "Pillow-9.2.0-cp38-cp38-win32.whl", hash = "sha256:2ad0d4df0f5ef2247e27fc790d5c9b5a0af8ade9ba340db4a73bb1a4a3e5fb4f"},
{file = "Pillow-9.2.0-cp38-cp38-win_amd64.whl", hash = "sha256:ec52c351b35ca269cb1f8069d610fc45c5bd38c3e91f9ab4cbbf0aebc136d9c8"},
{file = "Pillow-9.2.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:0ed2c4ef2451de908c90436d6e8092e13a43992f1860275b4d8082667fbb2ffc"},
{file = "Pillow-9.2.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:4ad2f835e0ad81d1689f1b7e3fbac7b01bb8777d5a985c8962bedee0cc6d43da"},
{file = "Pillow-9.2.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ea98f633d45f7e815db648fd7ff0f19e328302ac36427343e4432c84432e7ff4"},
{file = "Pillow-9.2.0-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7761afe0126d046974a01e030ae7529ed0ca6a196de3ec6937c11df0df1bc91c"},
{file = "Pillow-9.2.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9a54614049a18a2d6fe156e68e188da02a046a4a93cf24f373bffd977e943421"},
{file = "Pillow-9.2.0-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:5aed7dde98403cd91d86a1115c78d8145c83078e864c1de1064f52e6feb61b20"},
{file = "Pillow-9.2.0-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:13b725463f32df1bfeacbf3dd197fb358ae8ebcd8c5548faa75126ea425ccb60"},
{file = "Pillow-9.2.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:808add66ea764ed97d44dda1ac4f2cfec4c1867d9efb16a33d158be79f32b8a4"},
{file = "Pillow-9.2.0-cp39-cp39-win32.whl", hash = "sha256:337a74fd2f291c607d220c793a8135273c4c2ab001b03e601c36766005f36885"},
{file = "Pillow-9.2.0-cp39-cp39-win_amd64.whl", hash = "sha256:fac2d65901fb0fdf20363fbd345c01958a742f2dc62a8dd4495af66e3ff502a4"},
{file = "Pillow-9.2.0-pp37-pypy37_pp73-macosx_10_10_x86_64.whl", hash = "sha256:ad2277b185ebce47a63f4dc6302e30f05762b688f8dc3de55dbae4651872cdf3"},
{file = "Pillow-9.2.0-pp37-pypy37_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7c7b502bc34f6e32ba022b4a209638f9e097d7a9098104ae420eb8186217ebbb"},
{file = "Pillow-9.2.0-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3d1f14f5f691f55e1b47f824ca4fdcb4b19b4323fe43cc7bb105988cad7496be"},
{file = "Pillow-9.2.0-pp37-pypy37_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:dfe4c1fedfde4e2fbc009d5ad420647f7730d719786388b7de0999bf32c0d9fd"},
{file = "Pillow-9.2.0-pp38-pypy38_pp73-macosx_10_10_x86_64.whl", hash = "sha256:f07f1f00e22b231dd3d9b9208692042e29792d6bd4f6639415d2f23158a80013"},
{file = "Pillow-9.2.0-pp38-pypy38_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1802f34298f5ba11d55e5bb09c31997dc0c6aed919658dfdf0198a2fe75d5490"},
{file = "Pillow-9.2.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:17d4cafe22f050b46d983b71c707162d63d796a1235cdf8b9d7a112e97b15bac"},
{file = "Pillow-9.2.0-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:96b5e6874431df16aee0c1ba237574cb6dff1dcb173798faa6a9d8b399a05d0e"},
{file = "Pillow-9.2.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:0030fdbd926fb85844b8b92e2f9449ba89607231d3dd597a21ae72dc7fe26927"},
{file = "Pillow-9.2.0.tar.gz", hash = "sha256:75e636fd3e0fb872693f23ccb8a5ff2cd578801251f3a4f6854c6a5d437d3c04"},
]
pkginfo = [
{file = "pkginfo-1.8.3-py2.py3-none-any.whl", hash = "sha256:848865108ec99d4901b2f7e84058b6e7660aae8ae10164e015a6dcf5b242a594"},
{file = "pkginfo-1.8.3.tar.gz", hash = "sha256:a84da4318dd86f870a9447a8c98340aa06216bfc6f2b7bdc4b8766984ae1867c"},
]
pkgutil-resolve-name = [
{file = "pkgutil_resolve_name-1.3.10-py3-none-any.whl", hash = "sha256:ca27cc078d25c5ad71a9de0a7a330146c4e014c2462d9af19c6b828280649c5e"},
{file = "pkgutil_resolve_name-1.3.10.tar.gz", hash = "sha256:357d6c9e6a755653cfd78893817c0853af365dd51ec97f3d358a819373bbd174"},
]
platformdirs = [
{file = "platformdirs-2.5.2-py3-none-any.whl", hash = "sha256:027d8e83a2d7de06bbac4e5ef7e023c02b863d7ea5d079477e722bb41ab25788"},
{file = "platformdirs-2.5.2.tar.gz", hash = "sha256:58c8abb07dcb441e6ee4b11d8df0ac856038f944ab98b7be6b27b2a3c7feef19"},
]
pluggy = [
{file = "pluggy-1.0.0-py2.py3-none-any.whl", hash = "sha256:74134bbf457f031a36d68416e1509f34bd5ccc019f0bcc952c7b909d06b37bd3"},
{file = "pluggy-1.0.0.tar.gz", hash = "sha256:4224373bacce55f955a878bf9cfa763c1e360858e330072059e10bad68531159"},
]
plumbum = [
{file = "plumbum-1.7.2-py2.py3-none-any.whl", hash = "sha256:0bbf431e31da988405de2fb36c3226f09c0c9cdf69c8480f8997f4b94b7370a1"},
{file = "plumbum-1.7.2.tar.gz", hash = "sha256:0d1bf908076bbd0484d16412479cb97d6843069ee19f99e267e11dd980040523"},
]
ply = [
{file = "ply-3.11-py2.py3-none-any.whl", hash = "sha256:096f9b8350b65ebd2fd1346b12452efe5b9607f7482813ffca50c22722a807ce"},
{file = "ply-3.11.tar.gz", hash = "sha256:00c7c1aaa88358b9c765b6d3000c6eec0ba42abca5351b095321aef446081da3"},
]
poethepoet = [
{file = "poethepoet-0.16.0-py3-none-any.whl", hash = "sha256:87482ea8bba4e5db4abbd8e6360baee73b2ce0f3d5f5e99e81cdfa39d72d118f"},
{file = "poethepoet-0.16.0.tar.gz", hash = "sha256:6455aec39f198be92dbf210a4416e1635119e967204c092b431c8b10024db1d1"},
]
progressbar2 = [
{file = "progressbar2-4.0.0-py2.py3-none-any.whl", hash = "sha256:2562ba3e554433f08e81fb7b786208b19de135f3ca1c5da1787d9b05558e6247"},
{file = "progressbar2-4.0.0.tar.gz", hash = "sha256:14d3165a1781d053ffaa117daf27cc706128d2ec1d2977fdb05b6bb079888013"},
]
prometheus-client = [
{file = "prometheus_client-0.14.1-py3-none-any.whl", hash = "sha256:522fded625282822a89e2773452f42df14b5a8e84a86433e3f8a189c1d54dc01"},
{file = "prometheus_client-0.14.1.tar.gz", hash = "sha256:5459c427624961076277fdc6dc50540e2bacb98eebde99886e59ec55ed92093a"},
]
prompt-toolkit = [
{file = "prompt_toolkit-3.0.30-py3-none-any.whl", hash = "sha256:d8916d3f62a7b67ab353a952ce4ced6a1d2587dfe9ef8ebc30dd7c386751f289"},
{file = "prompt_toolkit-3.0.30.tar.gz", hash = "sha256:859b283c50bde45f5f97829f77a4674d1c1fcd88539364f1b28a37805cfd89c0"},
]
protobuf = [
{file = "protobuf-3.19.4-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:f51d5a9f137f7a2cec2d326a74b6e3fc79d635d69ffe1b036d39fc7d75430d37"},
{file = "protobuf-3.19.4-cp310-cp310-manylinux2014_aarch64.whl", hash = "sha256:09297b7972da685ce269ec52af761743714996b4381c085205914c41fcab59fb"},
{file = "protobuf-3.19.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:072fbc78d705d3edc7ccac58a62c4c8e0cec856987da7df8aca86e647be4e35c"},
{file = "protobuf-3.19.4-cp310-cp310-win32.whl", hash = "sha256:7bb03bc2873a2842e5ebb4801f5c7ff1bfbdf426f85d0172f7644fcda0671ae0"},
{file = "protobuf-3.19.4-cp310-cp310-win_amd64.whl", hash = "sha256:f358aa33e03b7a84e0d91270a4d4d8f5df6921abe99a377828839e8ed0c04e07"},
{file = "protobuf-3.19.4-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:1c91ef4110fdd2c590effb5dca8fdbdcb3bf563eece99287019c4204f53d81a4"},
{file = "protobuf-3.19.4-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c438268eebb8cf039552897d78f402d734a404f1360592fef55297285f7f953f"},
{file = "protobuf-3.19.4-cp36-cp36m-win32.whl", hash = "sha256:835a9c949dc193953c319603b2961c5c8f4327957fe23d914ca80d982665e8ee"},
{file = "protobuf-3.19.4-cp36-cp36m-win_amd64.whl", hash = "sha256:4276cdec4447bd5015453e41bdc0c0c1234eda08420b7c9a18b8d647add51e4b"},
{file = "protobuf-3.19.4-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:6cbc312be5e71869d9d5ea25147cdf652a6781cf4d906497ca7690b7b9b5df13"},
{file = "protobuf-3.19.4-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:54a1473077f3b616779ce31f477351a45b4fef8c9fd7892d6d87e287a38df368"},
{file = "protobuf-3.19.4-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:435bb78b37fc386f9275a7035fe4fb1364484e38980d0dd91bc834a02c5ec909"},
{file = "protobuf-3.19.4-cp37-cp37m-win32.whl", hash = "sha256:16f519de1313f1b7139ad70772e7db515b1420d208cb16c6d7858ea989fc64a9"},
{file = "protobuf-3.19.4-cp37-cp37m-win_amd64.whl", hash = "sha256:cdc076c03381f5c1d9bb1abdcc5503d9ca8b53cf0a9d31a9f6754ec9e6c8af0f"},
{file = "protobuf-3.19.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:69da7d39e39942bd52848438462674c463e23963a1fdaa84d88df7fbd7e749b2"},
{file = "protobuf-3.19.4-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:48ed3877fa43e22bcacc852ca76d4775741f9709dd9575881a373bd3e85e54b2"},
{file = "protobuf-3.19.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bd95d1dfb9c4f4563e6093a9aa19d9c186bf98fa54da5252531cc0d3a07977e7"},
{file = "protobuf-3.19.4-cp38-cp38-win32.whl", hash = "sha256:b38057450a0c566cbd04890a40edf916db890f2818e8682221611d78dc32ae26"},
{file = "protobuf-3.19.4-cp38-cp38-win_amd64.whl", hash = "sha256:7ca7da9c339ca8890d66958f5462beabd611eca6c958691a8fe6eccbd1eb0c6e"},
{file = "protobuf-3.19.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:36cecbabbda242915529b8ff364f2263cd4de7c46bbe361418b5ed859677ba58"},
{file = "protobuf-3.19.4-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:c1068287025f8ea025103e37d62ffd63fec8e9e636246b89c341aeda8a67c934"},
{file = "protobuf-3.19.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:96bd766831596d6014ca88d86dc8fe0fb2e428c0b02432fd9db3943202bf8c5e"},
{file = "protobuf-3.19.4-cp39-cp39-win32.whl", hash = "sha256:84123274d982b9e248a143dadd1b9815049f4477dc783bf84efe6250eb4b836a"},
{file = "protobuf-3.19.4-cp39-cp39-win_amd64.whl", hash = "sha256:3112b58aac3bac9c8be2b60a9daf6b558ca3f7681c130dcdd788ade7c9ffbdca"},
{file = "protobuf-3.19.4-py2.py3-none-any.whl", hash = "sha256:8961c3a78ebfcd000920c9060a262f082f29838682b1f7201889300c1fbe0616"},
{file = "protobuf-3.19.4.tar.gz", hash = "sha256:9df0c10adf3e83015ced42a9a7bd64e13d06c4cf45c340d2c63020ea04499d0a"},
]
psutil = [
{file = "psutil-5.9.1-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:799759d809c31aab5fe4579e50addf84565e71c1dc9f1c31258f159ff70d3f87"},
{file = "psutil-5.9.1-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:9272167b5f5fbfe16945be3db475b3ce8d792386907e673a209da686176552af"},
{file = "psutil-5.9.1-cp27-cp27m-win32.whl", hash = "sha256:0904727e0b0a038830b019551cf3204dd48ef5c6868adc776e06e93d615fc5fc"},
{file = "psutil-5.9.1-cp27-cp27m-win_amd64.whl", hash = "sha256:e7e10454cb1ab62cc6ce776e1c135a64045a11ec4c6d254d3f7689c16eb3efd2"},
{file = "psutil-5.9.1-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:56960b9e8edcca1456f8c86a196f0c3d8e3e361320071c93378d41445ffd28b0"},
{file = "psutil-5.9.1-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:44d1826150d49ffd62035785a9e2c56afcea66e55b43b8b630d7706276e87f22"},
{file = "psutil-5.9.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:c7be9d7f5b0d206f0bbc3794b8e16fb7dbc53ec9e40bbe8787c6f2d38efcf6c9"},
{file = "psutil-5.9.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:abd9246e4cdd5b554a2ddd97c157e292ac11ef3e7af25ac56b08b455c829dca8"},
{file = "psutil-5.9.1-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:29a442e25fab1f4d05e2655bb1b8ab6887981838d22effa2396d584b740194de"},
{file = "psutil-5.9.1-cp310-cp310-win32.whl", hash = "sha256:20b27771b077dcaa0de1de3ad52d22538fe101f9946d6dc7869e6f694f079329"},
{file = "psutil-5.9.1-cp310-cp310-win_amd64.whl", hash = "sha256:58678bbadae12e0db55186dc58f2888839228ac9f41cc7848853539b70490021"},
{file = "psutil-5.9.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:3a76ad658641172d9c6e593de6fe248ddde825b5866464c3b2ee26c35da9d237"},
{file = "psutil-5.9.1-cp36-cp36m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a6a11e48cb93a5fa606306493f439b4aa7c56cb03fc9ace7f6bfa21aaf07c453"},
{file = "psutil-5.9.1-cp36-cp36m-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:068935df39055bf27a29824b95c801c7a5130f118b806eee663cad28dca97685"},
{file = "psutil-5.9.1-cp36-cp36m-win32.whl", hash = "sha256:0f15a19a05f39a09327345bc279c1ba4a8cfb0172cc0d3c7f7d16c813b2e7d36"},
{file = "psutil-5.9.1-cp36-cp36m-win_amd64.whl", hash = "sha256:db417f0865f90bdc07fa30e1aadc69b6f4cad7f86324b02aa842034efe8d8c4d"},
{file = "psutil-5.9.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:91c7ff2a40c373d0cc9121d54bc5f31c4fa09c346528e6a08d1845bce5771ffc"},
{file = "psutil-5.9.1-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fea896b54f3a4ae6f790ac1d017101252c93f6fe075d0e7571543510f11d2676"},
{file = "psutil-5.9.1-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3054e923204b8e9c23a55b23b6df73a8089ae1d075cb0bf711d3e9da1724ded4"},
{file = "psutil-5.9.1-cp37-cp37m-win32.whl", hash = "sha256:d2d006286fbcb60f0b391741f520862e9b69f4019b4d738a2a45728c7e952f1b"},
{file = "psutil-5.9.1-cp37-cp37m-win_amd64.whl", hash = "sha256:b14ee12da9338f5e5b3a3ef7ca58b3cba30f5b66f7662159762932e6d0b8f680"},
{file = "psutil-5.9.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:19f36c16012ba9cfc742604df189f2f28d2720e23ff7d1e81602dbe066be9fd1"},
{file = "psutil-5.9.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:944c4b4b82dc4a1b805329c980f270f170fdc9945464223f2ec8e57563139cf4"},
{file = "psutil-5.9.1-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4b6750a73a9c4a4e689490ccb862d53c7b976a2a35c4e1846d049dcc3f17d83b"},
{file = "psutil-5.9.1-cp38-cp38-win32.whl", hash = "sha256:a8746bfe4e8f659528c5c7e9af5090c5a7d252f32b2e859c584ef7d8efb1e689"},
{file = "psutil-5.9.1-cp38-cp38-win_amd64.whl", hash = "sha256:79c9108d9aa7fa6fba6e668b61b82facc067a6b81517cab34d07a84aa89f3df0"},
{file = "psutil-5.9.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:28976df6c64ddd6320d281128817f32c29b539a52bdae5e192537bc338a9ec81"},
{file = "psutil-5.9.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b88f75005586131276634027f4219d06e0561292be8bd6bc7f2f00bdabd63c4e"},
{file = "psutil-5.9.1-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:645bd4f7bb5b8633803e0b6746ff1628724668681a434482546887d22c7a9537"},
{file = "psutil-5.9.1-cp39-cp39-win32.whl", hash = "sha256:32c52611756096ae91f5d1499fe6c53b86f4a9ada147ee42db4991ba1520e574"},
{file = "psutil-5.9.1-cp39-cp39-win_amd64.whl", hash = "sha256:f65f9a46d984b8cd9b3750c2bdb419b2996895b005aefa6cbaba9a143b1ce2c5"},
{file = "psutil-5.9.1.tar.gz", hash = "sha256:57f1819b5d9e95cdfb0c881a8a5b7d542ed0b7c522d575706a80bedc848c8954"},
]
ptyprocess = [
{file = "ptyprocess-0.7.0-py2.py3-none-any.whl", hash = "sha256:4b41f3967fce3af57cc7e94b888626c18bf37a083e3651ca8feeb66d492fef35"},
{file = "ptyprocess-0.7.0.tar.gz", hash = "sha256:5c5d0a3b48ceee0b48485e0c26037c0acd7d29765ca3fbb5cb3831d347423220"},
]
pure-eval = [
{file = "pure_eval-0.2.2-py3-none-any.whl", hash = "sha256:01eaab343580944bc56080ebe0a674b39ec44a945e6d09ba7db3cb8cec289350"},
{file = "pure_eval-0.2.2.tar.gz", hash = "sha256:2b45320af6dfaa1750f543d714b6d1c520a1688dec6fd24d339063ce0aaa9ac3"},
]
py = [
{file = "py-1.11.0-py2.py3-none-any.whl", hash = "sha256:607c53218732647dff4acdfcd50cb62615cedf612e72d1724fb1a0cc6405b378"},
{file = "py-1.11.0.tar.gz", hash = "sha256:51c75c4126074b472f746a24399ad32f6053d1b34b68d2fa41e558e6f4a98719"},
]
pyasn1 = [
{file = "pyasn1-0.4.8-py2.4.egg", hash = "sha256:fec3e9d8e36808a28efb59b489e4528c10ad0f480e57dcc32b4de5c9d8c9fdf3"},
{file = "pyasn1-0.4.8-py2.5.egg", hash = "sha256:0458773cfe65b153891ac249bcf1b5f8f320b7c2ce462151f8fa74de8934becf"},
{file = "pyasn1-0.4.8-py2.6.egg", hash = "sha256:5c9414dcfede6e441f7e8f81b43b34e834731003427e5b09e4e00e3172a10f00"},
{file = "pyasn1-0.4.8-py2.7.egg", hash = "sha256:6e7545f1a61025a4e58bb336952c5061697da694db1cae97b116e9c46abcf7c8"},
{file = "pyasn1-0.4.8-py2.py3-none-any.whl", hash = "sha256:39c7e2ec30515947ff4e87fb6f456dfc6e84857d34be479c9d4a4ba4bf46aa5d"},
{file = "pyasn1-0.4.8-py3.1.egg", hash = "sha256:78fa6da68ed2727915c4767bb386ab32cdba863caa7dbe473eaae45f9959da86"},
{file = "pyasn1-0.4.8-py3.2.egg", hash = "sha256:08c3c53b75eaa48d71cf8c710312316392ed40899cb34710d092e96745a358b7"},
{file = "pyasn1-0.4.8-py3.3.egg", hash = "sha256:03840c999ba71680a131cfaee6fab142e1ed9bbd9c693e285cc6aca0d555e576"},
{file = "pyasn1-0.4.8-py3.4.egg", hash = "sha256:7ab8a544af125fb704feadb008c99a88805126fb525280b2270bb25cc1d78a12"},
{file = "pyasn1-0.4.8-py3.5.egg", hash = "sha256:e89bf84b5437b532b0803ba5c9a5e054d21fec423a89952a74f87fa2c9b7bce2"},
{file = "pyasn1-0.4.8-py3.6.egg", hash = "sha256:014c0e9976956a08139dc0712ae195324a75e142284d5f87f1a87ee1b068a359"},
{file = "pyasn1-0.4.8-py3.7.egg", hash = "sha256:99fcc3c8d804d1bc6d9a099921e39d827026409a58f2a720dcdb89374ea0c776"},
{file = "pyasn1-0.4.8.tar.gz", hash = "sha256:aef77c9fb94a3ac588e87841208bdec464471d9871bd5050a287cc9a475cd0ba"},
]
pyasn1-modules = [
{file = "pyasn1-modules-0.2.8.tar.gz", hash = "sha256:905f84c712230b2c592c19470d3ca8d552de726050d1d1716282a1f6146be65e"},
{file = "pyasn1_modules-0.2.8-py2.4.egg", hash = "sha256:0fe1b68d1e486a1ed5473f1302bd991c1611d319bba158e98b106ff86e1d7199"},
{file = "pyasn1_modules-0.2.8-py2.5.egg", hash = "sha256:fe0644d9ab041506b62782e92b06b8c68cca799e1a9636ec398675459e031405"},
{file = "pyasn1_modules-0.2.8-py2.6.egg", hash = "sha256:a99324196732f53093a84c4369c996713eb8c89d360a496b599fb1a9c47fc3eb"},
{file = "pyasn1_modules-0.2.8-py2.7.egg", hash = "sha256:0845a5582f6a02bb3e1bde9ecfc4bfcae6ec3210dd270522fee602365430c3f8"},
{file = "pyasn1_modules-0.2.8-py2.py3-none-any.whl", hash = "sha256:a50b808ffeb97cb3601dd25981f6b016cbb3d31fbf57a8b8a87428e6158d0c74"},
{file = "pyasn1_modules-0.2.8-py3.1.egg", hash = "sha256:f39edd8c4ecaa4556e989147ebf219227e2cd2e8a43c7e7fcb1f1c18c5fd6a3d"},
{file = "pyasn1_modules-0.2.8-py3.2.egg", hash = "sha256:b80486a6c77252ea3a3e9b1e360bc9cf28eaac41263d173c032581ad2f20fe45"},
{file = "pyasn1_modules-0.2.8-py3.3.egg", hash = "sha256:65cebbaffc913f4fe9e4808735c95ea22d7a7775646ab690518c056784bc21b4"},
{file = "pyasn1_modules-0.2.8-py3.4.egg", hash = "sha256:15b7c67fabc7fc240d87fb9aabf999cf82311a6d6fb2c70d00d3d0604878c811"},
{file = "pyasn1_modules-0.2.8-py3.5.egg", hash = "sha256:426edb7a5e8879f1ec54a1864f16b882c2837bfd06eee62f2c982315ee2473ed"},
{file = "pyasn1_modules-0.2.8-py3.6.egg", hash = "sha256:cbac4bc38d117f2a49aeedec4407d23e8866ea4ac27ff2cf7fb3e5b570df19e0"},
{file = "pyasn1_modules-0.2.8-py3.7.egg", hash = "sha256:c29a5e5cc7a3f05926aff34e097e84f8589cd790ce0ed41b67aed6857b26aafd"},
]
pycodestyle = [
{file = "pycodestyle-2.8.0-py2.py3-none-any.whl", hash = "sha256:720f8b39dde8b293825e7ff02c475f3077124006db4f440dcbc9a20b76548a20"},
{file = "pycodestyle-2.8.0.tar.gz", hash = "sha256:eddd5847ef438ea1c7870ca7eb78a9d47ce0cdb4851a5523949f2601d0cbbe7f"},
]
pycparser = [
{file = "pycparser-2.21-py2.py3-none-any.whl", hash = "sha256:8ee45429555515e1f6b185e78100aea234072576aa43ab53aefcae078162fca9"},
{file = "pycparser-2.21.tar.gz", hash = "sha256:e644fdec12f7872f86c58ff790da456218b10f863970249516d60a5eaca77206"},
]
pydata-sphinx-theme = [
{file = "pydata_sphinx_theme-0.9.0-py3-none-any.whl", hash = "sha256:b22b442a6d6437e5eaf0a1f057169ffcb31eaa9f10be7d5481a125e735c71c12"},
{file = "pydata_sphinx_theme-0.9.0.tar.gz", hash = "sha256:03598a86915b596f4bf80bef79a4d33276a83e670bf360def699dbb9f99dc57a"},
]
pydot = [
{file = "pydot-1.4.2-py2.py3-none-any.whl", hash = "sha256:66c98190c65b8d2e2382a441b4c0edfdb4f4c025ef9cb9874de478fb0793a451"},
{file = "pydot-1.4.2.tar.gz", hash = "sha256:248081a39bcb56784deb018977e428605c1c758f10897a339fce1dd728ff007d"},
]
pydotplus = [
{file = "pydotplus-2.0.2.tar.gz", hash = "sha256:91e85e9ee9b85d2391ead7d635e3d9c7f5f44fd60a60e59b13e2403fa66505c4"},
]
pyflakes = [
{file = "pyflakes-2.4.0-py2.py3-none-any.whl", hash = "sha256:3bb3a3f256f4b7968c9c788781e4ff07dce46bdf12339dcda61053375426ee2e"},
{file = "pyflakes-2.4.0.tar.gz", hash = "sha256:05a85c2872edf37a4ed30b0cce2f6093e1d0581f8c19d7393122da7e25b2b24c"},
]
pygam = [
{file = "pygam-0.8.0-py2.py3-none-any.whl", hash = "sha256:198bd478700520b7c399cc4bcbc011e46850969c32fb09ef0b7a4bbb14e842a5"},
{file = "pygam-0.8.0.tar.gz", hash = "sha256:5cae01aea8b2fede72a6da0aba1490213af54b3476745666af26bbe700479166"},
]
pygments = [
{file = "Pygments-2.13.0-py3-none-any.whl", hash = "sha256:f643f331ab57ba3c9d89212ee4a2dabc6e94f117cf4eefde99a0574720d14c42"},
{file = "Pygments-2.13.0.tar.gz", hash = "sha256:56a8508ae95f98e2b9bdf93a6be5ae3f7d8af858b43e02c5a2ff083726be40c1"},
]
pygraphviz = [
{file = "pygraphviz-1.9.zip", hash = "sha256:fa18f7c6cea28341a4e466ed0cf05682b0a68288afe8dd7c9426782f7c1ae01c"},
]
pyparsing = [
{file = "pyparsing-3.0.9-py3-none-any.whl", hash = "sha256:5026bae9a10eeaefb61dab2f09052b9f4307d44aee4eda64b309723d8d206bbc"},
{file = "pyparsing-3.0.9.tar.gz", hash = "sha256:2b020ecf7d21b687f219b71ecad3631f644a47f01403fa1d1036b0c6416d70fb"},
]
pyro-api = [
{file = "pyro-api-0.1.2.tar.gz", hash = "sha256:a1b900d9580aa1c2fab3b123ab7ff33413744da7c5f440bd4aadc4d40d14d920"},
{file = "pyro_api-0.1.2-py3-none-any.whl", hash = "sha256:10e0e42e9e4401ce464dab79c870e50dfb4f413d326fa777f3582928ef9caf8f"},
]
pyro-ppl = [
{file = "pyro-ppl-1.8.1.tar.gz", hash = "sha256:d7c049eb2e7485a612b4dd99c24c309cc860c7cbc6b1973387034f5436d1c8d6"},
{file = "pyro_ppl-1.8.1-py3-none-any.whl", hash = "sha256:ca01ab4565eb9a1af4a60dbc481da5cb6f5fe5a72efa19e83638e03683efbca6"},
]
pyrsistent = [
{file = "pyrsistent-0.18.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:df46c854f490f81210870e509818b729db4488e1f30f2a1ce1698b2295a878d1"},
{file = "pyrsistent-0.18.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d45866ececf4a5fff8742c25722da6d4c9e180daa7b405dc0a2a2790d668c26"},
{file = "pyrsistent-0.18.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4ed6784ceac462a7d6fcb7e9b663e93b9a6fb373b7f43594f9ff68875788e01e"},
{file = "pyrsistent-0.18.1-cp310-cp310-win32.whl", hash = "sha256:e4f3149fd5eb9b285d6bfb54d2e5173f6a116fe19172686797c056672689daf6"},
{file = "pyrsistent-0.18.1-cp310-cp310-win_amd64.whl", hash = "sha256:636ce2dc235046ccd3d8c56a7ad54e99d5c1cd0ef07d9ae847306c91d11b5fec"},
{file = "pyrsistent-0.18.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:e92a52c166426efbe0d1ec1332ee9119b6d32fc1f0bbfd55d5c1088070e7fc1b"},
{file = "pyrsistent-0.18.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d7a096646eab884bf8bed965bad63ea327e0d0c38989fc83c5ea7b8a87037bfc"},
{file = "pyrsistent-0.18.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cdfd2c361b8a8e5d9499b9082b501c452ade8bbf42aef97ea04854f4a3f43b22"},
{file = "pyrsistent-0.18.1-cp37-cp37m-win32.whl", hash = "sha256:7ec335fc998faa4febe75cc5268a9eac0478b3f681602c1f27befaf2a1abe1d8"},
{file = "pyrsistent-0.18.1-cp37-cp37m-win_amd64.whl", hash = "sha256:6455fc599df93d1f60e1c5c4fe471499f08d190d57eca040c0ea182301321286"},
{file = "pyrsistent-0.18.1-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:fd8da6d0124efa2f67d86fa70c851022f87c98e205f0594e1fae044e7119a5a6"},
{file = "pyrsistent-0.18.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7bfe2388663fd18bd8ce7db2c91c7400bf3e1a9e8bd7d63bf7e77d39051b85ec"},
{file = "pyrsistent-0.18.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0e3e1fcc45199df76053026a51cc59ab2ea3fc7c094c6627e93b7b44cdae2c8c"},
{file = "pyrsistent-0.18.1-cp38-cp38-win32.whl", hash = "sha256:b568f35ad53a7b07ed9b1b2bae09eb15cdd671a5ba5d2c66caee40dbf91c68ca"},
{file = "pyrsistent-0.18.1-cp38-cp38-win_amd64.whl", hash = "sha256:d1b96547410f76078eaf66d282ddca2e4baae8964364abb4f4dcdde855cd123a"},
{file = "pyrsistent-0.18.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:f87cc2863ef33c709e237d4b5f4502a62a00fab450c9e020892e8e2ede5847f5"},
{file = "pyrsistent-0.18.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6bc66318fb7ee012071b2792024564973ecc80e9522842eb4e17743604b5e045"},
{file = "pyrsistent-0.18.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:914474c9f1d93080338ace89cb2acee74f4f666fb0424896fcfb8d86058bf17c"},
{file = "pyrsistent-0.18.1-cp39-cp39-win32.whl", hash = "sha256:1b34eedd6812bf4d33814fca1b66005805d3640ce53140ab8bbb1e2651b0d9bc"},
{file = "pyrsistent-0.18.1-cp39-cp39-win_amd64.whl", hash = "sha256:e24a828f57e0c337c8d8bb9f6b12f09dfdf0273da25fda9e314f0b684b415a07"},
{file = "pyrsistent-0.18.1.tar.gz", hash = "sha256:d4d61f8b993a7255ba714df3aca52700f8125289f84f704cf80916517c46eb96"},
]
pytest = [
{file = "pytest-7.1.2-py3-none-any.whl", hash = "sha256:13d0e3ccfc2b6e26be000cb6568c832ba67ba32e719443bfe725814d3c42433c"},
{file = "pytest-7.1.2.tar.gz", hash = "sha256:a06a0425453864a270bc45e71f783330a7428defb4230fb5e6a731fde06ecd45"},
]
python-dateutil = [
{file = "python-dateutil-2.8.2.tar.gz", hash = "sha256:0123cacc1627ae19ddf3c27a5de5bd67ee4586fbdd6440d9748f8abb483d3e86"},
{file = "python_dateutil-2.8.2-py2.py3-none-any.whl", hash = "sha256:961d03dc3453ebbc59dbdea9e4e11c5651520a876d0f4db161e8674aae935da9"},
]
python-utils = [
{file = "python-utils-3.3.3.tar.gz", hash = "sha256:3b1c8b706e40e91280eec5fa72ea730880a166cee99afa5555b863d55664478d"},
{file = "python_utils-3.3.3-py2.py3-none-any.whl", hash = "sha256:5cb9cf295018202fb4d6abdd694a33a7f08bc0ce1bf1eab8cce80ab2cde35c07"},
]
pytz = [
{file = "pytz-2022.2.1-py2.py3-none-any.whl", hash = "sha256:220f481bdafa09c3955dfbdddb7b57780e9a94f5127e35456a48589b9e0c0197"},
{file = "pytz-2022.2.1.tar.gz", hash = "sha256:cea221417204f2d1a2aa03ddae3e867921971d0d76f14d87abb4414415bbdcf5"},
]
pytz-deprecation-shim = [
{file = "pytz_deprecation_shim-0.1.0.post0-py2.py3-none-any.whl", hash = "sha256:8314c9692a636c8eb3bda879b9f119e350e93223ae83e70e80c31675a0fdc1a6"},
{file = "pytz_deprecation_shim-0.1.0.post0.tar.gz", hash = "sha256:af097bae1b616dde5c5744441e2ddc69e74dfdcb0c263129610d85b87445a59d"},
]
pywin32 = [
{file = "pywin32-304-cp310-cp310-win32.whl", hash = "sha256:3c7bacf5e24298c86314f03fa20e16558a4e4138fc34615d7de4070c23e65af3"},
{file = "pywin32-304-cp310-cp310-win_amd64.whl", hash = "sha256:4f32145913a2447736dad62495199a8e280a77a0ca662daa2332acf849f0be48"},
{file = "pywin32-304-cp310-cp310-win_arm64.whl", hash = "sha256:d3ee45adff48e0551d1aa60d2ec066fec006083b791f5c3527c40cd8aefac71f"},
{file = "pywin32-304-cp311-cp311-win32.whl", hash = "sha256:30c53d6ce44c12a316a06c153ea74152d3b1342610f1b99d40ba2795e5af0269"},
{file = "pywin32-304-cp311-cp311-win_amd64.whl", hash = "sha256:7ffa0c0fa4ae4077e8b8aa73800540ef8c24530057768c3ac57c609f99a14fd4"},
{file = "pywin32-304-cp311-cp311-win_arm64.whl", hash = "sha256:cbbe34dad39bdbaa2889a424d28752f1b4971939b14b1bb48cbf0182a3bcfc43"},
{file = "pywin32-304-cp36-cp36m-win32.whl", hash = "sha256:be253e7b14bc601718f014d2832e4c18a5b023cbe72db826da63df76b77507a1"},
{file = "pywin32-304-cp36-cp36m-win_amd64.whl", hash = "sha256:de9827c23321dcf43d2f288f09f3b6d772fee11e809015bdae9e69fe13213988"},
{file = "pywin32-304-cp37-cp37m-win32.whl", hash = "sha256:f64c0377cf01b61bd5e76c25e1480ca8ab3b73f0c4add50538d332afdf8f69c5"},
{file = "pywin32-304-cp37-cp37m-win_amd64.whl", hash = "sha256:bb2ea2aa81e96eee6a6b79d87e1d1648d3f8b87f9a64499e0b92b30d141e76df"},
{file = "pywin32-304-cp38-cp38-win32.whl", hash = "sha256:94037b5259701988954931333aafd39cf897e990852115656b014ce72e052e96"},
{file = "pywin32-304-cp38-cp38-win_amd64.whl", hash = "sha256:ead865a2e179b30fb717831f73cf4373401fc62fbc3455a0889a7ddac848f83e"},
{file = "pywin32-304-cp39-cp39-win32.whl", hash = "sha256:25746d841201fd9f96b648a248f731c1dec851c9a08b8e33da8b56148e4c65cc"},
{file = "pywin32-304-cp39-cp39-win_amd64.whl", hash = "sha256:d24a3382f013b21aa24a5cfbfad5a2cd9926610c0affde3e8ab5b3d7dbcf4ac9"},
]
pywin32-ctypes = [
{file = "pywin32-ctypes-0.2.0.tar.gz", hash = "sha256:24ffc3b341d457d48e8922352130cf2644024a4ff09762a2261fd34c36ee5942"},
{file = "pywin32_ctypes-0.2.0-py2.py3-none-any.whl", hash = "sha256:9dc2d991b3479cc2df15930958b674a48a227d5361d413827a4cfd0b5876fc98"},
]
pywinpty = [
{file = "pywinpty-2.0.7-cp310-none-win_amd64.whl", hash = "sha256:d56361ed2bd3395347882a7a4e6246359e745a233e89c91786ab3d9421323c17"},
{file = "pywinpty-2.0.7-cp37-none-win_amd64.whl", hash = "sha256:2d62ede3ed10feb0901b3b4667201766a741b6a2c69f27be623ba9fe9348447b"},
{file = "pywinpty-2.0.7-cp38-none-win_amd64.whl", hash = "sha256:c3b7e6a2f0e5f86e0dc5cb5e4fec7de19adacc6900232e4a48a2ecf04bae447f"},
{file = "pywinpty-2.0.7-cp39-none-win_amd64.whl", hash = "sha256:80a6713a586401c2a19efd2969ffd019eb85f18442611a3880e3d618887d2f84"},
{file = "pywinpty-2.0.7.tar.gz", hash = "sha256:f52b2e51c46dac40708ede1d42577f3ddb9d7cf8acaa36c8e27b3d3b975f4c95"},
]
pyzmq = [
{file = "pyzmq-23.2.1-cp310-cp310-macosx_10_15_universal2.whl", hash = "sha256:a3fd44b5046d247e7f0f1660bcafe7b5fb0db55d0934c05dd57dda9e1f823ce7"},
{file = "pyzmq-23.2.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:2141e6798d5981be04c08996d27962086a1aa3ea536fe9cf7e89817fd4523f86"},
{file = "pyzmq-23.2.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9a39ddb0431a68954bd318b923230fa5b649c9c62b0e8340388820c5f1b15bd2"},
{file = "pyzmq-23.2.1-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e06747014a5ad1b28cebf5bc1ddcdaccfb44e9b441d35e6feb1286c8a72e54be"},
{file = "pyzmq-23.2.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7e0113d70b095339e99bb522fe7294f5ae6a7f3b2b8f52f659469a74b5cc7661"},
{file = "pyzmq-23.2.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:71b32a1e827bdcbf73750e60370d3b07685816ff3d8695f450f0f8c3226503f8"},
{file = "pyzmq-23.2.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:55568a020ad2cae9ae36da6058e7ca332a56df968f601cbdb7cf6efb2a77579a"},
{file = "pyzmq-23.2.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:8c02a0cd39dc01659b3d6cb70bb3a41aebd9885fd78239acdd8d9c91351c4568"},
{file = "pyzmq-23.2.1-cp310-cp310-win32.whl", hash = "sha256:e1fe30bcd5aea5948c42685fad910cd285eacb2518ea4dc6c170d6b535bee95d"},
{file = "pyzmq-23.2.1-cp310-cp310-win_amd64.whl", hash = "sha256:650389bbfca73955b262b2230423d89992f38ec48033307ae80e700eaa2fbb63"},
{file = "pyzmq-23.2.1-cp311-cp311-macosx_10_15_universal2.whl", hash = "sha256:e753eee6d3b93c5354e8ba0a1d62956ee49355f0a36e00570823ef64e66183f5"},
{file = "pyzmq-23.2.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:f07016e3cf088dbfc6e7c5a7b3f540db5c23b0190d539e4fd3e2b5e6beffa4b5"},
{file = "pyzmq-23.2.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4805af9614b0b41b7e57d17673459facf85604dac502a5a9244f6e8c9a4de658"},
{file = "pyzmq-23.2.1-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:39dd252b683816935702825e5bf775df16090619ced9bb4ba68c2d0b6f0c9b18"},
{file = "pyzmq-23.2.1-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:84678153432241bcdca2210cf4ff83560b200556867aea913ffbb960f5d5f340"},
{file = "pyzmq-23.2.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:90d88f9d9a2ae6cfb1dc4ea2d1710cdf6456bc1b9a06dd1bb485c5d298f2517e"},
{file = "pyzmq-23.2.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:794871988c34727c7f79bdfe2546e6854ae1fa2e1feb382784f23a9c6c63ecb3"},
{file = "pyzmq-23.2.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:c56b1a62a1fb87565343c57b6743fd5da6e138b8c6562361d7d9b5ce4acf399a"},
{file = "pyzmq-23.2.1-cp311-cp311-win32.whl", hash = "sha256:c3ebf1668664d20c8f7d468955f18379b7d1f7bc8946b13243d050fa3888c7ff"},
{file = "pyzmq-23.2.1-cp311-cp311-win_amd64.whl", hash = "sha256:ec9803aca9491fd6f0d853d2a6147f19f8deaaa23b1b713d05c5d09e56ea7142"},
{file = "pyzmq-23.2.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:385609812eafd9970c3752c51f2f6c4f224807e3e441bcfd8c8273877d00c8a8"},
{file = "pyzmq-23.2.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b861db65f6b8906c8d6db51dde2448f266f0c66bf28db2c37aea50f58a849859"},
{file = "pyzmq-23.2.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:6b1e79bba24f6df1712e3188d5c32c480d8eda03e8ecff44dc8ecb0805fa62f3"},
{file = "pyzmq-23.2.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:8dc66f109a245653b19df0f44a5af7a3f14cb8ad6c780ead506158a057bd36ce"},
{file = "pyzmq-23.2.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:b815991c7d024bf461f358ad871f2be1135576274caed5749c4828859e40354e"},
{file = "pyzmq-23.2.1-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:29b74774a0bfd3c4d98ac853f0bdca55bd9ec89d5b0def5486407cca54472ef8"},
{file = "pyzmq-23.2.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:4bb798bef181648827019001f6be43e1c48b34b477763b37a8d27d8c06d197b8"},
{file = "pyzmq-23.2.1-cp36-cp36m-win32.whl", hash = "sha256:565bd5ab81f6964fc4067ccf2e00877ad0fa917308975694bbb54378389215f8"},
{file = "pyzmq-23.2.1-cp36-cp36m-win_amd64.whl", hash = "sha256:1f368a82b29f80071781b20663c0fc0c8f6b13273f9f5abe1526af939534f90f"},
{file = "pyzmq-23.2.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:c9cfaf530e6a7ff65f0afe275e99f983f68b54dfb23ea401f0bc297a632766b6"},
{file = "pyzmq-23.2.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5c558b50402fca1acc94329c5d8f12aa429738904a5cfb32b9ed3c61235221bb"},
{file = "pyzmq-23.2.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:20bafc4095eab00f41a510579363a3f5e1f5c69d7ee10f1d88895c4df0259183"},
{file = "pyzmq-23.2.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:f619fd38fc2641abfb53cca719c165182500600b82c695cc548a0f05f764be05"},
{file = "pyzmq-23.2.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:044447ae4b2016a6b8697571fd633f799f860b19b76c4a2fd9b1140d52ee6745"},
{file = "pyzmq-23.2.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:49d30ba7074f469e8167917abf9eb854c6503ae10153034a6d4df33618f1db5f"},
{file = "pyzmq-23.2.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:48400b96788cdaca647021bf19a9cd668384f46e4d9c55cf045bdd17f65299c8"},
{file = "pyzmq-23.2.1-cp37-cp37m-win32.whl", hash = "sha256:8a68f57b7a3f7b6b52ada79876be1efb97c8c0952423436e84d70cc139f16f0d"},
{file = "pyzmq-23.2.1-cp37-cp37m-win_amd64.whl", hash = "sha256:9e5bf6e7239fc9687239de7a283aa8b801ab85371116045b33ae20132a1325d6"},
{file = "pyzmq-23.2.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:ffc6b1623d0f9affb351db4ca61f432dca3628a5ee015f9bf2bfbe9c6836881c"},
{file = "pyzmq-23.2.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:4d6f110c56f7d5b4d64dde3a382ae61b6d48174e30742859d8e971b18b6c9e5c"},
{file = "pyzmq-23.2.1-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:9269fbfe3a4eb2009199120861c4571ef1655fdf6951c3e7f233567c94e8c602"},
{file = "pyzmq-23.2.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:12e62ff0d5223ec09b597ab6d73858b9f64a51221399f3cb08aa495e1dff7935"},
{file = "pyzmq-23.2.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:6fd5d0d50cbcf4bc376861529a907bed026a4cbe8c22a500ff8243231ef02433"},
{file = "pyzmq-23.2.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:9d0ab2936085c85a1fc6f9fd8f89d5235ae99b051e90ec5baa5e73ad44346e1f"},
{file = "pyzmq-23.2.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:022cf5ea7bcaa8a06a03c2706e0ae66904b6138b2155577cd34c64bc7cc637ab"},
{file = "pyzmq-23.2.1-cp38-cp38-win32.whl", hash = "sha256:28dbdb90b2f6b131f8f10e6081012e4e25234213433420e67e0c1162de537113"},
{file = "pyzmq-23.2.1-cp38-cp38-win_amd64.whl", hash = "sha256:10d1910ec381b851aeb024a042a13db178cb1edf125e76a4e9d2548ad103aadb"},
{file = "pyzmq-23.2.1-cp39-cp39-macosx_10_15_universal2.whl", hash = "sha256:99a5a77a10863493a1ee8dece02578c6b32025fb3afff91b40476bc489e81648"},
{file = "pyzmq-23.2.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:aecd6ceaccc4b594e0092d6513ef3f1c0fa678dd89f86bb8ff1a47014b8fca35"},
{file = "pyzmq-23.2.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:415ff62ac525d9add1e3550430a09b9928d2d24a20cc4ce809e67caac41219ab"},
{file = "pyzmq-23.2.1-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:67975a9e1237b9ccc78f457bef17691bbdd2055a9d26e81ee914ba376846d0ce"},
{file = "pyzmq-23.2.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:38e106b64bad744fe469dc3dd864f2764d66399178c1bf39d45294cc7980f14f"},
{file = "pyzmq-23.2.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:8c842109d31a9281d678f668629241c405928afbebd913c48a5a8e7aee61f63d"},
{file = "pyzmq-23.2.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:fefdf9b685fda4141b95ebec975946076a5e0723ff70b037032b2085c5317684"},
{file = "pyzmq-23.2.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:79a87831b47a9f6161ad23fa5e89d5469dc585abc49f90b9b07fea8905ae1234"},
{file = "pyzmq-23.2.1-cp39-cp39-win32.whl", hash = "sha256:342ca3077f47ec2ee41b9825142b614e03e026347167cbc72a59b618c4f6106c"},
{file = "pyzmq-23.2.1-cp39-cp39-win_amd64.whl", hash = "sha256:5e05492be125dce279721d6b54fd1b956546ecc4bcdfcf8e7b4c413bc0874c10"},
{file = "pyzmq-23.2.1-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:07ed8aaf7ffe150af873269690cc654ffeca7491f62aae0f3821baa181f8d5fe"},
{file = "pyzmq-23.2.1-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:ad28ddb40db8e450d7d4bf8a1d765d3f87b63b10e7e9a825a3c130c6371a8c03"},
{file = "pyzmq-23.2.1-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:2f67b63f53c6994d601404fd1a329e6d940ac3dd1d92946a93b2b9c70df67b9f"},
{file = "pyzmq-23.2.1-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c890309296f53f9aa32ffcfc51d805705e1982bffd27c9692a8f1e1b8de279f4"},
{file = "pyzmq-23.2.1-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:624fd38071a817644acdae075b92a23ea0bdd126a58148288e8284d23ec361ce"},
{file = "pyzmq-23.2.1-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:a114992a193577cb62233abf8cb2832970f9975805a64740e325d2f895e7f85a"},
{file = "pyzmq-23.2.1-pp38-pypy38_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:c780acddd2934c6831ff832ecbf78a45a7b62d4eb216480f863854a8b7d54fa7"},
{file = "pyzmq-23.2.1-pp38-pypy38_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:d904f6595acfaaf99a1a61881fea068500c40374d263e5e073aa4005e5f9c28a"},
{file = "pyzmq-23.2.1-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:929d548b74c0f82f7f95b54e4a43f9e4ce2523cfb8a54d3f7141e45652304b2a"},
{file = "pyzmq-23.2.1-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:f392cbea531b7142d1958c0d4a0c9c8d760dc451e5848d8dd3387804d3e3e62c"},
{file = "pyzmq-23.2.1-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:a0f09d85c45f58aa8e715b42f8b26beba68b3b63a8f7049113478aca26efbc30"},
{file = "pyzmq-23.2.1-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:23e708fbfdf4ee3107422b69ca65da1b9f056b431fc0888096a8c1d6cd908e8f"},
{file = "pyzmq-23.2.1-pp39-pypy39_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:35e635343ff367f697d00fa1484262bb68e36bc74c9b80737eac5a1e04c4e1b1"},
{file = "pyzmq-23.2.1-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:efb9e38b2a590282704269585de7eb33bf43dc294cad092e1b172e23d4c217e5"},
{file = "pyzmq-23.2.1-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:407f909c4e8fde62fbdad9ebd448319792258cc0550c2815567a4d9d8d9e6d18"},
{file = "pyzmq-23.2.1.tar.gz", hash = "sha256:2b381aa867ece7d0a82f30a0c7f3d4387b7cf2e0697e33efaa5bed6c5784abcd"},
]
qtconsole = [
{file = "qtconsole-5.3.1-py3-none-any.whl", hash = "sha256:d364592d7ede3257f1e17fcdbfd339c26e2cc638ca4fa4ee56a724e26ea13c81"},
{file = "qtconsole-5.3.1.tar.gz", hash = "sha256:b73723fac43938b684dcb237a88510dc7721c43a726cea8ade179a2927c0a2f3"},
]
qtpy = [
{file = "QtPy-2.2.0-py3-none-any.whl", hash = "sha256:d283cfba378b0dbe36a55b68aea8ee2f86cd6ccf06c023af25bbe705ffbb29e5"},
{file = "QtPy-2.2.0.tar.gz", hash = "sha256:d85f1b121f24a41ad26c55c446e66abdb7c528839f8c4f11f156ec4541903914"},
]
readme-renderer = [
{file = "readme_renderer-36.0-py3-none-any.whl", hash = "sha256:2c37e472ca96755caba6cc58bcbf673a5574bc033385a2ac91d85dfef2799876"},
{file = "readme_renderer-36.0.tar.gz", hash = "sha256:f71aeef9a588fcbed1f4cc001ba611370e94a0cd27c75b1140537618ec78f0a2"},
]
requests = [
{file = "requests-2.28.1-py3-none-any.whl", hash = "sha256:8fefa2a1a1365bf5520aac41836fbee479da67864514bdb821f31ce07ce65349"},
{file = "requests-2.28.1.tar.gz", hash = "sha256:7c5599b102feddaa661c826c56ab4fee28bfd17f5abca1ebbe3e7f19d7c97983"},
]
requests-oauthlib = [
{file = "requests-oauthlib-1.3.1.tar.gz", hash = "sha256:75beac4a47881eeb94d5ea5d6ad31ef88856affe2332b9aafb52c6452ccf0d7a"},
{file = "requests_oauthlib-1.3.1-py2.py3-none-any.whl", hash = "sha256:2577c501a2fb8d05a304c09d090d6e47c306fef15809d102b327cf8364bddab5"},
]
requests-toolbelt = [
{file = "requests-toolbelt-0.9.1.tar.gz", hash = "sha256:968089d4584ad4ad7c171454f0a5c6dac23971e9472521ea3b6d49d610aa6fc0"},
{file = "requests_toolbelt-0.9.1-py2.py3-none-any.whl", hash = "sha256:380606e1d10dc85c3bd47bf5a6095f815ec007be7a8b69c878507068df059e6f"},
]
rfc3986 = [
{file = "rfc3986-2.0.0-py2.py3-none-any.whl", hash = "sha256:50b1502b60e289cb37883f3dfd34532b8873c7de9f49bb546641ce9cbd256ebd"},
{file = "rfc3986-2.0.0.tar.gz", hash = "sha256:97aacf9dbd4bfd829baad6e6309fa6573aaf1be3f6fa735c8ab05e46cecb261c"},
]
rich = [
{file = "rich-12.5.1-py3-none-any.whl", hash = "sha256:2eb4e6894cde1e017976d2975ac210ef515d7548bc595ba20e195fb9628acdeb"},
{file = "rich-12.5.1.tar.gz", hash = "sha256:63a5c5ce3673d3d5fbbf23cd87e11ab84b6b451436f1b7f19ec54b6bc36ed7ca"},
]
rpy2 = [
{file = "rpy2-3.5.3-cp310-cp310-macosx_10_15_x86_64.whl", hash = "sha256:9b553e92db1b6ffa51ea13f7b1b509816fdec2fc833ebe0b402ee81c8a56e491"},
{file = "rpy2-3.5.3-cp37-cp37m-macosx_10_15_x86_64.whl", hash = "sha256:9fb8f0914458c212dd2dd231ba0b7e849ca2b473f53da48553bc7e90976a9717"},
{file = "rpy2-3.5.3-cp38-cp38-macosx_10_15_x86_64.whl", hash = "sha256:b9c8275d7720c312c387d61a38afe22ef3165053ec707a5cb3a06b7ace5bb10c"},
{file = "rpy2-3.5.3-cp39-cp39-macosx_10_15_x86_64.whl", hash = "sha256:e1fda1489dd297fb8f44d33e6042e56a4517109b5606b2ac59bed9a2b7e29850"},
{file = "rpy2-3.5.3.tar.gz", hash = "sha256:53a092d48b44f46428fb30cb3155664d6d2f7af08ebc4c45df98df4c45a42ccb"},
]
rsa = [
{file = "rsa-4.9-py3-none-any.whl", hash = "sha256:90260d9058e514786967344d0ef75fa8727eed8a7d2e43ce9f4bcf1b536174f7"},
{file = "rsa-4.9.tar.gz", hash = "sha256:e38464a49c6c85d7f1351b0126661487a7e0a14a50f1675ec50eb34d4f20ef21"},
]
scikit-learn = [
{file = "scikit-learn-1.0.2.tar.gz", hash = "sha256:b5870959a5484b614f26d31ca4c17524b1b0317522199dc985c3b4256e030767"},
{file = "scikit_learn-1.0.2-cp310-cp310-macosx_10_13_x86_64.whl", hash = "sha256:da3c84694ff693b5b3194d8752ccf935a665b8b5edc33a283122f4273ca3e687"},
{file = "scikit_learn-1.0.2-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:75307d9ea39236cad7eea87143155eea24d48f93f3a2f9389c817f7019f00705"},
{file = "scikit_learn-1.0.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f14517e174bd7332f1cca2c959e704696a5e0ba246eb8763e6c24876d8710049"},
{file = "scikit_learn-1.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d9aac97e57c196206179f674f09bc6bffcd0284e2ba95b7fe0b402ac3f986023"},
{file = "scikit_learn-1.0.2-cp310-cp310-win_amd64.whl", hash = "sha256:d93d4c28370aea8a7cbf6015e8a669cd5d69f856cc2aa44e7a590fb805bb5583"},
{file = "scikit_learn-1.0.2-cp37-cp37m-macosx_10_13_x86_64.whl", hash = "sha256:85260fb430b795d806251dd3bb05e6f48cdc777ac31f2bcf2bc8bbed3270a8f5"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:a053a6a527c87c5c4fa7bf1ab2556fa16d8345cf99b6c5a19030a4a7cd8fd2c0"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:245c9b5a67445f6f044411e16a93a554edc1efdcce94d3fc0bc6a4b9ac30b752"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:158faf30684c92a78e12da19c73feff9641a928a8024b4fa5ec11d583f3d8a87"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:08ef968f6b72033c16c479c966bf37ccd49b06ea91b765e1cc27afefe723920b"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:16455ace947d8d9e5391435c2977178d0ff03a261571e67f627c8fee0f9d431a"},
{file = "scikit_learn-1.0.2-cp37-cp37m-win32.whl", hash = "sha256:2f3b453e0b149898577e301d27e098dfe1a36943f7bb0ad704d1e548efc3b448"},
{file = "scikit_learn-1.0.2-cp37-cp37m-win_amd64.whl", hash = "sha256:46f431ec59dead665e1370314dbebc99ead05e1c0a9df42f22d6a0e00044820f"},
{file = "scikit_learn-1.0.2-cp38-cp38-macosx_10_13_x86_64.whl", hash = "sha256:ff3fa8ea0e09e38677762afc6e14cad77b5e125b0ea70c9bba1992f02c93b028"},
{file = "scikit_learn-1.0.2-cp38-cp38-macosx_12_0_arm64.whl", hash = "sha256:9369b030e155f8188743eb4893ac17a27f81d28a884af460870c7c072f114243"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:7d6b2475f1c23a698b48515217eb26b45a6598c7b1840ba23b3c5acece658dbb"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:285db0352e635b9e3392b0b426bc48c3b485512d3b4ac3c7a44ec2a2ba061e66"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5cb33fe1dc6f73dc19e67b264dbb5dde2a0539b986435fdd78ed978c14654830"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b1391d1a6e2268485a63c3073111fe3ba6ec5145fc957481cfd0652be571226d"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bc3744dabc56b50bec73624aeca02e0def06b03cb287de26836e730659c5d29c"},
{file = "scikit_learn-1.0.2-cp38-cp38-win32.whl", hash = "sha256:a999c9f02ff9570c783069f1074f06fe7386ec65b84c983db5aeb8144356a355"},
{file = "scikit_learn-1.0.2-cp38-cp38-win_amd64.whl", hash = "sha256:7626a34eabbf370a638f32d1a3ad50526844ba58d63e3ab81ba91e2a7c6d037e"},
{file = "scikit_learn-1.0.2-cp39-cp39-macosx_10_13_x86_64.whl", hash = "sha256:a90b60048f9ffdd962d2ad2fb16367a87ac34d76e02550968719eb7b5716fd10"},
{file = "scikit_learn-1.0.2-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:7a93c1292799620df90348800d5ac06f3794c1316ca247525fa31169f6d25855"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:eabceab574f471de0b0eb3f2ecf2eee9f10b3106570481d007ed1c84ebf6d6a1"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:55f2f3a8414e14fbee03782f9fe16cca0f141d639d2b1c1a36779fa069e1db57"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:80095a1e4b93bd33261ef03b9bc86d6db649f988ea4dbcf7110d0cded8d7213d"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fa38a1b9b38ae1fad2863eff5e0d69608567453fdfc850c992e6e47eb764e846"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ff746a69ff2ef25f62b36338c615dd15954ddc3ab8e73530237dd73235e76d62"},
{file = "scikit_learn-1.0.2-cp39-cp39-win32.whl", hash = "sha256:e174242caecb11e4abf169342641778f68e1bfaba80cd18acd6bc84286b9a534"},
{file = "scikit_learn-1.0.2-cp39-cp39-win_amd64.whl", hash = "sha256:b54a62c6e318ddbfa7d22c383466d38d2ee770ebdb5ddb668d56a099f6eaf75f"},
]
scipy = [
{file = "scipy-1.9.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:0424d1bbbfa51d5ddaa16d067fd593863c9f2fb7c6840c32f8a08a8832f8e7a4"},
{file = "scipy-1.9.0-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:8f2232c9d9119ec356240255a715a289b3a33be828c3e4abac11fd052ce15b1e"},
{file = "scipy-1.9.0-cp310-cp310-macosx_12_0_universal2.macosx_10_9_x86_64.whl", hash = "sha256:e2004d2a3c397b26ca78e67c9d320153a1a9b71ae713ad33f4a3a3ab3d79cc65"},
{file = "scipy-1.9.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:45f0d6c0d6e55582d3b8f5c58ad4ca4259a02affb190f89f06c8cc02e21bba81"},
{file = "scipy-1.9.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:79dd7876614fc2869bf5d311ef33962d2066ea888bc66c80fd4fa80f8772e5a9"},
{file = "scipy-1.9.0-cp310-cp310-win_amd64.whl", hash = "sha256:10417935486b320d98536d732a58362e3d37e84add98c251e070c59a6bfe0863"},
{file = "scipy-1.9.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:adb6c438c6ef550e2bb83968e772b9690cb421f2c6073f9c2cb6af15ee538bc9"},
{file = "scipy-1.9.0-cp38-cp38-macosx_12_0_arm64.whl", hash = "sha256:8d541db2d441ef87afb60c4a2addb00c3af281633602a4967e733ef4b7050504"},
{file = "scipy-1.9.0-cp38-cp38-macosx_12_0_universal2.macosx_10_9_x86_64.whl", hash = "sha256:97a1f1e51ea30782d7baa8d0c52f72c3f9f05cb609cf1b990664231c5102bccd"},
{file = "scipy-1.9.0-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:16207622570af10f9e6a2cdc7da7a9660678852477adbcd056b6d1057a036fef"},
{file = "scipy-1.9.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bb687d245b6963673c639f318eea7e875d1ba147a67925586abed3d6f39bb7d8"},
{file = "scipy-1.9.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:73b704c5eea9be811919cae4caacf3180dd9212d9aed08477c1d2ba14900a9de"},
{file = "scipy-1.9.0-cp38-cp38-win32.whl", hash = "sha256:12005d30894e4fe7b247f7233ba0801a341f887b62e2eb99034dd6f2a8a33ad6"},
{file = "scipy-1.9.0-cp38-cp38-win_amd64.whl", hash = "sha256:fc58c3fcb8a724b703ffbc126afdca5a8353d4d5945d5c92db85617e165299e7"},
{file = "scipy-1.9.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:01c2015e132774feefe059d5354055fec6b751d7a7d70ad2cf5ce314e7426e2a"},
{file = "scipy-1.9.0-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:f7c3c578ff556333f3890c2df6c056955d53537bb176698359088108af73a58f"},
{file = "scipy-1.9.0-cp39-cp39-macosx_12_0_universal2.macosx_10_9_x86_64.whl", hash = "sha256:e2ac088ea4aa61115b96b47f5f3d94b3fa29554340b6629cd2bfe6b0521ee33b"},
{file = "scipy-1.9.0-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:5d1b9cf3771fd921f7213b4b886ab2606010343bb36259b544a816044576d69e"},
{file = "scipy-1.9.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d3a326673ac5afa9ef5613a61626b9ec15c8f7222b4ecd1ce0fd8fcba7b83c59"},
{file = "scipy-1.9.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:693b3fe2e7736ce0dbc72b4d933798eb6ca8ce51b8b934e3f547cc06f48b2afb"},
{file = "scipy-1.9.0-cp39-cp39-win32.whl", hash = "sha256:7bad16b91918bf3288089a78a4157e04892ea6475fb7a1d9bcdf32c30c8a3dba"},
{file = "scipy-1.9.0-cp39-cp39-win_amd64.whl", hash = "sha256:bd490f77f35800d5620f4d9af669e372d9a88db1f76ef219e1609cc4ecdd1a24"},
{file = "scipy-1.9.0.tar.gz", hash = "sha256:c0dfd7d2429452e7e94904c6a3af63cbaa3cf51b348bd9d35b42db7e9ad42791"},
]
seaborn = [
{file = "seaborn-0.11.2-py3-none-any.whl", hash = "sha256:85a6baa9b55f81a0623abddc4a26b334653ff4c6b18c418361de19dbba0ef283"},
{file = "seaborn-0.11.2.tar.gz", hash = "sha256:cf45e9286d40826864be0e3c066f98536982baf701a7caa386511792d61ff4f6"},
]
secretstorage = [
{file = "SecretStorage-3.3.3-py3-none-any.whl", hash = "sha256:f356e6628222568e3af06f2eba8df495efa13b3b63081dafd4f7d9a7b7bc9f99"},
{file = "SecretStorage-3.3.3.tar.gz", hash = "sha256:2403533ef369eca6d2ba81718576c5e0f564d5cca1b58f73a8b23e7d4eeebd77"},
]
send2trash = [
{file = "Send2Trash-1.8.0-py3-none-any.whl", hash = "sha256:f20eaadfdb517eaca5ce077640cb261c7d2698385a6a0f072a4a5447fd49fa08"},
{file = "Send2Trash-1.8.0.tar.gz", hash = "sha256:d2c24762fd3759860a0aff155e45871447ea58d2be6bdd39b5c8f966a0c99c2d"},
]
setuptools-scm = [
{file = "setuptools_scm-6.4.2-py3-none-any.whl", hash = "sha256:acea13255093849de7ccb11af9e1fb8bde7067783450cee9ef7a93139bddf6d4"},
{file = "setuptools_scm-6.4.2.tar.gz", hash = "sha256:6833ac65c6ed9711a4d5d2266f8024cfa07c533a0e55f4c12f6eff280a5a9e30"},
]
shap = [
{file = "shap-0.39.0-cp36-cp36m-win_amd64.whl", hash = "sha256:bf9af9b089ef95cb1ac0df80a43f8144aa9095d10f282cb5c19643ff88a6a79d"},
{file = "shap-0.39.0-cp37-cp37m-win_amd64.whl", hash = "sha256:b44f9fbb7349f5406b98b4ec24c672f8fe932606bb7574a8aae2238410c55289"},
{file = "shap-0.39.0-cp38-cp38-win_amd64.whl", hash = "sha256:c0d51b44c15eae1c12e51ed498f898cfc5e12d6be7e0d4f733ce6453f6ec85a4"},
{file = "shap-0.39.0.tar.gz", hash = "sha256:0196a6c12cc98f8b48ce9c5968550902432b80290da6fa7be8655441a1c6251a"},
]
six = [
{file = "six-1.16.0-py2.py3-none-any.whl", hash = "sha256:8abb2f1d86890a2dfb989f9a77cfcfd3e47c2a354b01111771326f8aa26e0254"},
{file = "six-1.16.0.tar.gz", hash = "sha256:1e61c37477a1626458e36f7b1d82aa5c9b094fa4802892072e49de9c60c4c926"},
]
slicer = [
{file = "slicer-0.0.7-py3-none-any.whl", hash = "sha256:0b94faa5251c0f23782c03f7b7eedda91d80144059645f452c4bc80fab875976"},
{file = "slicer-0.0.7.tar.gz", hash = "sha256:f5d5f7b45f98d155b9c0ba6554fa9770c6b26d5793a3e77a1030fb56910ebeec"},
]
snowballstemmer = [
{file = "snowballstemmer-2.2.0-py2.py3-none-any.whl", hash = "sha256:c8e1716e83cc398ae16824e5572ae04e0d9fc2c6b985fb0f900f5f0c96ecba1a"},
{file = "snowballstemmer-2.2.0.tar.gz", hash = "sha256:09b16deb8547d3412ad7b590689584cd0fe25ec8db3be37788be3810cbf19cb1"},
]
soupsieve = [
{file = "soupsieve-2.3.2.post1-py3-none-any.whl", hash = "sha256:3b2503d3c7084a42b1ebd08116e5f81aadfaea95863628c80a3b774a11b7c759"},
{file = "soupsieve-2.3.2.post1.tar.gz", hash = "sha256:fc53893b3da2c33de295667a0e19f078c14bf86544af307354de5fcf12a3f30d"},
]
sparse = [
{file = "sparse-0.13.0-py2.py3-none-any.whl", hash = "sha256:95ed0b649a0663b1488756ad4cf242b0a9bb2c9a25bc752a7c6ca9fbe8258966"},
{file = "sparse-0.13.0.tar.gz", hash = "sha256:685dc994aa770ee1b23f2d5392819c8429f27958771f8dceb2c4fb80210d5915"},
]
sphinx = [
{file = "Sphinx-5.1.1-py3-none-any.whl", hash = "sha256:309a8da80cb6da9f4713438e5b55861877d5d7976b69d87e336733637ea12693"},
{file = "Sphinx-5.1.1.tar.gz", hash = "sha256:ba3224a4e206e1fbdecf98a4fae4992ef9b24b85ebf7b584bb340156eaf08d89"},
]
sphinx-multiversion = []
sphinx-rtd-theme = [
{file = "sphinx_rtd_theme-1.0.0-py2.py3-none-any.whl", hash = "sha256:4d35a56f4508cfee4c4fb604373ede6feae2a306731d533f409ef5c3496fdbd8"},
{file = "sphinx_rtd_theme-1.0.0.tar.gz", hash = "sha256:eec6d497e4c2195fa0e8b2016b337532b8a699a68bcb22a512870e16925c6a5c"},
]
sphinxcontrib-applehelp = [
{file = "sphinxcontrib-applehelp-1.0.2.tar.gz", hash = "sha256:a072735ec80e7675e3f432fcae8610ecf509c5f1869d17e2eecff44389cdbc58"},
{file = "sphinxcontrib_applehelp-1.0.2-py2.py3-none-any.whl", hash = "sha256:806111e5e962be97c29ec4c1e7fe277bfd19e9652fb1a4392105b43e01af885a"},
]
sphinxcontrib-devhelp = [
{file = "sphinxcontrib-devhelp-1.0.2.tar.gz", hash = "sha256:ff7f1afa7b9642e7060379360a67e9c41e8f3121f2ce9164266f61b9f4b338e4"},
{file = "sphinxcontrib_devhelp-1.0.2-py2.py3-none-any.whl", hash = "sha256:8165223f9a335cc1af7ffe1ed31d2871f325254c0423bc0c4c7cd1c1e4734a2e"},
]
sphinxcontrib-googleanalytics = []
sphinxcontrib-htmlhelp = [
{file = "sphinxcontrib-htmlhelp-2.0.0.tar.gz", hash = "sha256:f5f8bb2d0d629f398bf47d0d69c07bc13b65f75a81ad9e2f71a63d4b7a2f6db2"},
{file = "sphinxcontrib_htmlhelp-2.0.0-py2.py3-none-any.whl", hash = "sha256:d412243dfb797ae3ec2b59eca0e52dac12e75a241bf0e4eb861e450d06c6ed07"},
]
sphinxcontrib-jsmath = [
{file = "sphinxcontrib-jsmath-1.0.1.tar.gz", hash = "sha256:a9925e4a4587247ed2191a22df5f6970656cb8ca2bd6284309578f2153e0c4b8"},
{file = "sphinxcontrib_jsmath-1.0.1-py2.py3-none-any.whl", hash = "sha256:2ec2eaebfb78f3f2078e73666b1415417a116cc848b72e5172e596c871103178"},
]
sphinxcontrib-qthelp = [
{file = "sphinxcontrib-qthelp-1.0.3.tar.gz", hash = "sha256:4c33767ee058b70dba89a6fc5c1892c0d57a54be67ddd3e7875a18d14cba5a72"},
{file = "sphinxcontrib_qthelp-1.0.3-py2.py3-none-any.whl", hash = "sha256:bd9fc24bcb748a8d51fd4ecaade681350aa63009a347a8c14e637895444dfab6"},
]
sphinxcontrib-serializinghtml = [
{file = "sphinxcontrib-serializinghtml-1.1.5.tar.gz", hash = "sha256:aa5f6de5dfdf809ef505c4895e51ef5c9eac17d0f287933eb49ec495280b6952"},
{file = "sphinxcontrib_serializinghtml-1.1.5-py2.py3-none-any.whl", hash = "sha256:352a9a00ae864471d3a7ead8d7d79f5fc0b57e8b3f95e9867eb9eb28999b92fd"},
]
stack-data = [
{file = "stack_data-0.4.0-py3-none-any.whl", hash = "sha256:b94fed36d725cfabc6d09ed5886913e35eed9009766a1af1d5941b9da3a94aaa"},
{file = "stack_data-0.4.0.tar.gz", hash = "sha256:a90ae7e260f7d15aefeceb46f0a028d4ccb9eb8856475c53e341945342d41ea7"},
]
statsmodels = [
{file = "statsmodels-0.13.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:3e7ca5b7e678c0bb7a24f5c735d58ac104a50eb61b17c484cce0e221a095560f"},
{file = "statsmodels-0.13.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:066a75d5585378b2df972f81a90b9a3da5e567b7d4833300c1597438c1a35e29"},
{file = "statsmodels-0.13.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f15f38dfc9c5c091662cb619e12322047368c67aef449c7554d9b324a15f7a94"},
{file = "statsmodels-0.13.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5c4ccc6b4744613367e8a233bd952c8a838db8f528f9fe033bda25aa13fc7d08"},
{file = "statsmodels-0.13.2-cp310-cp310-win_amd64.whl", hash = "sha256:855b1cc2a91ab140b9bcf304b1731705805ce73223bf500b988804968554c0ed"},
{file = "statsmodels-0.13.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:b69c9af7606325095f7c40c581957bad9f28775653d41537c1ec4cd1b185ff5b"},
{file = "statsmodels-0.13.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ab31bac0f72b83bca1f217a12ec6f309a56485a50c4a705fbdd63112213d4da4"},
{file = "statsmodels-0.13.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d680b910b57fc0aa87472662cdfe09aae0e21db4bdf19ccd6420fd4dffda892"},
{file = "statsmodels-0.13.2-cp37-cp37m-win32.whl", hash = "sha256:9e9a3f661d372431850d55157d049e079493c97fc06f550d23d8c8c70805cc48"},
{file = "statsmodels-0.13.2-cp37-cp37m-win_amd64.whl", hash = "sha256:c9f6326870c095ef688f072cd476b932aff0906d60193eaa08e93ec23b29ca83"},
{file = "statsmodels-0.13.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:5bc050f25f1ba1221efef9ea01b751c60935ad787fcd4259f4ece986f2da9141"},
{file = "statsmodels-0.13.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:426b1c8ea3918d3d27dbfa38f2bee36cabf41d32163e2cbb3adfb0178b24626a"},
{file = "statsmodels-0.13.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:45b80fac4a63308b1e93fa9dc27a8598930fd5dfd77c850ca077bb850254c6d7"},
{file = "statsmodels-0.13.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:78ee69ec0e0f79f627245c65f8a495b8581c2ea19084aac63941815feb15dcf3"},
{file = "statsmodels-0.13.2-cp38-cp38-win32.whl", hash = "sha256:20483cc30e11aa072b30d307bb80470f86a23ae8fffa51439ca54509d7aa9b05"},
{file = "statsmodels-0.13.2-cp38-cp38-win_amd64.whl", hash = "sha256:bf43051a92231ccb9de95e4b6d22d3b15e499ee5ee9bff0a20e6b6ad293e34cb"},
{file = "statsmodels-0.13.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6bf0dfed5f5edb59b5922b295392cd276463b10a5e730f7e57ee4ff2d8e9a87e"},
{file = "statsmodels-0.13.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:a403b559c5586dab7ac0fc9e754c737b017c96cce0ddd66ff9094764cdaf293d"},
{file = "statsmodels-0.13.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9f23554dd025ea354ce072ba32bfaa840d2b856372e5734290e181d27a1f9e0c"},
{file = "statsmodels-0.13.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:815f4df713e3eb6f40ae175c71f2a70d32f9219b5b4d23d4e0faab1171ba93ba"},
{file = "statsmodels-0.13.2-cp39-cp39-win32.whl", hash = "sha256:461c82ab2265fa8457b96afc23ef3ca19f42eb070436e0241b57e58a38863901"},
{file = "statsmodels-0.13.2-cp39-cp39-win_amd64.whl", hash = "sha256:39daab5a8a9332c8ea83d6464d065080c9ba65f236daf6a64aa18f64ef776fad"},
{file = "statsmodels-0.13.2.tar.gz", hash = "sha256:77dc292c9939c036a476f1770f9d08976b05437daa229928da73231147cde7d4"},
]
sympy = [
{file = "sympy-1.10.1-py3-none-any.whl", hash = "sha256:df75d738930f6fe9ebe7034e59d56698f29e85f443f743e51e47df0caccc2130"},
{file = "sympy-1.10.1.tar.gz", hash = "sha256:5939eeffdf9e152172601463626c022a2c27e75cf6278de8d401d50c9d58787b"},
]
tensorboard = [
{file = "tensorboard-2.9.1-py3-none-any.whl", hash = "sha256:baa727f791776f9e5841d347127720ceed4bbd59c36b40604b95fb2ae6029276"},
]
tensorboard-data-server = [
{file = "tensorboard_data_server-0.6.1-py3-none-any.whl", hash = "sha256:809fe9887682d35c1f7d1f54f0f40f98bb1f771b14265b453ca051e2ce58fca7"},
{file = "tensorboard_data_server-0.6.1-py3-none-macosx_10_9_x86_64.whl", hash = "sha256:fa8cef9be4fcae2f2363c88176638baf2da19c5ec90addb49b1cde05c95c88ee"},
{file = "tensorboard_data_server-0.6.1-py3-none-manylinux2010_x86_64.whl", hash = "sha256:d8237580755e58eff68d1f3abefb5b1e39ae5c8b127cc40920f9c4fb33f4b98a"},
]
tensorboard-plugin-wit = [
{file = "tensorboard_plugin_wit-1.8.1-py3-none-any.whl", hash = "sha256:ff26bdd583d155aa951ee3b152b3d0cffae8005dc697f72b44a8e8c2a77a8cbe"},
]
tensorflow = [
{file = "tensorflow-2.9.1-cp310-cp310-macosx_10_14_x86_64.whl", hash = "sha256:2c77edeb4a9d542032ddac93f732e3dd3d1c4d15c8795e4e43166ed46503582d"},
{file = "tensorflow-2.9.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:add2f4e55fd7209a5cb01d815364adc9827184d74a83c2f1616c72be2e85c8b7"},
{file = "tensorflow-2.9.1-cp310-cp310-win_amd64.whl", hash = "sha256:8be7d7f6091ac5af09c19eb178bbbd51346794684851a7bd57826d2d1cfa220e"},
{file = "tensorflow-2.9.1-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:b1f7674220b19ab5d39d9313f9c60278cef56052614600cba192700c658e502c"},
{file = "tensorflow-2.9.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:488a7a964298b92278cca593b7f687abd1c8f51982a08dc6ded5efd9457bf477"},
{file = "tensorflow-2.9.1-cp37-cp37m-win_amd64.whl", hash = "sha256:abbdcfa7c14974f9bafa8f51214c037639a8991eef3dbcafc506e608a673c54c"},
{file = "tensorflow-2.9.1-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:df048d216f90c9fcbda22bdd03420b0e0e6e7ee98b8b91f221afef46b7664635"},
{file = "tensorflow-2.9.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:adc1b80ec32337ba1720fadbf64242aae06a7ee1c7aea6a1a0cffe1968c89b25"},
{file = "tensorflow-2.9.1-cp38-cp38-win_amd64.whl", hash = "sha256:a27bffb91f0aef6ba2a30f6507932d9c6801af56cfee8442767f9f3f69bb37a6"},
{file = "tensorflow-2.9.1-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:81ad045d39cf4360d5e6183cdda0990661302cca502917a12e54177989b340b9"},
{file = "tensorflow-2.9.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:212a4d97fb6dedff257703d7a2ce50b0b8ec23d955938083363aa7bc526918cc"},
{file = "tensorflow-2.9.1-cp39-cp39-win_amd64.whl", hash = "sha256:1c2c6557f6bd7e0d3f1063e27346e4f1c4353db2b7751ac7451033b335fa4b7c"},
]
tensorflow-estimator = [
{file = "tensorflow_estimator-2.9.0-py2.py3-none-any.whl", hash = "sha256:e9762bb302f51bc1eb2f35d19f0190a6a2d809d754d5def788c4328fe3746744"},
]
tensorflow-io-gcs-filesystem = [
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp310-cp310-macosx_10_14_x86_64.whl", hash = "sha256:4222a9d0c0ddeca2fd2bfd70f5ed149346f5ba12ffe65d817d8e18393341d8e2"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:5457eeef1f0f5f294225808b2290a251a2e4639ec66db9d32aa4ae62e807d7e8"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4c71cebb26ce10e6e48dc46e6fc0acef5329b01f75a5e76c7defb77175bf97f7"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp310-cp310-win_amd64.whl", hash = "sha256:1c165595c7a67668b44c7ffb9746ffb351c630940d9cca7f2b31f8adf7a36b94"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:c20e1f95b904f43ac86fdb251f222be2c3e7026e9ddbde2a3b6a456f26a83944"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:1cccdc12ec304a7ab3e6f85919ba5a77c2bf751b3d0f9e62196ee7df11a8136a"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:94645cac4449dd2ccc40327c23d0256cf4e96597e5a55116a91076e9dc96023e"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp37-cp37m-win_amd64.whl", hash = "sha256:ce0d7eaaebfcb5fdcff161af0e8a4b94d5dc346299111c08373d66058011a16d"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:e9569dadd79b2d4b28dbe5be47c378a884414a85c89eaeae6115bcba4f3cbb96"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:84a463e228cde296fc63672902a2eceac9fec5f8ae7605e9f18824db591e7f5c"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:531214e48ef64a96f565550b283e75cf0119abff14048a11a25453b47ec5b61c"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp38-cp38-win_amd64.whl", hash = "sha256:44b28c9c6a9e25774a53ec2e85ed4d0b5c4db3a7d3a4011ade94fa9ee636393c"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:09f9df13737e2b4d92b73653509281d77732ef9a90a1ebef824511ce5431eb0a"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:c71438e6459f52462b95f98ab17b20cd1a269a1efe837e4df426a0b79359f3b7"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bd285595afe03740553710ccdbd1397d69a8e48d758c731c0de1f1c5a71a9fe5"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp39-cp39-win_amd64.whl", hash = "sha256:2940b4ab6848ef5ec34dc3c140b5ae9eba0da13453da839c30ebe3461a6eb51d"},
]
termcolor = [
{file = "termcolor-1.1.0.tar.gz", hash = "sha256:1d6d69ce66211143803fbc56652b41d73b4a400a2891d7bf7a1cdf4c02de613b"},
]
terminado = [
{file = "terminado-0.15.0-py3-none-any.whl", hash = "sha256:0d5f126fbfdb5887b25ae7d9d07b0d716b1cc0ccaacc71c1f3c14d228e065197"},
{file = "terminado-0.15.0.tar.gz", hash = "sha256:ab4eeedccfcc1e6134bfee86106af90852c69d602884ea3a1e8ca6d4486e9bfe"},
]
threadpoolctl = [
{file = "threadpoolctl-3.1.0-py3-none-any.whl", hash = "sha256:8b99adda265feb6773280df41eece7b2e6561b772d21ffd52e372f999024907b"},
{file = "threadpoolctl-3.1.0.tar.gz", hash = "sha256:a335baacfaa4400ae1f0d8e3a58d6674d2f8828e3716bb2802c44955ad391380"},
]
tinycss2 = [
{file = "tinycss2-1.1.1-py3-none-any.whl", hash = "sha256:fe794ceaadfe3cf3e686b22155d0da5780dd0e273471a51846d0a02bc204fec8"},
{file = "tinycss2-1.1.1.tar.gz", hash = "sha256:b2e44dd8883c360c35dd0d1b5aad0b610e5156c2cb3b33434634e539ead9d8bf"},
]
tomli = [
{file = "tomli-2.0.1-py3-none-any.whl", hash = "sha256:939de3e7a6161af0c887ef91b7d41a53e7c5a1ca976325f429cb46ea9bc30ecc"},
{file = "tomli-2.0.1.tar.gz", hash = "sha256:de526c12914f0c550d15924c62d72abc48d6fe7364aa87328337a31007fe8a4f"},
]
torch = [
{file = "torch-1.12.1-cp310-cp310-manylinux1_x86_64.whl", hash = "sha256:9c038662db894a23e49e385df13d47b2a777ffd56d9bcd5b832593fab0a7e286"},
{file = "torch-1.12.1-cp310-cp310-manylinux2014_aarch64.whl", hash = "sha256:4e1b9c14cf13fd2ab8d769529050629a0e68a6fc5cb8e84b4a3cc1dd8c4fe541"},
{file = "torch-1.12.1-cp310-cp310-win_amd64.whl", hash = "sha256:e9c8f4a311ac29fc7e8e955cfb7733deb5dbe1bdaabf5d4af2765695824b7e0d"},
{file = "torch-1.12.1-cp310-none-macosx_10_9_x86_64.whl", hash = "sha256:976c3f997cea38ee91a0dd3c3a42322785414748d1761ef926b789dfa97c6134"},
{file = "torch-1.12.1-cp310-none-macosx_11_0_arm64.whl", hash = "sha256:68104e4715a55c4bb29a85c6a8d57d820e0757da363be1ba680fa8cc5be17b52"},
{file = "torch-1.12.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:743784ccea0dc8f2a3fe6a536bec8c4763bd82c1352f314937cb4008d4805de1"},
{file = "torch-1.12.1-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:b5dbcca369800ce99ba7ae6dee3466607a66958afca3b740690d88168752abcf"},
{file = "torch-1.12.1-cp37-cp37m-win_amd64.whl", hash = "sha256:f3b52a634e62821e747e872084ab32fbcb01b7fa7dbb7471b6218279f02a178a"},
{file = "torch-1.12.1-cp37-none-macosx_10_9_x86_64.whl", hash = "sha256:8a34a2fbbaa07c921e1b203f59d3d6e00ed379f2b384445773bd14e328a5b6c8"},
{file = "torch-1.12.1-cp37-none-macosx_11_0_arm64.whl", hash = "sha256:42f639501928caabb9d1d55ddd17f07cd694de146686c24489ab8c615c2871f2"},
{file = "torch-1.12.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:0b44601ec56f7dd44ad8afc00846051162ef9c26a8579dda0a02194327f2d55e"},
{file = "torch-1.12.1-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:cd26d8c5640c3a28c526d41ccdca14cf1cbca0d0f2e14e8263a7ac17194ab1d2"},
{file = "torch-1.12.1-cp38-cp38-win_amd64.whl", hash = "sha256:42e115dab26f60c29e298559dbec88444175528b729ae994ec4c65d56fe267dd"},
{file = "torch-1.12.1-cp38-none-macosx_10_9_x86_64.whl", hash = "sha256:a8320ba9ad87e80ca5a6a016e46ada4d1ba0c54626e135d99b2129a4541c509d"},
{file = "torch-1.12.1-cp38-none-macosx_11_0_arm64.whl", hash = "sha256:03e31c37711db2cd201e02de5826de875529e45a55631d317aadce2f1ed45aa8"},
{file = "torch-1.12.1-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:9b356aea223772cd754edb4d9ecf2a025909b8615a7668ac7d5130f86e7ec421"},
{file = "torch-1.12.1-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:6cf6f54b43c0c30335428195589bd00e764a6d27f3b9ba637aaa8c11aaf93073"},
{file = "torch-1.12.1-cp39-cp39-win_amd64.whl", hash = "sha256:f00c721f489089dc6364a01fd84906348fe02243d0af737f944fddb36003400d"},
{file = "torch-1.12.1-cp39-none-macosx_10_9_x86_64.whl", hash = "sha256:bfec2843daa654f04fda23ba823af03e7b6f7650a873cdb726752d0e3718dada"},
{file = "torch-1.12.1-cp39-none-macosx_11_0_arm64.whl", hash = "sha256:69fe2cae7c39ccadd65a123793d30e0db881f1c1927945519c5c17323131437e"},
]
tornado = [
{file = "tornado-6.2-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:20f638fd8cc85f3cbae3c732326e96addff0a15e22d80f049e00121651e82e72"},
{file = "tornado-6.2-cp37-abi3-macosx_10_9_x86_64.whl", hash = "sha256:87dcafae3e884462f90c90ecc200defe5e580a7fbbb4365eda7c7c1eb809ebc9"},
{file = "tornado-6.2-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ba09ef14ca9893954244fd872798b4ccb2367c165946ce2dd7376aebdde8e3ac"},
{file = "tornado-6.2-cp37-abi3-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b8150f721c101abdef99073bf66d3903e292d851bee51910839831caba341a75"},
{file = "tornado-6.2-cp37-abi3-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d3a2f5999215a3a06a4fc218026cd84c61b8b2b40ac5296a6db1f1451ef04c1e"},
{file = "tornado-6.2-cp37-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:5f8c52d219d4995388119af7ccaa0bcec289535747620116a58d830e7c25d8a8"},
{file = "tornado-6.2-cp37-abi3-musllinux_1_1_i686.whl", hash = "sha256:6fdfabffd8dfcb6cf887428849d30cf19a3ea34c2c248461e1f7d718ad30b66b"},
{file = "tornado-6.2-cp37-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:1d54d13ab8414ed44de07efecb97d4ef7c39f7438cf5e976ccd356bebb1b5fca"},
{file = "tornado-6.2-cp37-abi3-win32.whl", hash = "sha256:5c87076709343557ef8032934ce5f637dbb552efa7b21d08e89ae7619ed0eb23"},
{file = "tornado-6.2-cp37-abi3-win_amd64.whl", hash = "sha256:e5f923aa6a47e133d1cf87d60700889d7eae68988704e20c75fb2d65677a8e4b"},
{file = "tornado-6.2.tar.gz", hash = "sha256:9b630419bde84ec666bfd7ea0a4cb2a8a651c2d5cccdbdd1972a0c859dfc3c13"},
]
tqdm = [
{file = "tqdm-4.64.0-py2.py3-none-any.whl", hash = "sha256:74a2cdefe14d11442cedf3ba4e21a3b84ff9a2dbdc6cfae2c34addb2a14a5ea6"},
{file = "tqdm-4.64.0.tar.gz", hash = "sha256:40be55d30e200777a307a7585aee69e4eabb46b4ec6a4b4a5f2d9f11e7d5408d"},
]
traitlets = [
{file = "traitlets-5.3.0-py3-none-any.whl", hash = "sha256:65fa18961659635933100db8ca120ef6220555286949774b9cfc106f941d1c7a"},
{file = "traitlets-5.3.0.tar.gz", hash = "sha256:0bb9f1f9f017aa8ec187d8b1b2a7a6626a2a1d877116baba52a129bfa124f8e2"},
]
twine = [
{file = "twine-4.0.1-py3-none-any.whl", hash = "sha256:42026c18e394eac3e06693ee52010baa5313e4811d5a11050e7d48436cf41b9e"},
{file = "twine-4.0.1.tar.gz", hash = "sha256:96b1cf12f7ae611a4a40b6ae8e9570215daff0611828f5fe1f37a16255ab24a0"},
]
typing-extensions = [
{file = "typing_extensions-4.3.0-py3-none-any.whl", hash = "sha256:25642c956049920a5aa49edcdd6ab1e06d7e5d467fc00e0506c44ac86fbfca02"},
{file = "typing_extensions-4.3.0.tar.gz", hash = "sha256:e6d2677a32f47fc7eb2795db1dd15c1f34eff616bcaf2cfb5e997f854fa1c4a6"},
]
tzdata = [
{file = "tzdata-2022.2-py2.py3-none-any.whl", hash = "sha256:c3119520447d68ef3eb8187a55a4f44fa455f30eb1b4238fa5691ba094f2b05b"},
{file = "tzdata-2022.2.tar.gz", hash = "sha256:21f4f0d7241572efa7f7a4fdabb052e61b55dc48274e6842697ccdf5253e5451"},
]
tzlocal = [
{file = "tzlocal-4.2-py3-none-any.whl", hash = "sha256:89885494684c929d9191c57aa27502afc87a579be5cdd3225c77c463ea043745"},
{file = "tzlocal-4.2.tar.gz", hash = "sha256:ee5842fa3a795f023514ac2d801c4a81d1743bbe642e3940143326b3a00addd7"},
]
urllib3 = [
{file = "urllib3-1.26.11-py2.py3-none-any.whl", hash = "sha256:c33ccba33c819596124764c23a97d25f32b28433ba0dedeb77d873a38722c9bc"},
{file = "urllib3-1.26.11.tar.gz", hash = "sha256:ea6e8fb210b19d950fab93b60c9009226c63a28808bc8386e05301e25883ac0a"},
]
wcwidth = [
{file = "wcwidth-0.2.5-py2.py3-none-any.whl", hash = "sha256:beb4802a9cebb9144e99086eff703a642a13d6a0052920003a230f3294bbe784"},
{file = "wcwidth-0.2.5.tar.gz", hash = "sha256:c4d647b99872929fdb7bdcaa4fbe7f01413ed3d98077df798530e5b04f116c83"},
]
webencodings = [
{file = "webencodings-0.5.1-py2.py3-none-any.whl", hash = "sha256:a0af1213f3c2226497a97e2b3aa01a7e4bee4f403f95be16fc9acd2947514a78"},
{file = "webencodings-0.5.1.tar.gz", hash = "sha256:b36a1c245f2d304965eb4e0a82848379241dc04b865afcc4aab16748587e1923"},
]
werkzeug = [
{file = "Werkzeug-2.2.2-py3-none-any.whl", hash = "sha256:f979ab81f58d7318e064e99c4506445d60135ac5cd2e177a2de0089bfd4c9bd5"},
{file = "Werkzeug-2.2.2.tar.gz", hash = "sha256:7ea2d48322cc7c0f8b3a215ed73eabd7b5d75d0b50e31ab006286ccff9e00b8f"},
]
widgetsnbextension = [
{file = "widgetsnbextension-4.0.2-py3-none-any.whl", hash = "sha256:966bd61443926b6adcc0abef9f499c48bdeda181c333b0f49842d7385d440579"},
{file = "widgetsnbextension-4.0.2.tar.gz", hash = "sha256:07f0e8582f920b24316cef16490f1aeb498f2c875d48980540e5c5dbf0ff5e2d"},
]
wrapt = [
{file = "wrapt-1.14.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:1b376b3f4896e7930f1f772ac4b064ac12598d1c38d04907e696cc4d794b43d3"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:903500616422a40a98a5a3c4ff4ed9d0066f3b4c951fa286018ecdf0750194ef"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:5a9a0d155deafd9448baff28c08e150d9b24ff010e899311ddd63c45c2445e28"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:ddaea91abf8b0d13443f6dac52e89051a5063c7d014710dcb4d4abb2ff811a59"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:36f582d0c6bc99d5f39cd3ac2a9062e57f3cf606ade29a0a0d6b323462f4dd87"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:7ef58fb89674095bfc57c4069e95d7a31cfdc0939e2a579882ac7d55aadfd2a1"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:e2f83e18fe2f4c9e7db597e988f72712c0c3676d337d8b101f6758107c42425b"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:ee2b1b1769f6707a8a445162ea16dddf74285c3964f605877a20e38545c3c462"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:833b58d5d0b7e5b9832869f039203389ac7cbf01765639c7309fd50ef619e0b1"},
{file = "wrapt-1.14.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:80bb5c256f1415f747011dc3604b59bc1f91c6e7150bd7db03b19170ee06b320"},
{file = "wrapt-1.14.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:07f7a7d0f388028b2df1d916e94bbb40624c59b48ecc6cbc232546706fac74c2"},
{file = "wrapt-1.14.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:02b41b633c6261feff8ddd8d11c711df6842aba629fdd3da10249a53211a72c4"},
{file = "wrapt-1.14.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2fe803deacd09a233e4762a1adcea5db5d31e6be577a43352936179d14d90069"},
{file = "wrapt-1.14.1-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:257fd78c513e0fb5cdbe058c27a0624c9884e735bbd131935fd49e9fe719d310"},
{file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:4fcc4649dc762cddacd193e6b55bc02edca674067f5f98166d7713b193932b7f"},
{file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:11871514607b15cfeb87c547a49bca19fde402f32e2b1c24a632506c0a756656"},
{file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:8ad85f7f4e20964db4daadcab70b47ab05c7c1cf2a7c1e51087bfaa83831854c"},
{file = "wrapt-1.14.1-cp310-cp310-win32.whl", hash = "sha256:a9a52172be0b5aae932bef82a79ec0a0ce87288c7d132946d645eba03f0ad8a8"},
{file = "wrapt-1.14.1-cp310-cp310-win_amd64.whl", hash = "sha256:6d323e1554b3d22cfc03cd3243b5bb815a51f5249fdcbb86fda4bf62bab9e164"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:43ca3bbbe97af00f49efb06e352eae40434ca9d915906f77def219b88e85d907"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:6b1a564e6cb69922c7fe3a678b9f9a3c54e72b469875aa8018f18b4d1dd1adf3"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux2010_i686.whl", hash = "sha256:00b6d4ea20a906c0ca56d84f93065b398ab74b927a7a3dbd470f6fc503f95dc3"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux2010_x86_64.whl", hash = "sha256:a85d2b46be66a71bedde836d9e41859879cc54a2a04fad1191eb50c2066f6e9d"},
{file = "wrapt-1.14.1-cp35-cp35m-win32.whl", hash = "sha256:dbcda74c67263139358f4d188ae5faae95c30929281bc6866d00573783c422b7"},
{file = "wrapt-1.14.1-cp35-cp35m-win_amd64.whl", hash = "sha256:b21bb4c09ffabfa0e85e3a6b623e19b80e7acd709b9f91452b8297ace2a8ab00"},
{file = "wrapt-1.14.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:9e0fd32e0148dd5dea6af5fee42beb949098564cc23211a88d799e434255a1f4"},
{file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9736af4641846491aedb3c3f56b9bc5568d92b0692303b5a305301a95dfd38b1"},
{file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5b02d65b9ccf0ef6c34cba6cf5bf2aab1bb2f49c6090bafeecc9cd81ad4ea1c1"},
{file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:21ac0156c4b089b330b7666db40feee30a5d52634cc4560e1905d6529a3897ff"},
{file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:9f3e6f9e05148ff90002b884fbc2a86bd303ae847e472f44ecc06c2cd2fcdb2d"},
{file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:6e743de5e9c3d1b7185870f480587b75b1cb604832e380d64f9504a0535912d1"},
{file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:d79d7d5dc8a32b7093e81e97dad755127ff77bcc899e845f41bf71747af0c569"},
{file = "wrapt-1.14.1-cp36-cp36m-win32.whl", hash = "sha256:81b19725065dcb43df02b37e03278c011a09e49757287dca60c5aecdd5a0b8ed"},
{file = "wrapt-1.14.1-cp36-cp36m-win_amd64.whl", hash = "sha256:b014c23646a467558be7da3d6b9fa409b2c567d2110599b7cf9a0c5992b3b471"},
{file = "wrapt-1.14.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:88bd7b6bd70a5b6803c1abf6bca012f7ed963e58c68d76ee20b9d751c74a3248"},
{file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b5901a312f4d14c59918c221323068fad0540e34324925c8475263841dbdfe68"},
{file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d77c85fedff92cf788face9bfa3ebaa364448ebb1d765302e9af11bf449ca36d"},
{file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8d649d616e5c6a678b26d15ece345354f7c2286acd6db868e65fcc5ff7c24a77"},
{file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:7d2872609603cb35ca513d7404a94d6d608fc13211563571117046c9d2bcc3d7"},
{file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:ee6acae74a2b91865910eef5e7de37dc6895ad96fa23603d1d27ea69df545015"},
{file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:2b39d38039a1fdad98c87279b48bc5dce2c0ca0d73483b12cb72aa9609278e8a"},
{file = "wrapt-1.14.1-cp37-cp37m-win32.whl", hash = "sha256:60db23fa423575eeb65ea430cee741acb7c26a1365d103f7b0f6ec412b893853"},
{file = "wrapt-1.14.1-cp37-cp37m-win_amd64.whl", hash = "sha256:709fe01086a55cf79d20f741f39325018f4df051ef39fe921b1ebe780a66184c"},
{file = "wrapt-1.14.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:8c0ce1e99116d5ab21355d8ebe53d9460366704ea38ae4d9f6933188f327b456"},
{file = "wrapt-1.14.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:e3fb1677c720409d5f671e39bac6c9e0e422584e5f518bfd50aa4cbbea02433f"},
{file = "wrapt-1.14.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:642c2e7a804fcf18c222e1060df25fc210b9c58db7c91416fb055897fc27e8cc"},
{file = "wrapt-1.14.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7b7c050ae976e286906dd3f26009e117eb000fb2cf3533398c5ad9ccc86867b1"},
{file = "wrapt-1.14.1-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ef3f72c9666bba2bab70d2a8b79f2c6d2c1a42a7f7e2b0ec83bb2f9e383950af"},
{file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:01c205616a89d09827986bc4e859bcabd64f5a0662a7fe95e0d359424e0e071b"},
{file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:5a0f54ce2c092aaf439813735584b9537cad479575a09892b8352fea5e988dc0"},
{file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:2cf71233a0ed05ccdabe209c606fe0bac7379fdcf687f39b944420d2a09fdb57"},
{file = "wrapt-1.14.1-cp38-cp38-win32.whl", hash = "sha256:aa31fdcc33fef9eb2552cbcbfee7773d5a6792c137b359e82879c101e98584c5"},
{file = "wrapt-1.14.1-cp38-cp38-win_amd64.whl", hash = "sha256:d1967f46ea8f2db647c786e78d8cc7e4313dbd1b0aca360592d8027b8508e24d"},
{file = "wrapt-1.14.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:3232822c7d98d23895ccc443bbdf57c7412c5a65996c30442ebe6ed3df335383"},
{file = "wrapt-1.14.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:988635d122aaf2bdcef9e795435662bcd65b02f4f4c1ae37fbee7401c440b3a7"},
{file = "wrapt-1.14.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9cca3c2cdadb362116235fdbd411735de4328c61425b0aa9f872fd76d02c4e86"},
{file = "wrapt-1.14.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d52a25136894c63de15a35bc0bdc5adb4b0e173b9c0d07a2be9d3ca64a332735"},
{file = "wrapt-1.14.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:40e7bc81c9e2b2734ea4bc1aceb8a8f0ceaac7c5299bc5d69e37c44d9081d43b"},
{file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:b9b7a708dd92306328117d8c4b62e2194d00c365f18eff11a9b53c6f923b01e3"},
{file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:6a9a25751acb379b466ff6be78a315e2b439d4c94c1e99cb7266d40a537995d3"},
{file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:34aa51c45f28ba7f12accd624225e2b1e5a3a45206aa191f6f9aac931d9d56fe"},
{file = "wrapt-1.14.1-cp39-cp39-win32.whl", hash = "sha256:dee0ce50c6a2dd9056c20db781e9c1cfd33e77d2d569f5d1d9321c641bb903d5"},
{file = "wrapt-1.14.1-cp39-cp39-win_amd64.whl", hash = "sha256:dee60e1de1898bde3b238f18340eec6148986da0455d8ba7848d50470a7a32fb"},
{file = "wrapt-1.14.1.tar.gz", hash = "sha256:380a85cf89e0e69b7cfbe2ea9f765f004ff419f34194018a6827ac0e3edfed4d"},
]
xgboost = [
{file = "xgboost-1.6.1-py3-none-macosx_10_15_x86_64.macosx_11_0_x86_64.macosx_12_0_x86_64.whl", hash = "sha256:2b3d4ee105f8434873b40edc511330b8276bf3a8d9d42fb0319973079df30b07"},
{file = "xgboost-1.6.1-py3-none-macosx_12_0_arm64.whl", hash = "sha256:bd3e59a5490e010004106d8ea1d07aa8e048be51a0974fca6b4f00988f087ab8"},
{file = "xgboost-1.6.1-py3-none-manylinux2014_aarch64.whl", hash = "sha256:bbf16af8bf72e8761fcf69fdb5798bd5add6ecb48049198551b13c1d7abeabb5"},
{file = "xgboost-1.6.1-py3-none-manylinux2014_x86_64.whl", hash = "sha256:6207c77f611b54d9f056edede819ead03f0235615675f88030ff9fe10d359551"},
{file = "xgboost-1.6.1-py3-none-win_amd64.whl", hash = "sha256:3adcb7e4ccf774d5e0128c01e5c381303c3799910ab0f2e996160fe3cd23b7fc"},
{file = "xgboost-1.6.1.tar.gz", hash = "sha256:24072028656f3428e7b8aabf77340ece057f273e41f7f85d67ccaefb7454bb18"},
]
zipp = [
{file = "zipp-3.8.1-py3-none-any.whl", hash = "sha256:47c40d7fe183a6f21403a199b3e4192cca5774656965b0a4988ad2f8feb5f009"},
{file = "zipp-3.8.1.tar.gz", hash = "sha256:05b45f1ee8f807d0cc928485ca40a07cb491cf092ff587c0df9cb1fd154848d2"},
]
| darthtrevino | f947d98ffbc9d96b91be9bcfc43cf142fc2c0cd5 | eadb99cbe104d89dda62f73e331515df484e8d5a | Are all the other changes in this file a consequence of updating mistune? | emrekiciman | 340 |
py-why/dowhy | 602 | Add dependency to resolve security alert | https://github.com/py-why/dowhy/security/dependabot/1
* Updates nbconvert to 7.0rc3 & mistune to a safe version
* Move documentation-generation dependencies into devDependencies area
* Run `poetry update` for dependency refresh | null | 2022-08-13 01:40:09+00:00 | 2022-08-19 21:01:53+00:00 | poetry.lock | [[package]]
name = "absl-py"
version = "1.2.0"
description = "Abseil Python Common Libraries, see https://github.com/abseil/abseil-py."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "alabaster"
version = "0.7.12"
description = "A configurable sidebar-enabled Sphinx theme"
category = "main"
optional = true
python-versions = "*"
[[package]]
name = "appnope"
version = "0.1.3"
description = "Disable App Nap on macOS >= 10.9"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "argon2-cffi"
version = "21.3.0"
description = "The secure Argon2 password hashing algorithm."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
argon2-cffi-bindings = "*"
[package.extras]
dev = ["pre-commit", "cogapp", "tomli", "coverage[toml] (>=5.0.2)", "hypothesis", "pytest", "sphinx", "sphinx-notfound-page", "furo"]
docs = ["sphinx", "sphinx-notfound-page", "furo"]
tests = ["coverage[toml] (>=5.0.2)", "hypothesis", "pytest"]
[[package]]
name = "argon2-cffi-bindings"
version = "21.2.0"
description = "Low-level CFFI bindings for Argon2"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cffi = ">=1.0.1"
[package.extras]
dev = ["pytest", "cogapp", "pre-commit", "wheel"]
tests = ["pytest"]
[[package]]
name = "asttokens"
version = "2.0.7"
description = "Annotate AST trees with source code positions"
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
six = "*"
[package.extras]
test = ["pytest", "astroid (<=2.5.3)"]
[[package]]
name = "astunparse"
version = "1.6.3"
description = "An AST unparser for Python"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
six = ">=1.6.1,<2.0"
[[package]]
name = "atomicwrites"
version = "1.4.1"
description = "Atomic file writes."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "attrs"
version = "22.1.0"
description = "Classes Without Boilerplate"
category = "main"
optional = false
python-versions = ">=3.5"
[package.extras]
dev = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "mypy (>=0.900,!=0.940)", "pytest-mypy-plugins", "zope.interface", "furo", "sphinx", "sphinx-notfound-page", "pre-commit", "cloudpickle"]
docs = ["furo", "sphinx", "zope.interface", "sphinx-notfound-page"]
tests = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "mypy (>=0.900,!=0.940)", "pytest-mypy-plugins", "zope.interface", "cloudpickle"]
tests_no_zope = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "mypy (>=0.900,!=0.940)", "pytest-mypy-plugins", "cloudpickle"]
[[package]]
name = "babel"
version = "2.10.3"
description = "Internationalization utilities"
category = "main"
optional = true
python-versions = ">=3.6"
[package.dependencies]
pytz = ">=2015.7"
[[package]]
name = "backcall"
version = "0.2.0"
description = "Specifications for callback functions passed in to an API"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "backports.zoneinfo"
version = "0.2.1"
description = "Backport of the standard library zoneinfo module"
category = "main"
optional = true
python-versions = ">=3.6"
[package.extras]
tzdata = ["tzdata"]
[[package]]
name = "beautifulsoup4"
version = "4.11.1"
description = "Screen-scraping library"
category = "main"
optional = false
python-versions = ">=3.6.0"
[package.dependencies]
soupsieve = ">1.2"
[package.extras]
html5lib = ["html5lib"]
lxml = ["lxml"]
[[package]]
name = "black"
version = "22.6.0"
description = "The uncompromising code formatter."
category = "dev"
optional = false
python-versions = ">=3.6.2"
[package.dependencies]
click = ">=8.0.0"
mypy-extensions = ">=0.4.3"
pathspec = ">=0.9.0"
platformdirs = ">=2"
tomli = {version = ">=1.1.0", markers = "python_full_version < \"3.11.0a7\""}
typing-extensions = {version = ">=3.10.0.0", markers = "python_version < \"3.10\""}
[package.extras]
colorama = ["colorama (>=0.4.3)"]
d = ["aiohttp (>=3.7.4)"]
jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"]
uvloop = ["uvloop (>=0.15.2)"]
[[package]]
name = "bleach"
version = "5.0.1"
description = "An easy safelist-based HTML-sanitizing tool."
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
six = ">=1.9.0"
webencodings = "*"
[package.extras]
css = ["tinycss2 (>=1.1.0,<1.2)"]
dev = ["build (==0.8.0)", "flake8 (==4.0.1)", "hashin (==0.17.0)", "pip-tools (==6.6.2)", "pytest (==7.1.2)", "Sphinx (==4.3.2)", "tox (==3.25.0)", "twine (==4.0.1)", "wheel (==0.37.1)", "black (==22.3.0)", "mypy (==0.961)"]
[[package]]
name = "cachetools"
version = "5.2.0"
description = "Extensible memoizing collections and decorators"
category = "dev"
optional = false
python-versions = "~=3.7"
[[package]]
name = "causalml"
version = "0.12.3"
description = "Python Package for Uplift Modeling and Causal Inference with Machine Learning Algorithms"
category = "main"
optional = true
python-versions = ">=3.6"
[package.dependencies]
Cython = ">=0.28.0"
dill = "*"
graphviz = "*"
lightgbm = "*"
matplotlib = "*"
numpy = ">=1.18.5"
packaging = "*"
pandas = ">=0.24.1"
pydotplus = "*"
pygam = "*"
pyro-ppl = "*"
scikit-learn = ">=0.22.0"
scipy = ">=1.4.1"
seaborn = "*"
shap = "*"
statsmodels = ">=0.9.0"
torch = "*"
tqdm = "*"
xgboost = "*"
[package.extras]
tf = ["tensorflow (>=2.4.0)"]
[[package]]
name = "certifi"
version = "2022.6.15"
description = "Python package for providing Mozilla's CA Bundle."
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "cffi"
version = "1.15.1"
description = "Foreign Function Interface for Python calling C code."
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
pycparser = "*"
[[package]]
name = "charset-normalizer"
version = "2.1.0"
description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet."
category = "main"
optional = false
python-versions = ">=3.6.0"
[package.extras]
unicode_backport = ["unicodedata2"]
[[package]]
name = "click"
version = "8.1.3"
description = "Composable command line interface toolkit"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
colorama = {version = "*", markers = "platform_system == \"Windows\""}
[[package]]
name = "cloudpickle"
version = "2.1.0"
description = "Extended pickling support for Python objects"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "colorama"
version = "0.4.5"
description = "Cross-platform colored terminal text."
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "commonmark"
version = "0.9.1"
description = "Python parser for the CommonMark Markdown spec"
category = "dev"
optional = false
python-versions = "*"
[package.extras]
test = ["hypothesis (==3.55.3)", "flake8 (==3.7.8)"]
[[package]]
name = "cryptography"
version = "37.0.4"
description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cffi = ">=1.12"
[package.extras]
docs = ["sphinx (>=1.6.5,!=1.8.0,!=3.1.0,!=3.1.1)", "sphinx-rtd-theme"]
docstest = ["pyenchant (>=1.6.11)", "twine (>=1.12.0)", "sphinxcontrib-spelling (>=4.0.1)"]
pep8test = ["black", "flake8", "flake8-import-order", "pep8-naming"]
sdist = ["setuptools_rust (>=0.11.4)"]
ssh = ["bcrypt (>=3.1.5)"]
test = ["pytest (>=6.2.0)", "pytest-benchmark", "pytest-cov", "pytest-subtests", "pytest-xdist", "pretend", "iso8601", "pytz", "hypothesis (>=1.11.4,!=3.79.2)"]
[[package]]
name = "cycler"
version = "0.11.0"
description = "Composable style cycles"
category = "main"
optional = true
python-versions = ">=3.6"
[[package]]
name = "cython"
version = "0.29.32"
description = "The Cython compiler for writing C extensions for the Python language."
category = "main"
optional = true
python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "debugpy"
version = "1.6.2"
description = "An implementation of the Debug Adapter Protocol for Python"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "decorator"
version = "5.1.1"
description = "Decorators for Humans"
category = "main"
optional = false
python-versions = ">=3.5"
[[package]]
name = "defusedxml"
version = "0.7.1"
description = "XML bomb protection for Python stdlib modules"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "dill"
version = "0.3.5.1"
description = "serialize all of python"
category = "main"
optional = true
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*, !=3.6.*"
[package.extras]
graph = ["objgraph (>=1.7.2)"]
[[package]]
name = "docutils"
version = "0.17.1"
description = "Docutils -- Python Documentation Utilities"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "econml"
version = "0.13.1"
description = "This package contains several methods for calculating Conditional Average Treatment Effects"
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
dowhy = "<0.8"
joblib = ">=0.13.0"
lightgbm = "*"
numpy = "*"
pandas = "*"
scikit-learn = ">0.22.0,<1.2"
scipy = ">1.4.0"
shap = ">=0.38.1,<0.41.0"
sparse = "*"
statsmodels = ">=0.10"
[package.extras]
all = ["azure-cli", "keras (<2.4)", "tensorflow (>1.10,<2.3)", "protobuf (<4)", "matplotlib"]
automl = ["azure-cli"]
plt = ["graphviz", "matplotlib"]
tf = ["protobuf (<4)", "keras (<2.4)", "tensorflow (>1.10,<2.3)"]
[[package]]
name = "entrypoints"
version = "0.4"
description = "Discover and load entry points from installed packages."
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "executing"
version = "0.9.1"
description = "Get the currently executing AST node of a frame, and other information"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "fastjsonschema"
version = "2.16.1"
description = "Fastest Python implementation of JSON schema"
category = "main"
optional = false
python-versions = "*"
[package.extras]
devel = ["colorama", "jsonschema", "json-spec", "pylint", "pytest", "pytest-benchmark", "pytest-cache", "validictory"]
[[package]]
name = "flake8"
version = "4.0.1"
description = "the modular source code checker: pep8 pyflakes and co"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
mccabe = ">=0.6.0,<0.7.0"
pycodestyle = ">=2.8.0,<2.9.0"
pyflakes = ">=2.4.0,<2.5.0"
[[package]]
name = "flaky"
version = "3.7.0"
description = "Plugin for nose or pytest that automatically reruns flaky tests."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "flatbuffers"
version = "1.12"
description = "The FlatBuffers serialization format for Python"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "fonttools"
version = "4.34.4"
description = "Tools to manipulate font files"
category = "main"
optional = true
python-versions = ">=3.7"
[package.extras]
all = ["fs (>=2.2.0,<3)", "lxml (>=4.0,<5)", "zopfli (>=0.1.4)", "lz4 (>=1.7.4.2)", "matplotlib", "sympy", "skia-pathops (>=0.5.0)", "uharfbuzz (>=0.23.0)", "brotlicffi (>=0.8.0)", "scipy", "brotli (>=1.0.1)", "munkres", "unicodedata2 (>=14.0.0)", "xattr"]
graphite = ["lz4 (>=1.7.4.2)"]
interpolatable = ["scipy", "munkres"]
lxml = ["lxml (>=4.0,<5)"]
pathops = ["skia-pathops (>=0.5.0)"]
plot = ["matplotlib"]
repacker = ["uharfbuzz (>=0.23.0)"]
symfont = ["sympy"]
type1 = ["xattr"]
ufo = ["fs (>=2.2.0,<3)"]
unicode = ["unicodedata2 (>=14.0.0)"]
woff = ["zopfli (>=0.1.4)", "brotlicffi (>=0.8.0)", "brotli (>=1.0.1)"]
[[package]]
name = "future"
version = "0.18.2"
description = "Clean single-source support for Python 3 and 2"
category = "main"
optional = true
python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "gast"
version = "0.4.0"
description = "Python AST that abstracts the underlying Python version"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "google-auth"
version = "2.10.0"
description = "Google Authentication Library"
category = "dev"
optional = false
python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*"
[package.dependencies]
cachetools = ">=2.0.0,<6.0"
pyasn1-modules = ">=0.2.1"
rsa = {version = ">=3.1.4,<5", markers = "python_version >= \"3.6\""}
six = ">=1.9.0"
[package.extras]
reauth = ["pyu2f (>=0.1.5)"]
pyopenssl = ["pyopenssl (>=20.0.0)"]
enterprise_cert = ["pyopenssl (==22.0.0)", "cryptography (==36.0.2)"]
aiohttp = ["aiohttp (>=3.6.2,<4.0.0dev)", "requests (>=2.20.0,<3.0.0dev)"]
[[package]]
name = "google-auth-oauthlib"
version = "0.4.6"
description = "Google Authentication Library"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
google-auth = ">=1.0.0"
requests-oauthlib = ">=0.7.0"
[package.extras]
tool = ["click (>=6.0.0)"]
[[package]]
name = "google-pasta"
version = "0.2.0"
description = "pasta is an AST-based Python refactoring library"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
six = "*"
[[package]]
name = "graphviz"
version = "0.20.1"
description = "Simple Python interface for Graphviz"
category = "main"
optional = true
python-versions = ">=3.7"
[package.extras]
dev = ["tox (>=3)", "flake8", "pep8-naming", "wheel", "twine"]
docs = ["sphinx (>=5)", "sphinx-autodoc-typehints", "sphinx-rtd-theme"]
test = ["pytest (>=7)", "pytest-mock (>=3)", "mock (>=4)", "pytest-cov", "coverage"]
[[package]]
name = "grpcio"
version = "1.48.0"
description = "HTTP/2-based RPC framework"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
six = ">=1.5.2"
[package.extras]
protobuf = ["grpcio-tools (>=1.48.0)"]
[[package]]
name = "h5py"
version = "3.7.0"
description = "Read and write HDF5 files from Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
numpy = ">=1.14.5"
[[package]]
name = "idna"
version = "3.3"
description = "Internationalized Domain Names in Applications (IDNA)"
category = "main"
optional = false
python-versions = ">=3.5"
[[package]]
name = "imagesize"
version = "1.4.1"
description = "Getting image size from png/jpeg/jpeg2000/gif file"
category = "main"
optional = true
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "importlib-metadata"
version = "4.12.0"
description = "Read metadata from Python packages"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
zipp = ">=0.5"
[package.extras]
docs = ["sphinx", "jaraco.packaging (>=9)", "rst.linker (>=1.9)"]
perf = ["ipython"]
testing = ["pytest (>=6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest-cov", "pytest-enabler (>=1.3)", "packaging", "pyfakefs", "flufl.flake8", "pytest-perf (>=0.9.2)", "pytest-black (>=0.3.7)", "pytest-mypy (>=0.9.1)", "importlib-resources (>=1.3)"]
[[package]]
name = "importlib-resources"
version = "5.9.0"
description = "Read resources from Python packages"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
zipp = {version = ">=3.1.0", markers = "python_version < \"3.10\""}
[package.extras]
docs = ["sphinx", "jaraco.packaging (>=9)", "rst.linker (>=1.9)", "jaraco.tidelift (>=1.4)"]
testing = ["pytest (>=6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-black (>=0.3.7)", "pytest-mypy (>=0.9.1)"]
[[package]]
name = "iniconfig"
version = "1.1.1"
description = "iniconfig: brain-dead simple config-ini parsing"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "ipykernel"
version = "6.15.1"
description = "IPython Kernel for Jupyter"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
appnope = {version = "*", markers = "platform_system == \"Darwin\""}
debugpy = ">=1.0"
ipython = ">=7.23.1"
jupyter-client = ">=6.1.12"
matplotlib-inline = ">=0.1"
nest-asyncio = "*"
packaging = "*"
psutil = "*"
pyzmq = ">=17"
tornado = ">=6.1"
traitlets = ">=5.1.0"
[package.extras]
test = ["flaky", "ipyparallel", "pre-commit", "pytest-cov", "pytest-timeout", "pytest (>=6.0)"]
[[package]]
name = "ipython"
version = "8.4.0"
description = "IPython: Productive Interactive Computing"
category = "main"
optional = false
python-versions = ">=3.8"
[package.dependencies]
appnope = {version = "*", markers = "sys_platform == \"darwin\""}
backcall = "*"
colorama = {version = "*", markers = "sys_platform == \"win32\""}
decorator = "*"
jedi = ">=0.16"
matplotlib-inline = "*"
pexpect = {version = ">4.3", markers = "sys_platform != \"win32\""}
pickleshare = "*"
prompt-toolkit = ">=2.0.0,<3.0.0 || >3.0.0,<3.0.1 || >3.0.1,<3.1.0"
pygments = ">=2.4.0"
stack-data = "*"
traitlets = ">=5"
[package.extras]
all = ["black", "Sphinx (>=1.3)", "ipykernel", "nbconvert", "nbformat", "ipywidgets", "notebook", "ipyparallel", "qtconsole", "pytest (<7.1)", "pytest-asyncio", "testpath", "curio", "matplotlib (!=3.2.0)", "numpy (>=1.19)", "pandas", "trio"]
black = ["black"]
doc = ["Sphinx (>=1.3)"]
kernel = ["ipykernel"]
nbconvert = ["nbconvert"]
nbformat = ["nbformat"]
notebook = ["ipywidgets", "notebook"]
parallel = ["ipyparallel"]
qtconsole = ["qtconsole"]
test = ["pytest (<7.1)", "pytest-asyncio", "testpath"]
test_extra = ["pytest (<7.1)", "pytest-asyncio", "testpath", "curio", "matplotlib (!=3.2.0)", "nbformat", "numpy (>=1.19)", "pandas", "trio"]
[[package]]
name = "ipython-genutils"
version = "0.2.0"
description = "Vestigial utilities from IPython"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "ipywidgets"
version = "7.7.1"
description = "IPython HTML widgets for Jupyter"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
ipykernel = ">=4.5.1"
ipython = {version = ">=4.0.0", markers = "python_version >= \"3.3\""}
ipython-genutils = ">=0.2.0,<0.3.0"
jupyterlab-widgets = {version = ">=1.0.0", markers = "python_version >= \"3.6\""}
traitlets = ">=4.3.1"
widgetsnbextension = ">=3.6.0,<3.7.0"
[package.extras]
test = ["pytest (>=3.6.0)", "pytest-cov", "mock"]
[[package]]
name = "isort"
version = "5.10.1"
description = "A Python utility / library to sort Python imports."
category = "dev"
optional = false
python-versions = ">=3.6.1,<4.0"
[package.extras]
pipfile_deprecated_finder = ["pipreqs", "requirementslib"]
requirements_deprecated_finder = ["pipreqs", "pip-api"]
colors = ["colorama (>=0.4.3,<0.5.0)"]
plugins = ["setuptools"]
[[package]]
name = "jedi"
version = "0.18.1"
description = "An autocompletion tool for Python that can be used for text editors."
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
parso = ">=0.8.0,<0.9.0"
[package.extras]
qa = ["flake8 (==3.8.3)", "mypy (==0.782)"]
testing = ["Django (<3.1)", "colorama", "docopt", "pytest (<7.0.0)"]
[[package]]
name = "jeepney"
version = "0.8.0"
description = "Low-level, pure Python DBus protocol wrapper."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
trio = ["async-generator", "trio"]
test = ["async-timeout", "trio", "testpath", "pytest-asyncio (>=0.17)", "pytest-trio", "pytest"]
[[package]]
name = "jinja2"
version = "3.1.2"
description = "A very fast and expressive template engine."
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
MarkupSafe = ">=2.0"
[package.extras]
i18n = ["Babel (>=2.7)"]
[[package]]
name = "joblib"
version = "1.1.0"
description = "Lightweight pipelining with Python functions"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "jsonschema"
version = "4.9.1"
description = "An implementation of JSON Schema validation for Python"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
attrs = ">=17.4.0"
importlib-resources = {version = ">=1.4.0", markers = "python_version < \"3.9\""}
pkgutil-resolve-name = {version = ">=1.3.10", markers = "python_version < \"3.9\""}
pyrsistent = ">=0.14.0,<0.17.0 || >0.17.0,<0.17.1 || >0.17.1,<0.17.2 || >0.17.2"
[package.extras]
format = ["fqdn", "idna", "isoduration", "jsonpointer (>1.13)", "rfc3339-validator", "rfc3987", "uri-template", "webcolors (>=1.11)"]
format-nongpl = ["fqdn", "idna", "isoduration", "jsonpointer (>1.13)", "rfc3339-validator", "rfc3986-validator (>0.1.0)", "uri-template", "webcolors (>=1.11)"]
[[package]]
name = "jupyter"
version = "1.0.0"
description = "Jupyter metapackage. Install all the Jupyter components in one go."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
ipykernel = "*"
ipywidgets = "*"
jupyter-console = "*"
nbconvert = "*"
notebook = "*"
qtconsole = "*"
[[package]]
name = "jupyter-client"
version = "7.3.4"
description = "Jupyter protocol implementation and client libraries"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
entrypoints = "*"
jupyter-core = ">=4.9.2"
nest-asyncio = ">=1.5.4"
python-dateutil = ">=2.8.2"
pyzmq = ">=23.0"
tornado = ">=6.0"
traitlets = "*"
[package.extras]
doc = ["ipykernel", "myst-parser", "sphinx-rtd-theme", "sphinx (>=1.3.6)", "sphinxcontrib-github-alt"]
test = ["codecov", "coverage", "ipykernel (>=6.5)", "ipython", "mypy", "pre-commit", "pytest", "pytest-asyncio (>=0.18)", "pytest-cov", "pytest-timeout"]
[[package]]
name = "jupyter-console"
version = "6.4.4"
description = "Jupyter terminal console"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
ipykernel = "*"
ipython = "*"
jupyter-client = ">=7.0.0"
prompt-toolkit = ">=2.0.0,<3.0.0 || >3.0.0,<3.0.1 || >3.0.1,<3.1.0"
pygments = "*"
[package.extras]
test = ["pexpect"]
[[package]]
name = "jupyter-core"
version = "4.11.1"
description = "Jupyter core package. A base package on which Jupyter projects rely."
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
pywin32 = {version = ">=1.0", markers = "sys_platform == \"win32\" and platform_python_implementation != \"PyPy\""}
traitlets = "*"
[package.extras]
test = ["ipykernel", "pre-commit", "pytest", "pytest-cov", "pytest-timeout"]
[[package]]
name = "jupyterlab-pygments"
version = "0.2.2"
description = "Pygments theme using JupyterLab CSS variables"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "jupyterlab-widgets"
version = "1.1.1"
description = "A JupyterLab extension."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "keras"
version = "2.9.0"
description = "Deep learning for humans."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "keras-preprocessing"
version = "1.1.2"
description = "Easy data preprocessing and data augmentation for deep learning models"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
numpy = ">=1.9.1"
six = ">=1.9.0"
[package.extras]
tests = ["pytest-cov", "pytest-xdist", "pytest", "keras", "tensorflow", "pillow", "pandas"]
pep8 = ["flake8"]
image = ["Pillow (>=5.2.0)", "scipy (>=0.14)"]
[[package]]
name = "keyring"
version = "23.8.2"
description = "Store and access your passwords safely."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
importlib-metadata = {version = ">=3.6", markers = "python_version < \"3.10\""}
jeepney = {version = ">=0.4.2", markers = "sys_platform == \"linux\""}
pywin32-ctypes = {version = "<0.1.0 || >0.1.0,<0.1.1 || >0.1.1", markers = "sys_platform == \"win32\""}
SecretStorage = {version = ">=3.2", markers = "sys_platform == \"linux\""}
[package.extras]
testing = ["pytest-mypy (>=0.9.1)", "pytest-black (>=0.3.7)", "pytest-enabler (>=1.3)", "pytest-cov", "flake8 (<5)", "pytest-flake8", "pytest-checkdocs (>=2.4)", "pytest (>=6)"]
docs = ["jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "jaraco.packaging (>=9)", "sphinx"]
[[package]]
name = "kiwisolver"
version = "1.4.4"
description = "A fast implementation of the Cassowary constraint solver"
category = "main"
optional = true
python-versions = ">=3.7"
[[package]]
name = "libclang"
version = "14.0.6"
description = "Clang Python Bindings, mirrored from the official LLVM repo: https://github.com/llvm/llvm-project/tree/main/clang/bindings/python, to make the installation process easier."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "lightgbm"
version = "3.3.2"
description = "LightGBM Python Package"
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
numpy = "*"
scikit-learn = "!=0.22.0"
scipy = "*"
[package.extras]
dask = ["dask[array] (>=2.0.0)", "dask[dataframe] (>=2.0.0)", "dask[distributed] (>=2.0.0)", "pandas"]
[[package]]
name = "llvmlite"
version = "0.36.0"
description = "lightweight wrapper around basic LLVM functionality"
category = "main"
optional = false
python-versions = ">=3.6,<3.10"
[[package]]
name = "lxml"
version = "4.9.1"
description = "Powerful and Pythonic XML processing library combining libxml2/libxslt with the ElementTree API."
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, != 3.4.*"
[package.extras]
source = ["Cython (>=0.29.7)"]
htmlsoup = ["beautifulsoup4"]
html5 = ["html5lib"]
cssselect = ["cssselect (>=0.7)"]
[[package]]
name = "markdown"
version = "3.4.1"
description = "Python implementation of Markdown."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
importlib-metadata = {version = ">=4.4", markers = "python_version < \"3.10\""}
[package.extras]
testing = ["coverage", "pyyaml"]
[[package]]
name = "markupsafe"
version = "2.1.1"
description = "Safely add untrusted strings to HTML/XML markup."
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "matplotlib"
version = "3.5.3"
description = "Python plotting package"
category = "main"
optional = true
python-versions = ">=3.7"
[package.dependencies]
cycler = ">=0.10"
fonttools = ">=4.22.0"
kiwisolver = ">=1.0.1"
numpy = ">=1.17"
packaging = ">=20.0"
pillow = ">=6.2.0"
pyparsing = ">=2.2.1"
python-dateutil = ">=2.7"
setuptools_scm = ">=4,<7"
[[package]]
name = "matplotlib-inline"
version = "0.1.3"
description = "Inline Matplotlib backend for Jupyter"
category = "main"
optional = false
python-versions = ">=3.5"
[package.dependencies]
traitlets = "*"
[[package]]
name = "mccabe"
version = "0.6.1"
description = "McCabe checker, plugin for flake8"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "mistune"
version = "0.8.4"
description = "The fastest markdown parser in pure Python"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "mpmath"
version = "1.2.1"
description = "Python library for arbitrary-precision floating-point arithmetic"
category = "main"
optional = false
python-versions = "*"
[package.extras]
tests = ["pytest (>=4.6)"]
develop = ["wheel", "codecov", "pytest-cov", "pycodestyle", "pytest (>=4.6)"]
[[package]]
name = "mypy"
version = "0.971"
description = "Optional static typing for Python"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
mypy-extensions = ">=0.4.3"
tomli = {version = ">=1.1.0", markers = "python_version < \"3.11\""}
typing-extensions = ">=3.10"
[package.extras]
dmypy = ["psutil (>=4.0)"]
python2 = ["typed-ast (>=1.4.0,<2)"]
reports = ["lxml"]
[[package]]
name = "mypy-extensions"
version = "0.4.3"
description = "Experimental type system extensions for programs checked with the mypy typechecker."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "nbclient"
version = "0.6.6"
description = "A client library for executing notebooks. Formerly nbconvert's ExecutePreprocessor."
category = "main"
optional = false
python-versions = ">=3.7.0"
[package.dependencies]
jupyter-client = ">=6.1.5"
nbformat = ">=5.0"
nest-asyncio = "*"
traitlets = ">=5.2.2"
[package.extras]
sphinx = ["autodoc-traits", "mock", "moto", "myst-parser", "Sphinx (>=1.7)", "sphinx-book-theme"]
test = ["black", "check-manifest", "flake8", "ipykernel", "ipython (<8.0.0)", "ipywidgets (<8.0.0)", "mypy", "pip (>=18.1)", "pre-commit", "pytest (>=4.1)", "pytest-asyncio", "pytest-cov (>=2.6.1)", "setuptools (>=60.0)", "testpath", "twine (>=1.11.0)", "xmltodict"]
[[package]]
name = "nbconvert"
version = "6.5.3"
description = "Converting Jupyter Notebooks"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
beautifulsoup4 = "*"
bleach = "*"
defusedxml = "*"
entrypoints = ">=0.2.2"
jinja2 = ">=3.0"
jupyter-core = ">=4.7"
jupyterlab-pygments = "*"
lxml = "*"
MarkupSafe = ">=2.0"
mistune = ">=0.8.1,<2"
nbclient = ">=0.5.0"
nbformat = ">=5.1"
packaging = "*"
pandocfilters = ">=1.4.1"
pygments = ">=2.4.1"
tinycss2 = "*"
traitlets = ">=5.0"
[package.extras]
webpdf = ["pyppeteer (>=1,<1.1)"]
test = ["pyppeteer (>=1,<1.1)", "pre-commit", "ipywidgets (>=7)", "ipykernel", "pytest-dependency", "pytest-cov", "pytest"]
serve = ["tornado (>=6.1)"]
docs = ["ipython", "nbsphinx (>=0.2.12)", "sphinx-rtd-theme", "sphinx (>=1.5.1)"]
all = ["ipython", "nbsphinx (>=0.2.12)", "sphinx-rtd-theme", "sphinx (>=1.5.1)", "tornado (>=6.1)", "pyppeteer (>=1,<1.1)", "pre-commit", "ipywidgets (>=7)", "ipykernel", "pytest-dependency", "pytest-cov", "pytest"]
[[package]]
name = "nbformat"
version = "5.4.0"
description = "The Jupyter Notebook format"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
fastjsonschema = "*"
jsonschema = ">=2.6"
jupyter-core = "*"
traitlets = ">=5.1"
[package.extras]
test = ["check-manifest", "testpath", "pytest", "pre-commit"]
[[package]]
name = "nbsphinx"
version = "0.8.9"
description = "Jupyter Notebook Tools for Sphinx"
category = "main"
optional = true
python-versions = ">=3.6"
[package.dependencies]
docutils = "*"
jinja2 = "*"
nbconvert = "!=5.4"
nbformat = "*"
sphinx = ">=1.8"
traitlets = ">=5"
[[package]]
name = "nest-asyncio"
version = "1.5.5"
description = "Patch asyncio to allow nested event loops"
category = "main"
optional = false
python-versions = ">=3.5"
[[package]]
name = "networkx"
version = "2.8.5"
description = "Python package for creating and manipulating graphs and networks"
category = "main"
optional = false
python-versions = ">=3.8"
[package.extras]
default = ["numpy (>=1.19)", "scipy (>=1.8)", "matplotlib (>=3.4)", "pandas (>=1.3)"]
developer = ["pre-commit (>=2.19)", "mypy (>=0.960)"]
doc = ["sphinx (>=5)", "pydata-sphinx-theme (>=0.9)", "sphinx-gallery (>=0.10)", "numpydoc (>=1.4)", "pillow (>=9.1)", "nb2plots (>=0.6)", "texext (>=0.6.6)"]
extra = ["lxml (>=4.6)", "pygraphviz (>=1.9)", "pydot (>=1.4.2)", "sympy (>=1.10)"]
test = ["pytest (>=7.1)", "pytest-cov (>=3.0)", "codecov (>=2.1)"]
[[package]]
name = "notebook"
version = "6.4.12"
description = "A web-based notebook environment for interactive computing"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
argon2-cffi = "*"
ipykernel = "*"
ipython-genutils = "*"
jinja2 = "*"
jupyter-client = ">=5.3.4"
jupyter-core = ">=4.6.1"
nbconvert = ">=5"
nbformat = "*"
nest-asyncio = ">=1.5"
prometheus-client = "*"
pyzmq = ">=17"
Send2Trash = ">=1.8.0"
terminado = ">=0.8.3"
tornado = ">=6.1"
traitlets = ">=4.2.1"
[package.extras]
docs = ["sphinx", "nbsphinx", "sphinxcontrib-github-alt", "sphinx-rtd-theme", "myst-parser"]
json-logging = ["json-logging"]
test = ["pytest", "coverage", "requests", "testpath", "nbval", "selenium", "pytest-cov", "requests-unixsocket"]
[[package]]
name = "numba"
version = "0.53.1"
description = "compiling Python code using LLVM"
category = "main"
optional = false
python-versions = ">=3.6,<3.10"
[package.dependencies]
llvmlite = ">=0.36.0rc1,<0.37"
numpy = ">=1.15"
[[package]]
name = "numpy"
version = "1.23.1"
description = "NumPy is the fundamental package for array computing with Python."
category = "main"
optional = false
python-versions = ">=3.8"
[[package]]
name = "oauthlib"
version = "3.2.0"
description = "A generic, spec-compliant, thorough implementation of the OAuth request-signing logic"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
rsa = ["cryptography (>=3.0.0)"]
signals = ["blinker (>=1.4.0)"]
signedtoken = ["cryptography (>=3.0.0)", "pyjwt (>=2.0.0,<3)"]
[[package]]
name = "opt-einsum"
version = "3.3.0"
description = "Optimizing numpys einsum function"
category = "main"
optional = false
python-versions = ">=3.5"
[package.dependencies]
numpy = ">=1.7"
[package.extras]
docs = ["sphinx (==1.2.3)", "sphinxcontrib-napoleon", "sphinx-rtd-theme", "numpydoc"]
tests = ["pytest", "pytest-cov", "pytest-pep8"]
[[package]]
name = "packaging"
version = "21.3"
description = "Core utilities for Python packages"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
pyparsing = ">=2.0.2,<3.0.5 || >3.0.5"
[[package]]
name = "pandas"
version = "1.4.3"
description = "Powerful data structures for data analysis, time series, and statistics"
category = "main"
optional = false
python-versions = ">=3.8"
[package.dependencies]
numpy = [
{version = ">=1.18.5", markers = "platform_machine != \"aarch64\" and platform_machine != \"arm64\" and python_version < \"3.10\""},
{version = ">=1.19.2", markers = "platform_machine == \"aarch64\" and python_version < \"3.10\""},
{version = ">=1.20.0", markers = "platform_machine == \"arm64\" and python_version < \"3.10\""},
]
python-dateutil = ">=2.8.1"
pytz = ">=2020.1"
[package.extras]
test = ["hypothesis (>=5.5.3)", "pytest (>=6.0)", "pytest-xdist (>=1.31)"]
[[package]]
name = "pandoc"
version = "2.2"
description = "Pandoc Documents for Python"
category = "main"
optional = true
python-versions = "*"
[package.dependencies]
plumbum = "*"
ply = "*"
[[package]]
name = "pandocfilters"
version = "1.5.0"
description = "Utilities for writing pandoc filters in python"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "parso"
version = "0.8.3"
description = "A Python Parser"
category = "main"
optional = false
python-versions = ">=3.6"
[package.extras]
qa = ["flake8 (==3.8.3)", "mypy (==0.782)"]
testing = ["docopt", "pytest (<6.0.0)"]
[[package]]
name = "pastel"
version = "0.2.1"
description = "Bring colors to your terminal."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pathspec"
version = "0.9.0"
description = "Utility library for gitignore style pattern matching of file paths."
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7"
[[package]]
name = "patsy"
version = "0.5.2"
description = "A Python package for describing statistical models and for building design matrices."
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
numpy = ">=1.4"
six = "*"
[package.extras]
test = ["scipy", "pytest-cov", "pytest"]
[[package]]
name = "pexpect"
version = "4.8.0"
description = "Pexpect allows easy control of interactive console applications."
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
ptyprocess = ">=0.5"
[[package]]
name = "pickleshare"
version = "0.7.5"
description = "Tiny 'shelve'-like database with concurrency support"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "pillow"
version = "9.2.0"
description = "Python Imaging Library (Fork)"
category = "main"
optional = true
python-versions = ">=3.7"
[package.extras]
docs = ["furo", "olefile", "sphinx (>=2.4)", "sphinx-copybutton", "sphinx-issues (>=3.0.1)", "sphinx-removed-in", "sphinxext-opengraph"]
tests = ["check-manifest", "coverage", "defusedxml", "markdown2", "olefile", "packaging", "pyroma", "pytest", "pytest-cov", "pytest-timeout"]
[[package]]
name = "pkginfo"
version = "1.8.3"
description = "Query metadatdata from sdists / bdists / installed packages."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*"
[package.extras]
testing = ["nose", "coverage"]
[[package]]
name = "pkgutil-resolve-name"
version = "1.3.10"
description = "Resolve a name to an object."
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "platformdirs"
version = "2.5.2"
description = "A small Python module for determining appropriate platform-specific dirs, e.g. a \"user data dir\"."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
docs = ["furo (>=2021.7.5b38)", "proselint (>=0.10.2)", "sphinx-autodoc-typehints (>=1.12)", "sphinx (>=4)"]
test = ["appdirs (==1.4.4)", "pytest-cov (>=2.7)", "pytest-mock (>=3.6)", "pytest (>=6)"]
[[package]]
name = "pluggy"
version = "1.0.0"
description = "plugin and hook calling mechanisms for python"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
testing = ["pytest-benchmark", "pytest"]
dev = ["tox", "pre-commit"]
[[package]]
name = "plumbum"
version = "1.7.2"
description = "Plumbum: shell combinators library"
category = "main"
optional = true
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7"
[package.dependencies]
pywin32 = {version = "*", markers = "platform_system == \"Windows\" and platform_python_implementation != \"PyPy\""}
[package.extras]
ssh = ["paramiko"]
docs = ["sphinx-rtd-theme (>=0.5.0)", "Sphinx (>=3.0.0)"]
dev = ["pytest-timeout", "pytest-mock", "pytest-cov", "pytest", "psutil", "paramiko"]
[[package]]
name = "ply"
version = "3.11"
description = "Python Lex & Yacc"
category = "main"
optional = true
python-versions = "*"
[[package]]
name = "poethepoet"
version = "0.16.0"
description = "A task runner that works well with poetry."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
pastel = ">=0.2.1,<0.3.0"
tomli = ">=1.2.2"
[package.extras]
poetry_plugin = ["poetry (>=1.0,<2.0)"]
[[package]]
name = "progressbar2"
version = "4.0.0"
description = "A Python Progressbar library to provide visual (yet text based) progress to long running operations."
category = "main"
optional = true
python-versions = ">=3.7.0"
[package.dependencies]
python-utils = ">=3.0.0"
[package.extras]
docs = ["sphinx (>=1.8.5)"]
tests = ["flake8 (>=3.7.7)", "pytest (>=4.6.9)", "pytest-cov (>=2.6.1)", "pytest-mypy", "freezegun (>=0.3.11)", "sphinx (>=1.8.5)"]
[[package]]
name = "prometheus-client"
version = "0.14.1"
description = "Python client for the Prometheus monitoring system."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
twisted = ["twisted"]
[[package]]
name = "prompt-toolkit"
version = "3.0.30"
description = "Library for building powerful interactive command lines in Python"
category = "main"
optional = false
python-versions = ">=3.6.2"
[package.dependencies]
wcwidth = "*"
[[package]]
name = "protobuf"
version = "3.19.4"
description = "Protocol Buffers"
category = "dev"
optional = false
python-versions = ">=3.5"
[[package]]
name = "psutil"
version = "5.9.1"
description = "Cross-platform lib for process and system monitoring in Python."
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.extras]
test = ["ipaddress", "mock", "enum34", "pywin32", "wmi"]
[[package]]
name = "ptyprocess"
version = "0.7.0"
description = "Run a subprocess in a pseudo terminal"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "pure-eval"
version = "0.2.2"
description = "Safely evaluate AST nodes without side effects"
category = "main"
optional = false
python-versions = "*"
[package.extras]
tests = ["pytest"]
[[package]]
name = "py"
version = "1.11.0"
description = "library with cross-python path, ini-parsing, io, code, log facilities"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "pyasn1"
version = "0.4.8"
description = "ASN.1 types and codecs"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pyasn1-modules"
version = "0.2.8"
description = "A collection of ASN.1-based protocols modules."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
pyasn1 = ">=0.4.6,<0.5.0"
[[package]]
name = "pycodestyle"
version = "2.8.0"
description = "Python style guide checker"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "pycparser"
version = "2.21"
description = "C parser in Python"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pydata-sphinx-theme"
version = "0.9.0"
description = "Bootstrap-based Sphinx theme from the PyData community"
category = "main"
optional = true
python-versions = ">=3.7"
[package.dependencies]
beautifulsoup4 = "*"
docutils = "!=0.17.0"
packaging = "*"
sphinx = ">=4.0.2"
[package.extras]
dev = ["pydata-sphinx-theme", "nox", "pre-commit", "pyyaml"]
coverage = ["pydata-sphinx-theme", "codecov", "pytest-cov"]
test = ["pydata-sphinx-theme", "pytest"]
doc = ["sphinx-design", "xarray", "numpy", "plotly", "jupyter-sphinx", "sphinx-sitemap", "sphinxext-rediraffe", "pytest-regressions", "pytest", "pandas", "myst-parser", "numpydoc"]
[[package]]
name = "pydot"
version = "1.4.2"
description = "Python interface to Graphviz's Dot"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.dependencies]
pyparsing = ">=2.1.4"
[[package]]
name = "pydotplus"
version = "2.0.2"
description = "Python interface to Graphviz's Dot language"
category = "main"
optional = true
python-versions = "*"
[package.dependencies]
pyparsing = ">=2.0.1"
[[package]]
name = "pyflakes"
version = "2.4.0"
description = "passive checker of Python programs"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pygam"
version = "0.8.0"
description = "GAM toolkit"
category = "main"
optional = true
python-versions = "*"
[package.dependencies]
future = "*"
numpy = "*"
progressbar2 = "*"
scipy = "*"
[[package]]
name = "pygments"
version = "2.12.0"
description = "Pygments is a syntax highlighting package written in Python."
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "pygraphviz"
version = "1.9"
description = "Python interface to Graphviz"
category = "main"
optional = false
python-versions = ">=3.8"
[[package]]
name = "pyparsing"
version = "3.0.9"
description = "pyparsing module - Classes and methods to define and execute parsing grammars"
category = "main"
optional = false
python-versions = ">=3.6.8"
[package.extras]
diagrams = ["railroad-diagrams", "jinja2"]
[[package]]
name = "pyro-api"
version = "0.1.2"
description = "Generic API for dispatch to Pyro backends."
category = "main"
optional = true
python-versions = "*"
[package.extras]
test = ["pytest (>=5.0)", "flake8"]
dev = ["ipython", "sphinx-rtd-theme", "sphinx (>=2.0)"]
[[package]]
name = "pyro-ppl"
version = "1.8.1"
description = "A Python library for probabilistic modeling and inference"
category = "main"
optional = true
python-versions = ">=3.7"
[package.dependencies]
numpy = ">=1.7"
opt-einsum = ">=2.3.2"
pyro-api = ">=0.1.1"
torch = ">=1.11.0"
tqdm = ">=4.36"
[package.extras]
dev = ["jupyter (>=1.0.0)", "graphviz (>=0.8)", "matplotlib (>=1.3)", "torchvision (>=0.12.0)", "visdom (>=0.1.4)", "pandas", "pillow (==8.2.0)", "scikit-learn", "seaborn", "wget", "lap", "black (>=21.4b0)", "flake8", "isort (>=5.0)", "mypy (>=0.812)", "nbformat", "nbsphinx (>=0.3.2)", "nbstripout", "nbval", "ninja", "pypandoc", "pytest (>=5.0)", "pytest-xdist", "scipy (>=1.1)", "sphinx", "sphinx-rtd-theme", "yapf"]
extras = ["jupyter (>=1.0.0)", "graphviz (>=0.8)", "matplotlib (>=1.3)", "torchvision (>=0.12.0)", "visdom (>=0.1.4)", "pandas", "pillow (==8.2.0)", "scikit-learn", "seaborn", "wget", "lap"]
funsor = ["funsor[torch] (==0.4.3)"]
horovod = ["horovod[pytorch] (>=0.19)"]
profile = ["prettytable", "pytest-benchmark", "snakeviz"]
test = ["jupyter (>=1.0.0)", "graphviz (>=0.8)", "matplotlib (>=1.3)", "torchvision (>=0.12.0)", "visdom (>=0.1.4)", "pandas", "pillow (==8.2.0)", "scikit-learn", "seaborn", "wget", "lap", "black (>=21.4b0)", "flake8", "nbval", "pytest (>=5.0)", "pytest-cov", "scipy (>=1.1)"]
[[package]]
name = "pyrsistent"
version = "0.18.1"
description = "Persistent/Functional/Immutable data structures"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "pytest"
version = "7.1.2"
description = "pytest: simple powerful testing with Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
atomicwrites = {version = ">=1.0", markers = "sys_platform == \"win32\""}
attrs = ">=19.2.0"
colorama = {version = "*", markers = "sys_platform == \"win32\""}
iniconfig = "*"
packaging = "*"
pluggy = ">=0.12,<2.0"
py = ">=1.8.2"
tomli = ">=1.0.0"
[package.extras]
testing = ["argcomplete", "hypothesis (>=3.56)", "mock", "nose", "pygments (>=2.7.2)", "requests", "xmlschema"]
[[package]]
name = "python-dateutil"
version = "2.8.2"
description = "Extensions to the standard Python datetime module"
category = "main"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7"
[package.dependencies]
six = ">=1.5"
[[package]]
name = "python-utils"
version = "3.3.3"
description = "Python Utils is a module with some convenient utilities not included with the standard Python install"
category = "main"
optional = true
python-versions = ">3.6.0"
[package.extras]
docs = ["mock", "sphinx", "python-utils"]
loguru = ["loguru"]
tests = ["flake8", "pytest", "pytest-cov", "pytest-mypy", "pytest-asyncio", "sphinx", "types-setuptools", "loguru"]
[[package]]
name = "pytz"
version = "2022.2"
description = "World timezone definitions, modern and historical"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "pytz-deprecation-shim"
version = "0.1.0.post0"
description = "Shims to make deprecation of pytz easier"
category = "main"
optional = true
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.7"
[package.dependencies]
"backports.zoneinfo" = {version = "*", markers = "python_version >= \"3.6\" and python_version < \"3.9\""}
tzdata = {version = "*", markers = "python_version >= \"3.6\""}
[[package]]
name = "pywin32"
version = "304"
description = "Python for Window Extensions"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "pywin32-ctypes"
version = "0.2.0"
description = ""
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pywinpty"
version = "2.0.7"
description = "Pseudo terminal support for Windows from Python."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "pyzmq"
version = "23.2.1"
description = "Python bindings for 0MQ"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cffi = {version = "*", markers = "implementation_name == \"pypy\""}
py = {version = "*", markers = "implementation_name == \"pypy\""}
[[package]]
name = "qtconsole"
version = "5.3.1"
description = "Jupyter Qt console"
category = "dev"
optional = false
python-versions = ">= 3.7"
[package.dependencies]
ipykernel = ">=4.1"
ipython-genutils = "*"
jupyter-client = ">=4.1"
jupyter-core = "*"
pygments = "*"
pyzmq = ">=17.1"
qtpy = ">=2.0.1"
traitlets = "<5.2.1 || >5.2.1,<5.2.2 || >5.2.2"
[package.extras]
doc = ["Sphinx (>=1.3)"]
test = ["flaky", "pytest", "pytest-qt"]
[[package]]
name = "qtpy"
version = "2.2.0"
description = "Provides an abstraction layer on top of the various Qt bindings (PyQt5/6 and PySide2/6)."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
packaging = "*"
[package.extras]
test = ["pytest-qt", "pytest-cov (>=3.0.0)", "pytest (>=6,!=7.0.0,!=7.0.1)"]
[[package]]
name = "readme-renderer"
version = "36.0"
description = "readme_renderer is a library for rendering \"readme\" descriptions for Warehouse"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
bleach = ">=2.1.0"
docutils = ">=0.13.1"
Pygments = ">=2.5.1"
[package.extras]
md = ["cmarkgfm (>=0.8.0)"]
[[package]]
name = "requests"
version = "2.28.1"
description = "Python HTTP for Humans."
category = "main"
optional = false
python-versions = ">=3.7, <4"
[package.dependencies]
certifi = ">=2017.4.17"
charset-normalizer = ">=2,<3"
idna = ">=2.5,<4"
urllib3 = ">=1.21.1,<1.27"
[package.extras]
socks = ["PySocks (>=1.5.6,!=1.5.7)"]
use_chardet_on_py3 = ["chardet (>=3.0.2,<6)"]
[[package]]
name = "requests-oauthlib"
version = "1.3.1"
description = "OAuthlib authentication support for Requests."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.dependencies]
oauthlib = ">=3.0.0"
requests = ">=2.0.0"
[package.extras]
rsa = ["oauthlib[signedtoken] (>=3.0.0)"]
[[package]]
name = "requests-toolbelt"
version = "0.9.1"
description = "A utility belt for advanced users of python-requests"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
requests = ">=2.0.1,<3.0.0"
[[package]]
name = "rfc3986"
version = "2.0.0"
description = "Validating URI References per RFC 3986"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
idna2008 = ["idna"]
[[package]]
name = "rich"
version = "12.5.1"
description = "Render rich text, tables, progress bars, syntax highlighting, markdown and more to the terminal"
category = "dev"
optional = false
python-versions = ">=3.6.3,<4.0.0"
[package.dependencies]
commonmark = ">=0.9.0,<0.10.0"
pygments = ">=2.6.0,<3.0.0"
typing-extensions = {version = ">=4.0.0,<5.0", markers = "python_version < \"3.9\""}
[package.extras]
jupyter = ["ipywidgets (>=7.5.1,<8.0.0)"]
[[package]]
name = "rpy2"
version = "3.5.3"
description = "Python interface to the R language (embedded R)"
category = "main"
optional = true
python-versions = "*"
[package.dependencies]
cffi = ">=1.10.0"
jinja2 = "*"
packaging = {version = "*", markers = "platform_system == \"Windows\""}
pytz = "*"
tzlocal = "*"
[package.extras]
all = ["numpy", "pytest", "setuptools", "pandas"]
numpy = ["pandas"]
pandas = ["numpy", "pandas"]
setup = ["setuptools"]
test = ["pytest"]
[[package]]
name = "rsa"
version = "4.9"
description = "Pure-Python RSA implementation"
category = "dev"
optional = false
python-versions = ">=3.6,<4"
[package.dependencies]
pyasn1 = ">=0.1.3"
[[package]]
name = "scikit-learn"
version = "1.0.2"
description = "A set of python modules for machine learning and data mining"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
joblib = ">=0.11"
numpy = ">=1.14.6"
scipy = ">=1.1.0"
threadpoolctl = ">=2.0.0"
[package.extras]
tests = ["pyamg (>=4.0.0)", "mypy (>=0.770)", "black (>=21.6b0)", "flake8 (>=3.8.2)", "pytest-cov (>=2.9.0)", "pytest (>=5.0.1)", "pandas (>=0.25.0)", "scikit-image (>=0.14.5)", "matplotlib (>=2.2.3)"]
examples = ["seaborn (>=0.9.0)", "pandas (>=0.25.0)", "scikit-image (>=0.14.5)", "matplotlib (>=2.2.3)"]
docs = ["sphinxext-opengraph (>=0.4.2)", "sphinx-prompt (>=1.3.0)", "Pillow (>=7.1.2)", "numpydoc (>=1.0.0)", "sphinx-gallery (>=0.7.0)", "sphinx (>=4.0.1)", "memory-profiler (>=0.57.0)", "seaborn (>=0.9.0)", "pandas (>=0.25.0)", "scikit-image (>=0.14.5)", "matplotlib (>=2.2.3)"]
benchmark = ["memory-profiler (>=0.57.0)", "pandas (>=0.25.0)", "matplotlib (>=2.2.3)"]
[[package]]
name = "scipy"
version = "1.9.0"
description = "SciPy: Scientific Library for Python"
category = "main"
optional = false
python-versions = ">=3.8,<3.12"
[package.dependencies]
numpy = ">=1.18.5,<1.25.0"
[[package]]
name = "seaborn"
version = "0.11.2"
description = "seaborn: statistical data visualization"
category = "main"
optional = true
python-versions = ">=3.6"
[package.dependencies]
matplotlib = ">=2.2"
numpy = ">=1.15"
pandas = ">=0.23"
scipy = ">=1.0"
[[package]]
name = "secretstorage"
version = "3.3.2"
description = "Python bindings to FreeDesktop.org Secret Service API"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cryptography = ">=2.0"
jeepney = ">=0.6"
[[package]]
name = "send2trash"
version = "1.8.0"
description = "Send file to trash natively under Mac OS X, Windows and Linux."
category = "dev"
optional = false
python-versions = "*"
[package.extras]
win32 = ["pywin32"]
objc = ["pyobjc-framework-cocoa"]
nativelib = ["pywin32", "pyobjc-framework-cocoa"]
[[package]]
name = "setuptools-scm"
version = "6.4.2"
description = "the blessed package to manage your versions by scm tags"
category = "main"
optional = true
python-versions = ">=3.6"
[package.dependencies]
packaging = ">=20.0"
tomli = ">=1.0.0"
[package.extras]
toml = ["setuptools (>=42)"]
test = ["virtualenv (>20)", "pytest (>=6.2)"]
[[package]]
name = "shap"
version = "0.40.0"
description = "A unified approach to explain the output of any machine learning model."
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
cloudpickle = "*"
numba = "*"
numpy = "*"
packaging = ">20.9"
pandas = "*"
scikit-learn = "*"
scipy = "*"
slicer = "0.0.7"
tqdm = ">4.25.0"
[package.extras]
all = ["catboost", "nbsphinx", "matplotlib", "sphinx", "pytest-cov", "pyod", "torch", "pytest", "sentencepiece", "xgboost", "opencv-python", "transformers", "lime", "pyspark", "lightgbm", "pytest-mpl", "numpydoc", "ipython", "sphinx-rtd-theme"]
docs = ["matplotlib", "ipython", "numpydoc", "sphinx-rtd-theme", "sphinx", "nbsphinx"]
others = ["lime"]
plots = ["matplotlib", "ipython"]
test = ["pytest", "pytest-mpl", "pytest-cov", "xgboost", "lightgbm", "catboost", "pyspark", "pyod", "transformers", "torch", "sentencepiece", "opencv-python"]
[[package]]
name = "six"
version = "1.16.0"
description = "Python 2 and 3 compatibility utilities"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "slicer"
version = "0.0.7"
description = "A small package for big slicing."
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "snowballstemmer"
version = "2.2.0"
description = "This package provides 29 stemmers for 28 languages generated from Snowball algorithms."
category = "main"
optional = true
python-versions = "*"
[[package]]
name = "soupsieve"
version = "2.3.2.post1"
description = "A modern CSS selector implementation for Beautiful Soup."
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "sparse"
version = "0.13.0"
description = "Sparse n-dimensional arrays"
category = "main"
optional = false
python-versions = ">=3.6, <4"
[package.dependencies]
numba = ">=0.49"
numpy = ">=1.17"
scipy = ">=0.19"
[package.extras]
all = ["dask", "pytest (>=3.5)", "pytest-black", "pytest-cov", "tox", "sphinx", "sphinx-rtd-theme"]
docs = ["sphinx", "sphinx-rtd-theme"]
tests = ["dask", "pytest (>=3.5)", "pytest-black", "pytest-cov"]
tox = ["dask", "pytest (>=3.5)", "pytest-black", "pytest-cov", "tox"]
[[package]]
name = "sphinx"
version = "5.1.1"
description = "Python documentation generator"
category = "main"
optional = true
python-versions = ">=3.6"
[package.dependencies]
alabaster = ">=0.7,<0.8"
babel = ">=1.3"
colorama = {version = ">=0.3.5", markers = "sys_platform == \"win32\""}
docutils = ">=0.14,<0.20"
imagesize = "*"
importlib-metadata = {version = ">=4.4", markers = "python_version < \"3.10\""}
Jinja2 = ">=2.3"
packaging = "*"
Pygments = ">=2.0"
requests = ">=2.5.0"
snowballstemmer = ">=1.1"
sphinxcontrib-applehelp = "*"
sphinxcontrib-devhelp = "*"
sphinxcontrib-htmlhelp = ">=2.0.0"
sphinxcontrib-jsmath = "*"
sphinxcontrib-qthelp = "*"
sphinxcontrib-serializinghtml = ">=1.1.5"
[package.extras]
docs = ["sphinxcontrib-websupport"]
lint = ["flake8 (>=3.5.0)", "flake8-comprehensions", "flake8-bugbear", "isort", "mypy (>=0.971)", "sphinx-lint", "docutils-stubs", "types-typed-ast", "types-requests"]
test = ["pytest (>=4.6)", "html5lib", "cython", "typed-ast"]
[[package]]
name = "sphinx-multiversion"
version = "0.2.4"
description = ""
category = "main"
optional = true
python-versions = "*"
develop = false
[package.dependencies]
sphinx = ">=2.1"
[package.source]
type = "git"
url = "https://github.com/petergtz/sphinx-multiversion.git"
reference = "override-version-with-refname"
resolved_reference = "cd3a67e7998c5205e2456e7cb1026bd3b9f36253"
[[package]]
name = "sphinx-rtd-theme"
version = "1.0.0"
description = "Read the Docs theme for Sphinx"
category = "main"
optional = true
python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*"
[package.dependencies]
docutils = "<0.18"
sphinx = ">=1.6"
[package.extras]
dev = ["bump2version", "sphinxcontrib-httpdomain", "transifex-client"]
[[package]]
name = "sphinxcontrib-applehelp"
version = "1.0.2"
description = "sphinxcontrib-applehelp is a sphinx extension which outputs Apple help books"
category = "main"
optional = true
python-versions = ">=3.5"
[package.extras]
test = ["pytest"]
lint = ["docutils-stubs", "mypy", "flake8"]
[[package]]
name = "sphinxcontrib-devhelp"
version = "1.0.2"
description = "sphinxcontrib-devhelp is a sphinx extension which outputs Devhelp document."
category = "main"
optional = true
python-versions = ">=3.5"
[package.extras]
test = ["pytest"]
lint = ["docutils-stubs", "mypy", "flake8"]
[[package]]
name = "sphinxcontrib-googleanalytics"
version = "0.2"
description = ""
category = "main"
optional = true
python-versions = "*"
develop = false
[package.dependencies]
Sphinx = ">=0.6"
[package.source]
type = "git"
url = "https://github.com/petergtz/googleanalytics.git"
reference = "master"
resolved_reference = "f32bcb8c0fb74bf284652b9a5c154337e20058b9"
[[package]]
name = "sphinxcontrib-htmlhelp"
version = "2.0.0"
description = "sphinxcontrib-htmlhelp is a sphinx extension which renders HTML help files"
category = "main"
optional = true
python-versions = ">=3.6"
[package.extras]
test = ["html5lib", "pytest"]
lint = ["docutils-stubs", "mypy", "flake8"]
[[package]]
name = "sphinxcontrib-jsmath"
version = "1.0.1"
description = "A sphinx extension which renders display math in HTML via JavaScript"
category = "main"
optional = true
python-versions = ">=3.5"
[package.extras]
test = ["mypy", "flake8", "pytest"]
[[package]]
name = "sphinxcontrib-qthelp"
version = "1.0.3"
description = "sphinxcontrib-qthelp is a sphinx extension which outputs QtHelp document."
category = "main"
optional = true
python-versions = ">=3.5"
[package.extras]
test = ["pytest"]
lint = ["docutils-stubs", "mypy", "flake8"]
[[package]]
name = "sphinxcontrib-serializinghtml"
version = "1.1.5"
description = "sphinxcontrib-serializinghtml is a sphinx extension which outputs \"serialized\" HTML files (json and pickle)."
category = "main"
optional = true
python-versions = ">=3.5"
[package.extras]
lint = ["flake8", "mypy", "docutils-stubs"]
test = ["pytest"]
[[package]]
name = "stack-data"
version = "0.3.0"
description = "Extract data from python stack frames and tracebacks for informative displays"
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
asttokens = "*"
executing = "*"
pure-eval = "*"
[package.extras]
tests = ["cython", "littleutils", "pygments", "typeguard", "pytest"]
[[package]]
name = "statsmodels"
version = "0.13.2"
description = "Statistical computations and models for Python"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
numpy = ">=1.17"
packaging = ">=21.3"
pandas = ">=0.25"
patsy = ">=0.5.2"
scipy = ">=1.3"
[package.extras]
build = ["cython (>=0.29.26)"]
develop = ["cython (>=0.29.26)"]
docs = ["sphinx", "nbconvert", "jupyter-client", "ipykernel", "matplotlib", "nbformat", "numpydoc", "pandas-datareader"]
[[package]]
name = "sympy"
version = "1.10.1"
description = "Computer algebra system (CAS) in Python"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
mpmath = ">=0.19"
[[package]]
name = "tensorboard"
version = "2.9.1"
description = "TensorBoard lets you watch Tensors Flow"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
absl-py = ">=0.4"
google-auth = ">=1.6.3,<3"
google-auth-oauthlib = ">=0.4.1,<0.5"
grpcio = ">=1.24.3"
markdown = ">=2.6.8"
numpy = ">=1.12.0"
protobuf = ">=3.9.2,<3.20"
requests = ">=2.21.0,<3"
tensorboard-data-server = ">=0.6.0,<0.7.0"
tensorboard-plugin-wit = ">=1.6.0"
werkzeug = ">=1.0.1"
[[package]]
name = "tensorboard-data-server"
version = "0.6.1"
description = "Fast data loading for TensorBoard"
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "tensorboard-plugin-wit"
version = "1.8.1"
description = "What-If Tool TensorBoard plugin."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "tensorflow"
version = "2.9.1"
description = "TensorFlow is an open source machine learning framework for everyone."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
absl-py = ">=1.0.0"
astunparse = ">=1.6.0"
flatbuffers = ">=1.12,<2"
gast = ">=0.2.1,<=0.4.0"
google-pasta = ">=0.1.1"
grpcio = ">=1.24.3,<2.0"
h5py = ">=2.9.0"
keras = ">=2.9.0rc0,<2.10.0"
keras-preprocessing = ">=1.1.1"
libclang = ">=13.0.0"
numpy = ">=1.20"
opt-einsum = ">=2.3.2"
packaging = "*"
protobuf = ">=3.9.2,<3.20"
six = ">=1.12.0"
tensorboard = ">=2.9,<2.10"
tensorflow-estimator = ">=2.9.0rc0,<2.10.0"
tensorflow-io-gcs-filesystem = ">=0.23.1"
termcolor = ">=1.1.0"
typing-extensions = ">=3.6.6"
wrapt = ">=1.11.0"
[[package]]
name = "tensorflow-estimator"
version = "2.9.0"
description = "TensorFlow Estimator."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "tensorflow-io-gcs-filesystem"
version = "0.26.0"
description = "TensorFlow IO"
category = "dev"
optional = false
python-versions = ">=3.7, <3.11"
[package.extras]
tensorflow = ["tensorflow (>=2.9.0,<2.10.0)"]
tensorflow-aarch64 = ["tensorflow-aarch64 (>=2.9.0,<2.10.0)"]
tensorflow-cpu = ["tensorflow-cpu (>=2.9.0,<2.10.0)"]
tensorflow-gpu = ["tensorflow-gpu (>=2.9.0,<2.10.0)"]
tensorflow-rocm = ["tensorflow-rocm (>=2.9.0,<2.10.0)"]
[[package]]
name = "termcolor"
version = "1.1.0"
description = "ANSII Color formatting for output in terminal."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "terminado"
version = "0.15.0"
description = "Tornado websocket backend for the Xterm.js Javascript terminal emulator library."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
ptyprocess = {version = "*", markers = "os_name != \"nt\""}
pywinpty = {version = ">=1.1.0", markers = "os_name == \"nt\""}
tornado = ">=6.1.0"
[package.extras]
test = ["pre-commit", "pytest-timeout", "pytest (>=6.0)"]
[[package]]
name = "threadpoolctl"
version = "3.1.0"
description = "threadpoolctl"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "tinycss2"
version = "1.1.1"
description = "A tiny CSS parser"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
webencodings = ">=0.4"
[package.extras]
test = ["coverage", "pytest-isort", "pytest-flake8", "pytest-cov", "pytest"]
doc = ["sphinx-rtd-theme", "sphinx"]
[[package]]
name = "tomli"
version = "2.0.1"
description = "A lil' TOML parser"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "torch"
version = "1.12.1"
description = "Tensors and Dynamic neural networks in Python with strong GPU acceleration"
category = "main"
optional = true
python-versions = ">=3.7.0"
[package.dependencies]
typing-extensions = "*"
[[package]]
name = "tornado"
version = "6.2"
description = "Tornado is a Python web framework and asynchronous networking library, originally developed at FriendFeed."
category = "main"
optional = false
python-versions = ">= 3.7"
[[package]]
name = "tqdm"
version = "4.64.0"
description = "Fast, Extensible Progress Meter"
category = "main"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,>=2.7"
[package.dependencies]
colorama = {version = "*", markers = "platform_system == \"Windows\""}
[package.extras]
dev = ["py-make (>=0.1.0)", "twine", "wheel"]
notebook = ["ipywidgets (>=6)"]
slack = ["slack-sdk"]
telegram = ["requests"]
[[package]]
name = "traitlets"
version = "5.3.0"
description = ""
category = "main"
optional = false
python-versions = ">=3.7"
[package.extras]
test = ["pre-commit", "pytest"]
[[package]]
name = "twine"
version = "4.0.1"
description = "Collection of utilities for publishing packages on PyPI"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
importlib-metadata = ">=3.6"
keyring = ">=15.1"
pkginfo = ">=1.8.1"
readme-renderer = ">=35.0"
requests = ">=2.20"
requests-toolbelt = ">=0.8.0,<0.9.0 || >0.9.0"
rfc3986 = ">=1.4.0"
rich = ">=12.0.0"
urllib3 = ">=1.26.0"
[[package]]
name = "typing-extensions"
version = "4.3.0"
description = "Backported and Experimental Type Hints for Python 3.7+"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "tzdata"
version = "2022.2"
description = "Provider of IANA time zone data"
category = "main"
optional = true
python-versions = ">=2"
[[package]]
name = "tzlocal"
version = "4.2"
description = "tzinfo object for the local timezone"
category = "main"
optional = true
python-versions = ">=3.6"
[package.dependencies]
"backports.zoneinfo" = {version = "*", markers = "python_version < \"3.9\""}
pytz-deprecation-shim = "*"
tzdata = {version = "*", markers = "platform_system == \"Windows\""}
[package.extras]
devenv = ["black", "pyroma", "pytest-cov", "zest.releaser"]
test = ["pytest-mock (>=3.3)", "pytest (>=4.3)"]
[[package]]
name = "urllib3"
version = "1.26.11"
description = "HTTP library with thread-safe connection pooling, file post, and more."
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*, <4"
[package.extras]
brotli = ["brotlicffi (>=0.8.0)", "brotli (>=1.0.9)", "brotlipy (>=0.6.0)"]
secure = ["pyOpenSSL (>=0.14)", "cryptography (>=1.3.4)", "idna (>=2.0.0)", "certifi", "ipaddress"]
socks = ["PySocks (>=1.5.6,!=1.5.7,<2.0)"]
[[package]]
name = "wcwidth"
version = "0.2.5"
description = "Measures the displayed width of unicode strings in a terminal"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "webencodings"
version = "0.5.1"
description = "Character encoding aliases for legacy web content"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "werkzeug"
version = "2.2.2"
description = "The comprehensive WSGI web application library."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
MarkupSafe = ">=2.1.1"
[package.extras]
watchdog = ["watchdog"]
[[package]]
name = "widgetsnbextension"
version = "3.6.1"
description = "IPython HTML widgets for Jupyter"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
notebook = ">=4.4.1"
[[package]]
name = "wrapt"
version = "1.14.1"
description = "Module for decorators, wrappers and monkey patching."
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7"
[[package]]
name = "xgboost"
version = "1.6.1"
description = "XGBoost Python Package"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
numpy = "*"
scipy = "*"
[package.extras]
dask = ["dask", "pandas", "distributed"]
datatable = ["datatable"]
pandas = ["pandas"]
plotting = ["graphviz", "matplotlib"]
scikit-learn = ["scikit-learn"]
[[package]]
name = "zipp"
version = "3.8.1"
description = "Backport of pathlib-compatible object wrapper for zip files"
category = "main"
optional = false
python-versions = ">=3.7"
[package.extras]
docs = ["sphinx", "jaraco.packaging (>=9)", "rst.linker (>=1.9)", "jaraco.tidelift (>=1.4)"]
testing = ["pytest (>=6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest-cov", "pytest-enabler (>=1.3)", "jaraco.itertools", "func-timeout", "pytest-black (>=0.3.7)", "pytest-mypy (>=0.9.1)"]
[extras]
causalml = ["causalml", "llvmlite"]
docs = ["rpy2", "sphinx-multiversion", "sphinxcontrib-googleanalytics", "nbsphinx", "sphinx-rtd-theme", "pydata-sphinx-theme", "sympy", "ipykernel"]
plotting = ["matplotlib"]
[metadata]
lock-version = "1.1"
python-versions = ">=3.8,<3.10"
content-hash = "ec60e963ce9001e7d159e5e7306e48dd1192521dba9752e7ae1ff8685f204b5b"
[metadata.files]
absl-py = [
{file = "absl-py-1.2.0.tar.gz", hash = "sha256:f568809938c49abbda89826223c992b630afd23c638160ad7840cfe347710d97"},
{file = "absl_py-1.2.0-py3-none-any.whl", hash = "sha256:5d15f85b8cc859c6245bc9886ba664460ed96a6fee895416caa37d669ee74a9a"},
]
alabaster = [
{file = "alabaster-0.7.12-py2.py3-none-any.whl", hash = "sha256:446438bdcca0e05bd45ea2de1668c1d9b032e1a9154c2c259092d77031ddd359"},
{file = "alabaster-0.7.12.tar.gz", hash = "sha256:a661d72d58e6ea8a57f7a86e37d86716863ee5e92788398526d58b26a4e4dc02"},
]
appnope = [
{file = "appnope-0.1.3-py2.py3-none-any.whl", hash = "sha256:265a455292d0bd8a72453494fa24df5a11eb18373a60c7c0430889f22548605e"},
{file = "appnope-0.1.3.tar.gz", hash = "sha256:02bd91c4de869fbb1e1c50aafc4098827a7a54ab2f39d9dcba6c9547ed920e24"},
]
argon2-cffi = [
{file = "argon2-cffi-21.3.0.tar.gz", hash = "sha256:d384164d944190a7dd7ef22c6aa3ff197da12962bd04b17f64d4e93d934dba5b"},
{file = "argon2_cffi-21.3.0-py3-none-any.whl", hash = "sha256:8c976986f2c5c0e5000919e6de187906cfd81fb1c72bf9d88c01177e77da7f80"},
]
argon2-cffi-bindings = [
{file = "argon2-cffi-bindings-21.2.0.tar.gz", hash = "sha256:bb89ceffa6c791807d1305ceb77dbfacc5aa499891d2c55661c6459651fc39e3"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-macosx_10_9_x86_64.whl", hash = "sha256:ccb949252cb2ab3a08c02024acb77cfb179492d5701c7cbdbfd776124d4d2367"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9524464572e12979364b7d600abf96181d3541da11e23ddf565a32e70bd4dc0d"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b746dba803a79238e925d9046a63aa26bf86ab2a2fe74ce6b009a1c3f5c8f2ae"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:58ed19212051f49a523abb1dbe954337dc82d947fb6e5a0da60f7c8471a8476c"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:bd46088725ef7f58b5a1ef7ca06647ebaf0eb4baff7d1d0d177c6cc8744abd86"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_i686.whl", hash = "sha256:8cd69c07dd875537a824deec19f978e0f2078fdda07fd5c42ac29668dda5f40f"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:f1152ac548bd5b8bcecfb0b0371f082037e47128653df2e8ba6e914d384f3c3e"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-win32.whl", hash = "sha256:603ca0aba86b1349b147cab91ae970c63118a0f30444d4bc80355937c950c082"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-win_amd64.whl", hash = "sha256:b2ef1c30440dbbcba7a5dc3e319408b59676e2e039e2ae11a8775ecf482b192f"},
{file = "argon2_cffi_bindings-21.2.0-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:e415e3f62c8d124ee16018e491a009937f8cf7ebf5eb430ffc5de21b900dad93"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:3e385d1c39c520c08b53d63300c3ecc28622f076f4c2b0e6d7e796e9f6502194"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2c3e3cc67fdb7d82c4718f19b4e7a87123caf8a93fde7e23cf66ac0337d3cb3f"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6a22ad9800121b71099d0fb0a65323810a15f2e292f2ba450810a7316e128ee5"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f9f8b450ed0547e3d473fdc8612083fd08dd2120d6ac8f73828df9b7d45bb351"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:93f9bf70084f97245ba10ee36575f0c3f1e7d7724d67d8e5b08e61787c320ed7"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:3b9ef65804859d335dc6b31582cad2c5166f0c3e7975f324d9ffaa34ee7e6583"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d4966ef5848d820776f5f562a7d45fdd70c2f330c961d0d745b784034bd9f48d"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:20ef543a89dee4db46a1a6e206cd015360e5a75822f76df533845c3cbaf72670"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ed2937d286e2ad0cc79a7087d3c272832865f779430e0cc2b4f3718d3159b0cb"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:5e00316dabdaea0b2dd82d141cc66889ced0cdcbfa599e8b471cf22c620c329a"},
]
asttokens = [
{file = "asttokens-2.0.7-py2.py3-none-any.whl", hash = "sha256:f5589ef8518f73dd82c15e1c19f795d8a62c133485e557c04443d4a1a730cf9f"},
{file = "asttokens-2.0.7.tar.gz", hash = "sha256:8444353e4e2a99661c8dfb85ec9c02eedded08f0006234bff7db44a06840acc2"},
]
astunparse = [
{file = "astunparse-1.6.3-py2.py3-none-any.whl", hash = "sha256:c2652417f2c8b5bb325c885ae329bdf3f86424075c4fd1a128674bc6fba4b8e8"},
{file = "astunparse-1.6.3.tar.gz", hash = "sha256:5ad93a8456f0d084c3456d059fd9a92cce667963232cbf763eac3bc5b7940872"},
]
atomicwrites = [
{file = "atomicwrites-1.4.1.tar.gz", hash = "sha256:81b2c9071a49367a7f770170e5eec8cb66567cfbbc8c73d20ce5ca4a8d71cf11"},
]
attrs = [
{file = "attrs-22.1.0-py2.py3-none-any.whl", hash = "sha256:86efa402f67bf2df34f51a335487cf46b1ec130d02b8d39fd248abfd30da551c"},
{file = "attrs-22.1.0.tar.gz", hash = "sha256:29adc2665447e5191d0e7c568fde78b21f9672d344281d0c6e1ab085429b22b6"},
]
babel = [
{file = "Babel-2.10.3-py3-none-any.whl", hash = "sha256:ff56f4892c1c4bf0d814575ea23471c230d544203c7748e8c68f0089478d48eb"},
{file = "Babel-2.10.3.tar.gz", hash = "sha256:7614553711ee97490f732126dc077f8d0ae084ebc6a96e23db1482afabdb2c51"},
]
backcall = [
{file = "backcall-0.2.0-py2.py3-none-any.whl", hash = "sha256:fbbce6a29f263178a1f7915c1940bde0ec2b2a967566fe1c65c1dfb7422bd255"},
{file = "backcall-0.2.0.tar.gz", hash = "sha256:5cbdbf27be5e7cfadb448baf0aa95508f91f2bbc6c6437cd9cd06e2a4c215e1e"},
]
"backports.zoneinfo" = [
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-macosx_10_14_x86_64.whl", hash = "sha256:da6013fd84a690242c310d77ddb8441a559e9cb3d3d59ebac9aca1a57b2e18bc"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:89a48c0d158a3cc3f654da4c2de1ceba85263fafb861b98b59040a5086259722"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:1c5742112073a563c81f786e77514969acb58649bcdf6cdf0b4ed31a348d4546"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-win32.whl", hash = "sha256:e8236383a20872c0cdf5a62b554b27538db7fa1bbec52429d8d106effbaeca08"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-win_amd64.whl", hash = "sha256:8439c030a11780786a2002261569bdf362264f605dfa4d65090b64b05c9f79a7"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:f04e857b59d9d1ccc39ce2da1021d196e47234873820cbeaad210724b1ee28ac"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:17746bd546106fa389c51dbea67c8b7c8f0d14b5526a579ca6ccf5ed72c526cf"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:5c144945a7752ca544b4b78c8c41544cdfaf9786f25fe5ffb10e838e19a27570"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-win32.whl", hash = "sha256:e55b384612d93be96506932a786bbcde5a2db7a9e6a4bb4bffe8b733f5b9036b"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-win_amd64.whl", hash = "sha256:a76b38c52400b762e48131494ba26be363491ac4f9a04c1b7e92483d169f6582"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:8961c0f32cd0336fb8e8ead11a1f8cd99ec07145ec2931122faaac1c8f7fd987"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-manylinux1_i686.whl", hash = "sha256:e81b76cace8eda1fca50e345242ba977f9be6ae3945af8d46326d776b4cf78d1"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:7b0a64cda4145548fed9efc10322770f929b944ce5cee6c0dfe0c87bf4c0c8c9"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-win32.whl", hash = "sha256:1b13e654a55cd45672cb54ed12148cd33628f672548f373963b0bff67b217328"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-win_amd64.whl", hash = "sha256:4a0f800587060bf8880f954dbef70de6c11bbe59c673c3d818921f042f9954a6"},
{file = "backports.zoneinfo-0.2.1.tar.gz", hash = "sha256:fadbfe37f74051d024037f223b8e001611eac868b5c5b06144ef4d8b799862f2"},
]
beautifulsoup4 = [
{file = "beautifulsoup4-4.11.1-py3-none-any.whl", hash = "sha256:58d5c3d29f5a36ffeb94f02f0d786cd53014cf9b3b3951d42e0080d8a9498d30"},
{file = "beautifulsoup4-4.11.1.tar.gz", hash = "sha256:ad9aa55b65ef2808eb405f46cf74df7fcb7044d5cbc26487f96eb2ef2e436693"},
]
black = [
{file = "black-22.6.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:f586c26118bc6e714ec58c09df0157fe2d9ee195c764f630eb0d8e7ccce72e69"},
{file = "black-22.6.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:b270a168d69edb8b7ed32c193ef10fd27844e5c60852039599f9184460ce0807"},
{file = "black-22.6.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:6797f58943fceb1c461fb572edbe828d811e719c24e03375fd25170ada53825e"},
{file = "black-22.6.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c85928b9d5f83b23cee7d0efcb310172412fbf7cb9d9ce963bd67fd141781def"},
{file = "black-22.6.0-cp310-cp310-win_amd64.whl", hash = "sha256:f6fe02afde060bbeef044af7996f335fbe90b039ccf3f5eb8f16df8b20f77666"},
{file = "black-22.6.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:cfaf3895a9634e882bf9d2363fed5af8888802d670f58b279b0bece00e9a872d"},
{file = "black-22.6.0-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:94783f636bca89f11eb5d50437e8e17fbc6a929a628d82304c80fa9cd945f256"},
{file = "black-22.6.0-cp36-cp36m-win_amd64.whl", hash = "sha256:2ea29072e954a4d55a2ff58971b83365eba5d3d357352a07a7a4df0d95f51c78"},
{file = "black-22.6.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:e439798f819d49ba1c0bd9664427a05aab79bfba777a6db94fd4e56fae0cb849"},
{file = "black-22.6.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:187d96c5e713f441a5829e77120c269b6514418f4513a390b0499b0987f2ff1c"},
{file = "black-22.6.0-cp37-cp37m-win_amd64.whl", hash = "sha256:074458dc2f6e0d3dab7928d4417bb6957bb834434516f21514138437accdbe90"},
{file = "black-22.6.0-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:a218d7e5856f91d20f04e931b6f16d15356db1c846ee55f01bac297a705ca24f"},
{file = "black-22.6.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:568ac3c465b1c8b34b61cd7a4e349e93f91abf0f9371eda1cf87194663ab684e"},
{file = "black-22.6.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:6c1734ab264b8f7929cef8ae5f900b85d579e6cbfde09d7387da8f04771b51c6"},
{file = "black-22.6.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c9a3ac16efe9ec7d7381ddebcc022119794872abce99475345c5a61aa18c45ad"},
{file = "black-22.6.0-cp38-cp38-win_amd64.whl", hash = "sha256:b9fd45787ba8aa3f5e0a0a98920c1012c884622c6c920dbe98dbd05bc7c70fbf"},
{file = "black-22.6.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:7ba9be198ecca5031cd78745780d65a3f75a34b2ff9be5837045dce55db83d1c"},
{file = "black-22.6.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:a3db5b6409b96d9bd543323b23ef32a1a2b06416d525d27e0f67e74f1446c8f2"},
{file = "black-22.6.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:560558527e52ce8afba936fcce93a7411ab40c7d5fe8c2463e279e843c0328ee"},
{file = "black-22.6.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b154e6bbde1e79ea3260c4b40c0b7b3109ffcdf7bc4ebf8859169a6af72cd70b"},
{file = "black-22.6.0-cp39-cp39-win_amd64.whl", hash = "sha256:4af5bc0e1f96be5ae9bd7aaec219c901a94d6caa2484c21983d043371c733fc4"},
{file = "black-22.6.0-py3-none-any.whl", hash = "sha256:ac609cf8ef5e7115ddd07d85d988d074ed00e10fbc3445aee393e70164a2219c"},
{file = "black-22.6.0.tar.gz", hash = "sha256:6c6d39e28aed379aec40da1c65434c77d75e65bb59a1e1c283de545fb4e7c6c9"},
]
bleach = [
{file = "bleach-5.0.1-py3-none-any.whl", hash = "sha256:085f7f33c15bd408dd9b17a4ad77c577db66d76203e5984b1bd59baeee948b2a"},
{file = "bleach-5.0.1.tar.gz", hash = "sha256:0d03255c47eb9bd2f26aa9bb7f2107732e7e8fe195ca2f64709fcf3b0a4a085c"},
]
cachetools = [
{file = "cachetools-5.2.0-py3-none-any.whl", hash = "sha256:f9f17d2aec496a9aa6b76f53e3b614c965223c061982d434d160f930c698a9db"},
{file = "cachetools-5.2.0.tar.gz", hash = "sha256:6a94c6402995a99c3970cc7e4884bb60b4a8639938157eeed436098bf9831757"},
]
causalml = [
{file = "causalml-0.12.3.tar.gz", hash = "sha256:4d85afa0590e4c60818311c2be9a5be64b8c6964b134c77654b738df551454fa"},
]
certifi = [
{file = "certifi-2022.6.15-py3-none-any.whl", hash = "sha256:fe86415d55e84719d75f8b69414f6438ac3547d2078ab91b67e779ef69378412"},
{file = "certifi-2022.6.15.tar.gz", hash = "sha256:84c85a9078b11105f04f3036a9482ae10e4621616db313fe045dd24743a0820d"},
]
cffi = [
{file = "cffi-1.15.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:a66d3508133af6e8548451b25058d5812812ec3798c886bf38ed24a98216fab2"},
{file = "cffi-1.15.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:470c103ae716238bbe698d67ad020e1db9d9dba34fa5a899b5e21577e6d52ed2"},
{file = "cffi-1.15.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:9ad5db27f9cabae298d151c85cf2bad1d359a1b9c686a275df03385758e2f914"},
{file = "cffi-1.15.1-cp27-cp27m-win32.whl", hash = "sha256:b3bbeb01c2b273cca1e1e0c5df57f12dce9a4dd331b4fa1635b8bec26350bde3"},
{file = "cffi-1.15.1-cp27-cp27m-win_amd64.whl", hash = "sha256:e00b098126fd45523dd056d2efba6c5a63b71ffe9f2bbe1a4fe1716e1d0c331e"},
{file = "cffi-1.15.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:d61f4695e6c866a23a21acab0509af1cdfd2c013cf256bbf5b6b5e2695827162"},
{file = "cffi-1.15.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:ed9cb427ba5504c1dc15ede7d516b84757c3e3d7868ccc85121d9310d27eed0b"},
{file = "cffi-1.15.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:39d39875251ca8f612b6f33e6b1195af86d1b3e60086068be9cc053aa4376e21"},
{file = "cffi-1.15.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:285d29981935eb726a4399badae8f0ffdff4f5050eaa6d0cfc3f64b857b77185"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3eb6971dcff08619f8d91607cfc726518b6fa2a9eba42856be181c6d0d9515fd"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:21157295583fe8943475029ed5abdcf71eb3911894724e360acff1d61c1d54bc"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5635bd9cb9731e6d4a1132a498dd34f764034a8ce60cef4f5319c0541159392f"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2012c72d854c2d03e45d06ae57f40d78e5770d252f195b93f581acf3ba44496e"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dd86c085fae2efd48ac91dd7ccffcfc0571387fe1193d33b6394db7ef31fe2a4"},
{file = "cffi-1.15.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:fa6693661a4c91757f4412306191b6dc88c1703f780c8234035eac011922bc01"},
{file = "cffi-1.15.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:59c0b02d0a6c384d453fece7566d1c7e6b7bae4fc5874ef2ef46d56776d61c9e"},
{file = "cffi-1.15.1-cp310-cp310-win32.whl", hash = "sha256:cba9d6b9a7d64d4bd46167096fc9d2f835e25d7e4c121fb2ddfc6528fb0413b2"},
{file = "cffi-1.15.1-cp310-cp310-win_amd64.whl", hash = "sha256:ce4bcc037df4fc5e3d184794f27bdaab018943698f4ca31630bc7f84a7b69c6d"},
{file = "cffi-1.15.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3d08afd128ddaa624a48cf2b859afef385b720bb4b43df214f85616922e6a5ac"},
{file = "cffi-1.15.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:3799aecf2e17cf585d977b780ce79ff0dc9b78d799fc694221ce814c2c19db83"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a591fe9e525846e4d154205572a029f653ada1a78b93697f3b5a8f1f2bc055b9"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3548db281cd7d2561c9ad9984681c95f7b0e38881201e157833a2342c30d5e8c"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:91fc98adde3d7881af9b59ed0294046f3806221863722ba7d8d120c575314325"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:94411f22c3985acaec6f83c6df553f2dbe17b698cc7f8ae751ff2237d96b9e3c"},
{file = "cffi-1.15.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:03425bdae262c76aad70202debd780501fabeaca237cdfddc008987c0e0f59ef"},
{file = "cffi-1.15.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:cc4d65aeeaa04136a12677d3dd0b1c0c94dc43abac5860ab33cceb42b801c1e8"},
{file = "cffi-1.15.1-cp311-cp311-win32.whl", hash = "sha256:a0f100c8912c114ff53e1202d0078b425bee3649ae34d7b070e9697f93c5d52d"},
{file = "cffi-1.15.1-cp311-cp311-win_amd64.whl", hash = "sha256:04ed324bda3cda42b9b695d51bb7d54b680b9719cfab04227cdd1e04e5de3104"},
{file = "cffi-1.15.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:50a74364d85fd319352182ef59c5c790484a336f6db772c1a9231f1c3ed0cbd7"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e263d77ee3dd201c3a142934a086a4450861778baaeeb45db4591ef65550b0a6"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:cec7d9412a9102bdc577382c3929b337320c4c4c4849f2c5cdd14d7368c5562d"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4289fc34b2f5316fbb762d75362931e351941fa95fa18789191b33fc4cf9504a"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:173379135477dc8cac4bc58f45db08ab45d228b3363adb7af79436135d028405"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:6975a3fac6bc83c4a65c9f9fcab9e47019a11d3d2cf7f3c0d03431bf145a941e"},
{file = "cffi-1.15.1-cp36-cp36m-win32.whl", hash = "sha256:2470043b93ff09bf8fb1d46d1cb756ce6132c54826661a32d4e4d132e1977adf"},
{file = "cffi-1.15.1-cp36-cp36m-win_amd64.whl", hash = "sha256:30d78fbc8ebf9c92c9b7823ee18eb92f2e6ef79b45ac84db507f52fbe3ec4497"},
{file = "cffi-1.15.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:198caafb44239b60e252492445da556afafc7d1e3ab7a1fb3f0584ef6d742375"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5ef34d190326c3b1f822a5b7a45f6c4535e2f47ed06fec77d3d799c450b2651e"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8102eaf27e1e448db915d08afa8b41d6c7ca7a04b7d73af6514df10a3e74bd82"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5df2768244d19ab7f60546d0c7c63ce1581f7af8b5de3eb3004b9b6fc8a9f84b"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a8c4917bd7ad33e8eb21e9a5bbba979b49d9a97acb3a803092cbc1133e20343c"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0e2642fe3142e4cc4af0799748233ad6da94c62a8bec3a6648bf8ee68b1c7426"},
{file = "cffi-1.15.1-cp37-cp37m-win32.whl", hash = "sha256:e229a521186c75c8ad9490854fd8bbdd9a0c9aa3a524326b55be83b54d4e0ad9"},
{file = "cffi-1.15.1-cp37-cp37m-win_amd64.whl", hash = "sha256:a0b71b1b8fbf2b96e41c4d990244165e2c9be83d54962a9a1d118fd8657d2045"},
{file = "cffi-1.15.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:320dab6e7cb2eacdf0e658569d2575c4dad258c0fcc794f46215e1e39f90f2c3"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1e74c6b51a9ed6589199c787bf5f9875612ca4a8a0785fb2d4a84429badaf22a"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a5c84c68147988265e60416b57fc83425a78058853509c1b0629c180094904a5"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3b926aa83d1edb5aa5b427b4053dc420ec295a08e40911296b9eb1b6170f6cca"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:87c450779d0914f2861b8526e035c5e6da0a3199d8f1add1a665e1cbc6fc6d02"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4f2c9f67e9821cad2e5f480bc8d83b8742896f1242dba247911072d4fa94c192"},
{file = "cffi-1.15.1-cp38-cp38-win32.whl", hash = "sha256:8b7ee99e510d7b66cdb6c593f21c043c248537a32e0bedf02e01e9553a172314"},
{file = "cffi-1.15.1-cp38-cp38-win_amd64.whl", hash = "sha256:00a9ed42e88df81ffae7a8ab6d9356b371399b91dbdf0c3cb1e84c03a13aceb5"},
{file = "cffi-1.15.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:54a2db7b78338edd780e7ef7f9f6c442500fb0d41a5a4ea24fff1c929d5af585"},
{file = "cffi-1.15.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:fcd131dd944808b5bdb38e6f5b53013c5aa4f334c5cad0c72742f6eba4b73db0"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7473e861101c9e72452f9bf8acb984947aa1661a7704553a9f6e4baa5ba64415"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6c9a799e985904922a4d207a94eae35c78ebae90e128f0c4e521ce339396be9d"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3bcde07039e586f91b45c88f8583ea7cf7a0770df3a1649627bf598332cb6984"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:33ab79603146aace82c2427da5ca6e58f2b3f2fb5da893ceac0c42218a40be35"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d598b938678ebf3c67377cdd45e09d431369c3b1a5b331058c338e201f12b27"},
{file = "cffi-1.15.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:db0fbb9c62743ce59a9ff687eb5f4afbe77e5e8403d6697f7446e5f609976f76"},
{file = "cffi-1.15.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:98d85c6a2bef81588d9227dde12db8a7f47f639f4a17c9ae08e773aa9c697bf3"},
{file = "cffi-1.15.1-cp39-cp39-win32.whl", hash = "sha256:40f4774f5a9d4f5e344f31a32b5096977b5d48560c5592e2f3d2c4374bd543ee"},
{file = "cffi-1.15.1-cp39-cp39-win_amd64.whl", hash = "sha256:70df4e3b545a17496c9b3f41f5115e69a4f2e77e94e1d2a8e1070bc0c38c8a3c"},
{file = "cffi-1.15.1.tar.gz", hash = "sha256:d400bfb9a37b1351253cb402671cea7e89bdecc294e8016a707f6d1d8ac934f9"},
]
charset-normalizer = [
{file = "charset-normalizer-2.1.0.tar.gz", hash = "sha256:575e708016ff3a5e3681541cb9d79312c416835686d054a23accb873b254f413"},
{file = "charset_normalizer-2.1.0-py3-none-any.whl", hash = "sha256:5189b6f22b01957427f35b6a08d9a0bc45b46d3788ef5a92e978433c7a35f8a5"},
]
click = [
{file = "click-8.1.3-py3-none-any.whl", hash = "sha256:bb4d8133cb15a609f44e8213d9b391b0809795062913b383c62be0ee95b1db48"},
{file = "click-8.1.3.tar.gz", hash = "sha256:7682dc8afb30297001674575ea00d1814d808d6a36af415a82bd481d37ba7b8e"},
]
cloudpickle = [
{file = "cloudpickle-2.1.0-py3-none-any.whl", hash = "sha256:b5c434f75c34624eedad3a14f2be5ac3b5384774d5b0e3caf905c21479e6c4b1"},
{file = "cloudpickle-2.1.0.tar.gz", hash = "sha256:bb233e876a58491d9590a676f93c7a5473a08f747d5ab9df7f9ce564b3e7938e"},
]
colorama = [
{file = "colorama-0.4.5-py2.py3-none-any.whl", hash = "sha256:854bf444933e37f5824ae7bfc1e98d5bce2ebe4160d46b5edf346a89358e99da"},
{file = "colorama-0.4.5.tar.gz", hash = "sha256:e6c6b4334fc50988a639d9b98aa429a0b57da6e17b9a44f0451f930b6967b7a4"},
]
commonmark = [
{file = "commonmark-0.9.1-py2.py3-none-any.whl", hash = "sha256:da2f38c92590f83de410ba1a3cbceafbc74fee9def35f9251ba9a971d6d66fd9"},
{file = "commonmark-0.9.1.tar.gz", hash = "sha256:452f9dc859be7f06631ddcb328b6919c67984aca654e5fefb3914d54691aed60"},
]
cryptography = [
{file = "cryptography-37.0.4-cp36-abi3-macosx_10_10_universal2.whl", hash = "sha256:549153378611c0cca1042f20fd9c5030d37a72f634c9326e225c9f666d472884"},
{file = "cryptography-37.0.4-cp36-abi3-macosx_10_10_x86_64.whl", hash = "sha256:a958c52505c8adf0d3822703078580d2c0456dd1d27fabfb6f76fe63d2971cd6"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:f721d1885ecae9078c3f6bbe8a88bc0786b6e749bf32ccec1ef2b18929a05046"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:3d41b965b3380f10e4611dbae366f6dc3cefc7c9ac4e8842a806b9672ae9add5"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:80f49023dd13ba35f7c34072fa17f604d2f19bf0989f292cedf7ab5770b87a0b"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f2dcb0b3b63afb6df7fd94ec6fbddac81b5492513f7b0436210d390c14d46ee8"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_24_x86_64.whl", hash = "sha256:b7f8dd0d4c1f21759695c05a5ec8536c12f31611541f8904083f3dc582604280"},
{file = "cryptography-37.0.4-cp36-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:30788e070800fec9bbcf9faa71ea6d8068f5136f60029759fd8c3efec3c9dcb3"},
{file = "cryptography-37.0.4-cp36-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:190f82f3e87033821828f60787cfa42bff98404483577b591429ed99bed39d59"},
{file = "cryptography-37.0.4-cp36-abi3-win32.whl", hash = "sha256:b62439d7cd1222f3da897e9a9fe53bbf5c104fff4d60893ad1355d4c14a24157"},
{file = "cryptography-37.0.4-cp36-abi3-win_amd64.whl", hash = "sha256:f7a6de3e98771e183645181b3627e2563dcde3ce94a9e42a3f427d2255190327"},
{file = "cryptography-37.0.4-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6bc95ed67b6741b2607298f9ea4932ff157e570ef456ef7ff0ef4884a134cc4b"},
{file = "cryptography-37.0.4-pp37-pypy37_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:f8c0a6e9e1dd3eb0414ba320f85da6b0dcbd543126e30fcc546e7372a7fbf3b9"},
{file = "cryptography-37.0.4-pp38-pypy38_pp73-macosx_10_10_x86_64.whl", hash = "sha256:e007f052ed10cc316df59bc90fbb7ff7950d7e2919c9757fd42a2b8ecf8a5f67"},
{file = "cryptography-37.0.4-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7bc997818309f56c0038a33b8da5c0bfbb3f1f067f315f9abd6fc07ad359398d"},
{file = "cryptography-37.0.4-pp38-pypy38_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:d204833f3c8a33bbe11eda63a54b1aad7aa7456ed769a982f21ec599ba5fa282"},
{file = "cryptography-37.0.4-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:75976c217f10d48a8b5a8de3d70c454c249e4b91851f6838a4e48b8f41eb71aa"},
{file = "cryptography-37.0.4-pp39-pypy39_pp73-macosx_10_10_x86_64.whl", hash = "sha256:7099a8d55cd49b737ffc99c17de504f2257e3787e02abe6d1a6d136574873441"},
{file = "cryptography-37.0.4-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2be53f9f5505673eeda5f2736bea736c40f051a739bfae2f92d18aed1eb54596"},
{file = "cryptography-37.0.4-pp39-pypy39_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:91ce48d35f4e3d3f1d83e29ef4a9267246e6a3be51864a5b7d2247d5086fa99a"},
{file = "cryptography-37.0.4-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:4c590ec31550a724ef893c50f9a97a0c14e9c851c85621c5650d699a7b88f7ab"},
{file = "cryptography-37.0.4.tar.gz", hash = "sha256:63f9c17c0e2474ccbebc9302ce2f07b55b3b3fcb211ded18a42d5764f5c10a82"},
]
cycler = [
{file = "cycler-0.11.0-py3-none-any.whl", hash = "sha256:3a27e95f763a428a739d2add979fa7494c912a32c17c4c38c4d5f082cad165a3"},
{file = "cycler-0.11.0.tar.gz", hash = "sha256:9c87405839a19696e837b3b818fed3f5f69f16f1eec1a1ad77e043dcea9c772f"},
]
cython = [
{file = "Cython-0.29.32-cp27-cp27m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:39afb4679b8c6bf7ccb15b24025568f4f9b4d7f9bf3cbd981021f542acecd75b"},
{file = "Cython-0.29.32-cp27-cp27m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:dbee03b8d42dca924e6aa057b836a064c769ddfd2a4c2919e65da2c8a362d528"},
{file = "Cython-0.29.32-cp27-cp27mu-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:5ba622326f2862f9c1f99ca8d47ade49871241920a352c917e16861e25b0e5c3"},
{file = "Cython-0.29.32-cp27-cp27mu-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:e6ffa08aa1c111a1ebcbd1cf4afaaec120bc0bbdec3f2545f8bb7d3e8e77a1cd"},
{file = "Cython-0.29.32-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:97335b2cd4acebf30d14e2855d882de83ad838491a09be2011745579ac975833"},
{file = "Cython-0.29.32-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:06be83490c906b6429b4389e13487a26254ccaad2eef6f3d4ee21d8d3a4aaa2b"},
{file = "Cython-0.29.32-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:eefd2b9a5f38ded8d859fe96cc28d7d06e098dc3f677e7adbafda4dcdd4a461c"},
{file = "Cython-0.29.32-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:5514f3b4122cb22317122a48e175a7194e18e1803ca555c4c959d7dfe68eaf98"},
{file = "Cython-0.29.32-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:656dc5ff1d269de4d11ee8542f2ffd15ab466c447c1f10e5b8aba6f561967276"},
{file = "Cython-0.29.32-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:cdf10af3e2e3279dc09fdc5f95deaa624850a53913f30350ceee824dc14fc1a6"},
{file = "Cython-0.29.32-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:3875c2b2ea752816a4d7ae59d45bb546e7c4c79093c83e3ba7f4d9051dd02928"},
{file = "Cython-0.29.32-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:79e3bab19cf1b021b613567c22eb18b76c0c547b9bc3903881a07bfd9e7e64cf"},
{file = "Cython-0.29.32-cp35-cp35m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b0595aee62809ba353cebc5c7978e0e443760c3e882e2c7672c73ffe46383673"},
{file = "Cython-0.29.32-cp35-cp35m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:0ea8267fc373a2c5064ad77d8ff7bf0ea8b88f7407098ff51829381f8ec1d5d9"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:c8e8025f496b5acb6ba95da2fb3e9dacffc97d9a92711aacfdd42f9c5927e094"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:afbce249133a830f121b917f8c9404a44f2950e0e4f5d1e68f043da4c2e9f457"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:513e9707407608ac0d306c8b09d55a28be23ea4152cbd356ceaec0f32ef08d65"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e83228e0994497900af954adcac27f64c9a57cd70a9ec768ab0cb2c01fd15cf1"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:ea1dcc07bfb37367b639415333cfbfe4a93c3be340edf1db10964bc27d42ed64"},
{file = "Cython-0.29.32-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:8669cadeb26d9a58a5e6b8ce34d2c8986cc3b5c0bfa77eda6ceb471596cb2ec3"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:ed087eeb88a8cf96c60fb76c5c3b5fb87188adee5e179f89ec9ad9a43c0c54b3"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:3f85eb2343d20d91a4ea9cf14e5748092b376a64b7e07fc224e85b2753e9070b"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:63b79d9e1f7c4d1f498ab1322156a0d7dc1b6004bf981a8abda3f66800e140cd"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e1958e0227a4a6a2c06fd6e35b7469de50adf174102454db397cec6e1403cce3"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:856d2fec682b3f31583719cb6925c6cdbb9aa30f03122bcc45c65c8b6f515754"},
{file = "Cython-0.29.32-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:479690d2892ca56d34812fe6ab8f58e4b2e0129140f3d94518f15993c40553da"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:67fdd2f652f8d4840042e2d2d91e15636ba2bcdcd92e7e5ffbc68e6ef633a754"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:4a4b03ab483271f69221c3210f7cde0dcc456749ecf8243b95bc7a701e5677e0"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:40eff7aa26e91cf108fd740ffd4daf49f39b2fdffadabc7292b4b7dc5df879f0"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:0bbc27abdf6aebfa1bce34cd92bd403070356f28b0ecb3198ff8a182791d58b9"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:cddc47ec746a08603037731f5d10aebf770ced08666100bd2cdcaf06a85d4d1b"},
{file = "Cython-0.29.32-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:eca3065a1279456e81c615211d025ea11bfe4e19f0c5650b859868ca04b3fcbd"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:d968ffc403d92addf20b68924d95428d523436adfd25cf505d427ed7ba3bee8b"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:f3fd44cc362eee8ae569025f070d56208908916794b6ab21e139cea56470a2b3"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:b6da3063c5c476f5311fd76854abae6c315f1513ef7d7904deed2e774623bbb9"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:061e25151c38f2361bc790d3bcf7f9d9828a0b6a4d5afa56fbed3bd33fb2373a"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:f9944013588a3543fca795fffb0a070a31a243aa4f2d212f118aa95e69485831"},
{file = "Cython-0.29.32-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:07d173d3289415bb496e72cb0ddd609961be08fe2968c39094d5712ffb78672b"},
{file = "Cython-0.29.32-py2.py3-none-any.whl", hash = "sha256:eeb475eb6f0ccf6c039035eb4f0f928eb53ead88777e0a760eccb140ad90930b"},
{file = "Cython-0.29.32.tar.gz", hash = "sha256:8733cf4758b79304f2a4e39ebfac5e92341bce47bcceb26c1254398b2f8c1af7"},
]
debugpy = [
{file = "debugpy-1.6.2-cp310-cp310-macosx_10_15_universal2.whl", hash = "sha256:77a47d596ce8c69673d5f0c9876a80cb5a6cbc964f3b31b2d44683c7c01b6634"},
{file = "debugpy-1.6.2-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:726e5cc0ed5bc63e821dc371d88ddae5cba85e2ad207bf5fefc808b29421cb4c"},
{file = "debugpy-1.6.2-cp310-cp310-win32.whl", hash = "sha256:9809bd1cdc0026fab711e280e0cb5d8f89ae5f4f74701aba5bda9a20a6afb567"},
{file = "debugpy-1.6.2-cp310-cp310-win_amd64.whl", hash = "sha256:40741d4bbf59baca1e97a5123514afcc036423caae5f24db23a865c0b4167c34"},
{file = "debugpy-1.6.2-cp37-cp37m-macosx_10_15_x86_64.whl", hash = "sha256:67749e972213c395647a8798cc8377646e581e1fe97d0b1b7607e6b112ae4511"},
{file = "debugpy-1.6.2-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:4e3c43d650a1e5fa7110af380fb59061bcba1e7348c00237e7473c55ae499b96"},
{file = "debugpy-1.6.2-cp37-cp37m-win32.whl", hash = "sha256:9e572c2ac3dd93f3f1a038a9226e7cc0d7326b8d345c9b9ce6fbf9cb9822e314"},
{file = "debugpy-1.6.2-cp37-cp37m-win_amd64.whl", hash = "sha256:ac5d9e625d291a041ff3eaf65bdb816eb79a5b204cf9f1ffaf9617c0eadf96fa"},
{file = "debugpy-1.6.2-cp38-cp38-macosx_10_15_x86_64.whl", hash = "sha256:9f72435bc9a2026a35a41221beff853dd4b6b17567ba9b9d349ee9512eb71ce6"},
{file = "debugpy-1.6.2-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:aaf579de5ecd02634d601d7cf5b6baae5f5bab89a55ef78e0904d766ef477729"},
{file = "debugpy-1.6.2-cp38-cp38-win32.whl", hash = "sha256:0984086a670f46c75b5046b39a55f34e4120bee78928ac4c3c7f1c7b8be1d8be"},
{file = "debugpy-1.6.2-cp38-cp38-win_amd64.whl", hash = "sha256:19337bb8ff87da2535ac00ea3877ceaf40ff3c681421d1a96ab4d67dad031a16"},
{file = "debugpy-1.6.2-cp39-cp39-macosx_10_15_x86_64.whl", hash = "sha256:163f282287ce68b00a51e9dcd7ad461ef288d740dcb3a2f22c01c62f31b62696"},
{file = "debugpy-1.6.2-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:4909bb2f8e5c8fe33d6ec5b7764100b494289252ebe94ec7838b30467435f1cb"},
{file = "debugpy-1.6.2-cp39-cp39-win32.whl", hash = "sha256:3b4657d3cd20aa454b62a70040524d3e785efc9a8488d16cd0e6caeb7b2a3f07"},
{file = "debugpy-1.6.2-cp39-cp39-win_amd64.whl", hash = "sha256:79d9ac34542b830a7954ab111ad8a4c790f1f836b895d03223aea4216b739208"},
{file = "debugpy-1.6.2-py2.py3-none-any.whl", hash = "sha256:0bfdcf261f97a603d7ef7ab6972cdf7136201fde93d19bf3f917d0d2e43a5694"},
{file = "debugpy-1.6.2.zip", hash = "sha256:e6047272e97a11aa6898138c1c88c8cf61838deeb2a4f0a74e63bb567f8dafc6"},
]
decorator = [
{file = "decorator-5.1.1-py3-none-any.whl", hash = "sha256:b8c3f85900b9dc423225913c5aace94729fe1fa9763b38939a95226f02d37186"},
{file = "decorator-5.1.1.tar.gz", hash = "sha256:637996211036b6385ef91435e4fae22989472f9d571faba8927ba8253acbc330"},
]
defusedxml = [
{file = "defusedxml-0.7.1-py2.py3-none-any.whl", hash = "sha256:a352e7e428770286cc899e2542b6cdaedb2b4953ff269a210103ec58f6198a61"},
{file = "defusedxml-0.7.1.tar.gz", hash = "sha256:1bb3032db185915b62d7c6209c5a8792be6a32ab2fedacc84e01b52c51aa3e69"},
]
dill = [
{file = "dill-0.3.5.1-py2.py3-none-any.whl", hash = "sha256:33501d03270bbe410c72639b350e941882a8b0fd55357580fbc873fba0c59302"},
{file = "dill-0.3.5.1.tar.gz", hash = "sha256:d75e41f3eff1eee599d738e76ba8f4ad98ea229db8b085318aa2b3333a208c86"},
]
docutils = [
{file = "docutils-0.17.1-py2.py3-none-any.whl", hash = "sha256:cf316c8370a737a022b72b56874f6602acf974a37a9fba42ec2876387549fc61"},
{file = "docutils-0.17.1.tar.gz", hash = "sha256:686577d2e4c32380bb50cbb22f575ed742d58168cee37e99117a854bcd88f125"},
]
econml = [
{file = "econml-0.13.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:53f85030480858a5d325e5b7ab638775faad281a16fba639b337aeaa49629a95"},
{file = "econml-0.13.1-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8567287c7349ba671d94d8a37c271095a9109c90a1c6e94fa03fbcda0c0d3554"},
{file = "econml-0.13.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:37816ffa16154678dce09a9a1d40b24ac85d689d496fbe122a9274645516821f"},
{file = "econml-0.13.1-cp36-cp36m-win32.whl", hash = "sha256:075ad0e5e5db7ffc504263f0c8853fff6cd95973f9cfb01ef674aaca8cdcba68"},
{file = "econml-0.13.1-cp36-cp36m-win_amd64.whl", hash = "sha256:022682d1d10e0fc4b33eed52c5149397cf49a2325c03482dae1eff4494767870"},
{file = "econml-0.13.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:dcaf25cb1fd515a4ab26c1820240604a0d01f7fc3e40cbf325077c0351252292"},
{file = "econml-0.13.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f2f173b95d1c92d69f2fbe69f23de436deae3cb6462e34ad84bb7746bdcd90e0"},
{file = "econml-0.13.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:eb1d9f313c909e5cf3da7861dbc62dccf2be80128a2fb81ce4966dc01bf41946"},
{file = "econml-0.13.1-cp37-cp37m-win32.whl", hash = "sha256:3d632e65e70f14364acadfc6882a8cf0ecc2227cf5a8e6e007aee5961bfff7a7"},
{file = "econml-0.13.1-cp37-cp37m-win_amd64.whl", hash = "sha256:e154b07c3b34aa2ffee35caa6ab79f5a57f762ee4ce2d496b294391c4304c245"},
{file = "econml-0.13.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:16d29c53eb6591b3eabb4603d7e72ab25f4bd4274b0fb78916327742bae81081"},
{file = "econml-0.13.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a4abaecd178bbfd3db1ed0820c14b1c4cb5053bdc3382c23a2d194d059f29412"},
{file = "econml-0.13.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0cd016c2d8cd2e77440efbc27f49d3a42aa3e1795bdf7db80909a5b4c65497a7"},
{file = "econml-0.13.1-cp38-cp38-win32.whl", hash = "sha256:83b3d59a03be978d35f9f82d92de2d62773877298f414e72ab435e4dbb5d939a"},
{file = "econml-0.13.1-cp38-cp38-win_amd64.whl", hash = "sha256:03d7a1db756c3ec9a3913f18575401660d433bf415af8107c1a160d859e216bd"},
{file = "econml-0.13.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:ac367fa415d94496b643d003fffc5aa079eebbea566020d88f85fcae23b0234f"},
{file = "econml-0.13.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f3aa8d8cc8dadbce7dc6fba4d8d17cc46cd6cdd2da8ade7c9f0ebfab491ee9dd"},
{file = "econml-0.13.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b01ba564050e5973ba309f0127289a1cf06274d2f294df80245efb95c55d620e"},
{file = "econml-0.13.1-cp39-cp39-win32.whl", hash = "sha256:cb0cb22ecbfbdd75edfab1a8539173b69a322a270c8c53e574fd50ec68784b0f"},
{file = "econml-0.13.1-cp39-cp39-win_amd64.whl", hash = "sha256:38a09d3bfde8c450212b18a4636af2a64685d1a0f8d76c8cfe0830437a289eb6"},
{file = "econml-0.13.1.tar.gz", hash = "sha256:9060e54f46657a62c67e26a6755feb0531106b24e7444fa4c86b8139c89cf9b9"},
]
entrypoints = [
{file = "entrypoints-0.4-py3-none-any.whl", hash = "sha256:f174b5ff827504fd3cd97cc3f8649f3693f51538c7e4bdf3ef002c8429d42f9f"},
{file = "entrypoints-0.4.tar.gz", hash = "sha256:b706eddaa9218a19ebcd67b56818f05bb27589b1ca9e8d797b74affad4ccacd4"},
]
executing = [
{file = "executing-0.9.1-py2.py3-none-any.whl", hash = "sha256:4ce4d6082d99361c0231fc31ac1a0f56979363cc6819de0b1410784f99e49105"},
{file = "executing-0.9.1.tar.gz", hash = "sha256:ea278e2cf90cbbacd24f1080dd1f0ac25b71b2e21f50ab439b7ba45dd3195587"},
]
fastjsonschema = [
{file = "fastjsonschema-2.16.1-py3-none-any.whl", hash = "sha256:2f7158c4de792555753d6c2277d6a2af2d406dfd97aeca21d17173561ede4fe6"},
{file = "fastjsonschema-2.16.1.tar.gz", hash = "sha256:d6fa3ffbe719768d70e298b9fb847484e2bdfdb7241ed052b8d57a9294a8c334"},
]
flake8 = [
{file = "flake8-4.0.1-py2.py3-none-any.whl", hash = "sha256:479b1304f72536a55948cb40a32dce8bb0ffe3501e26eaf292c7e60eb5e0428d"},
{file = "flake8-4.0.1.tar.gz", hash = "sha256:806e034dda44114815e23c16ef92f95c91e4c71100ff52813adf7132a6ad870d"},
]
flaky = [
{file = "flaky-3.7.0-py2.py3-none-any.whl", hash = "sha256:d6eda73cab5ae7364504b7c44670f70abed9e75f77dd116352f662817592ec9c"},
{file = "flaky-3.7.0.tar.gz", hash = "sha256:3ad100780721a1911f57a165809b7ea265a7863305acb66708220820caf8aa0d"},
]
flatbuffers = [
{file = "flatbuffers-1.12-py2.py3-none-any.whl", hash = "sha256:9e9ef47fa92625c4721036e7c4124182668dc6021d9e7c73704edd395648deb9"},
{file = "flatbuffers-1.12.tar.gz", hash = "sha256:63bb9a722d5e373701913e226135b28a6f6ac200d5cc7b4d919fa38d73b44610"},
]
fonttools = [
{file = "fonttools-4.34.4-py3-none-any.whl", hash = "sha256:d73f25b283cd8033367451122aa868a23de0734757a01984e4b30b18b9050c72"},
{file = "fonttools-4.34.4.zip", hash = "sha256:9a1c52488045cd6c6491fd07711a380f932466e317cb8e016fc4e99dc7eac2f0"},
]
future = [
{file = "future-0.18.2.tar.gz", hash = "sha256:b1bead90b70cf6ec3f0710ae53a525360fa360d306a86583adc6bf83a4db537d"},
]
gast = [
{file = "gast-0.4.0-py3-none-any.whl", hash = "sha256:b7adcdd5adbebf1adf17378da5ba3f543684dbec47b1cda1f3997e573cd542c4"},
{file = "gast-0.4.0.tar.gz", hash = "sha256:40feb7b8b8434785585ab224d1568b857edb18297e5a3047f1ba012bc83b42c1"},
]
google-auth = [
{file = "google-auth-2.10.0.tar.gz", hash = "sha256:7904dbd44b745c7323fef29565adee2fe7ff48473e2d94443aced40b0404a395"},
{file = "google_auth-2.10.0-py2.py3-none-any.whl", hash = "sha256:1deba4a54f95ef67b4139eaf5c20eaa7047215eec9f6a2344599b8596db8863b"},
]
google-auth-oauthlib = [
{file = "google-auth-oauthlib-0.4.6.tar.gz", hash = "sha256:a90a072f6993f2c327067bf65270046384cda5a8ecb20b94ea9a687f1f233a7a"},
{file = "google_auth_oauthlib-0.4.6-py2.py3-none-any.whl", hash = "sha256:3f2a6e802eebbb6fb736a370fbf3b055edcb6b52878bf2f26330b5e041316c73"},
]
google-pasta = [
{file = "google-pasta-0.2.0.tar.gz", hash = "sha256:c9f2c8dfc8f96d0d5808299920721be30c9eec37f2389f28904f454565c8a16e"},
{file = "google_pasta-0.2.0-py2-none-any.whl", hash = "sha256:4612951da876b1a10fe3960d7226f0c7682cf901e16ac06e473b267a5afa8954"},
{file = "google_pasta-0.2.0-py3-none-any.whl", hash = "sha256:b32482794a366b5366a32c92a9a9201b107821889935a02b3e51f6b432ea84ed"},
]
graphviz = [
{file = "graphviz-0.20.1-py3-none-any.whl", hash = "sha256:587c58a223b51611c0cf461132da386edd896a029524ca61a1462b880bf97977"},
{file = "graphviz-0.20.1.zip", hash = "sha256:8c58f14adaa3b947daf26c19bc1e98c4e0702cdc31cf99153e6f06904d492bf8"},
]
grpcio = [
{file = "grpcio-1.48.0-cp310-cp310-linux_armv7l.whl", hash = "sha256:4a049a032144641ed5d073535c0dc69eb6029187cc729a66946c86dcc8eec3a1"},
{file = "grpcio-1.48.0-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:f8bc76f5cd95f5476e5285fe5d3704a9332586a569fbbccef551b0b6f7a270f9"},
{file = "grpcio-1.48.0-cp310-cp310-manylinux_2_17_aarch64.whl", hash = "sha256:448d397fe88e9fef8170f019b86abdc4d554ae311aaf4dbff1532fde227d3308"},
{file = "grpcio-1.48.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8f9b6b6f7c83869d2316c5d13f953381881a16741275a34ec5ed5762f11b206e"},
{file = "grpcio-1.48.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5bd8541c4b6b43c9024496d30b4a12346325d3a17a1f3c80ad8924caed1e35c3"},
{file = "grpcio-1.48.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:877d33aeba05ae0b9e81761a694914ed33613f655c35f6bbcf4ebbcb984e0167"},
{file = "grpcio-1.48.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:cd01a8201fd8ab2ce496f7e65975da1f1e629eac8eea84ead0fd77e32e4350cd"},
{file = "grpcio-1.48.0-cp310-cp310-win32.whl", hash = "sha256:0388da923dff58ba7f711233e41c2b749b5817b8e0f137a107672d9c15a1009c"},
{file = "grpcio-1.48.0-cp310-cp310-win_amd64.whl", hash = "sha256:8dcffdb8921fd88857ae350fd579277a5f9315351e89ed9094ef28927a46d40d"},
{file = "grpcio-1.48.0-cp36-cp36m-linux_armv7l.whl", hash = "sha256:2138c50331232f56178c2b36dcfa6ad67aad705fe410955f3b2a53d722191b89"},
{file = "grpcio-1.48.0-cp36-cp36m-macosx_10_10_x86_64.whl", hash = "sha256:af2d80f142da2a6af45204a5ca2374e2747af07a99de54a1164111e169a761ff"},
{file = "grpcio-1.48.0-cp36-cp36m-manylinux_2_17_aarch64.whl", hash = "sha256:59284bd4cdf47c147c26d91aca693765318d524328f6ece2a1a0b85a12a362af"},
{file = "grpcio-1.48.0-cp36-cp36m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cc3ebfe356c0c6750379cd194bf2b7e5d1d2f29db1832358f05a73e9290db98c"},
{file = "grpcio-1.48.0-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dc2619a31339e1c53731f54761f1a2cb865d3421f690e00ef3e92f90d2a0c5ae"},
{file = "grpcio-1.48.0-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:7df637405de328a54c1c8c08a3206f974c7a577730f90644af4c3400b7bfde2d"},
{file = "grpcio-1.48.0-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:9e73b95969a579798bfbeb85d376695cce5172357fb52e450467ceb8e7365152"},
{file = "grpcio-1.48.0-cp36-cp36m-win32.whl", hash = "sha256:059e9d58b5aba7fb9eabe3a4d2ac49e1dcbc2b54b0f166f6475e40b7f4435343"},
{file = "grpcio-1.48.0-cp36-cp36m-win_amd64.whl", hash = "sha256:7cebcf645170f0c82ef71769544f9ac4515993a4d367f5900aba2eb4ecd2a32f"},
{file = "grpcio-1.48.0-cp37-cp37m-linux_armv7l.whl", hash = "sha256:8af3a8845df35b838104d6fb1ae7f4969d248cf037fa2794916d31e917346f72"},
{file = "grpcio-1.48.0-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:a1ef40975ec9ced6c17ce7fbec9825823da782fa606f0b92392646ff3886f198"},
{file = "grpcio-1.48.0-cp37-cp37m-manylinux_2_17_aarch64.whl", hash = "sha256:7cccbf6db31f2a78e1909047ff69620f94a4e6e53251858e9502fbbff5714b48"},
{file = "grpcio-1.48.0-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1f3f142579f58def64c0850f0bb0eb1b425ae885f5669dda5b73ade64ad2b753"},
{file = "grpcio-1.48.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:656c6f6f7b815bca3054780b8cdfa1e4e37cd36c887a48558d00c2cf85f31697"},
{file = "grpcio-1.48.0-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:cba4538e8a2ef123ea570e7b1d62162e158963c2471e35d79eb9690c971a10c0"},
{file = "grpcio-1.48.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:9daa67820fafceec6194ed1686c1783816e62d6756ff301ba93e682948836846"},
{file = "grpcio-1.48.0-cp37-cp37m-win32.whl", hash = "sha256:7ec264a7fb413e0c804a7a48a6f7d7212742955a60724c44d793da35a8f30873"},
{file = "grpcio-1.48.0-cp37-cp37m-win_amd64.whl", hash = "sha256:a2b1b33b92359388b8164807313dcbb3317101b038a5d54342982560329d958f"},
{file = "grpcio-1.48.0-cp38-cp38-linux_armv7l.whl", hash = "sha256:7b820696a5ce7b98f459f234698cb323f89b355373789188efa126d7f47a2a92"},
{file = "grpcio-1.48.0-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:e4dfae66ebc165c46c5b7048eb554472ee72fbaab2c2c2da7f9b1621c81e077c"},
{file = "grpcio-1.48.0-cp38-cp38-manylinux_2_17_aarch64.whl", hash = "sha256:f7115038edce33b494e0138b0bd31a2eb6595d45e2eed23be46bc32886feb741"},
{file = "grpcio-1.48.0-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b4e996282238943ca114628255be61980e38b25f73a08ae2ffd02b63eaf70d3a"},
{file = "grpcio-1.48.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:13dad31f5155fa555d393511cc8108c41b1b5b54dc4c24c27d4694ddd7a78fad"},
{file = "grpcio-1.48.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:c84b9d90b2641963de98b35bb7a2a51f78119fe5bd00ef27246ba9f4f0835e36"},
{file = "grpcio-1.48.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:41b65166779d7dafac4c98380ac19f690f1c5fe18083a71d370df87b24dd30ff"},
{file = "grpcio-1.48.0-cp38-cp38-win32.whl", hash = "sha256:b890e5f5fbc21cb994894f73ecb2faaa66697d8debcb228a5adb0622b9bec3b2"},
{file = "grpcio-1.48.0-cp38-cp38-win_amd64.whl", hash = "sha256:5fe3af539d2f50891ed93aed3064ffbcc38bf848aa3f7ed1fbedcce139c57302"},
{file = "grpcio-1.48.0-cp39-cp39-linux_armv7l.whl", hash = "sha256:a4ed57f4e3d91259551e6765782b22d9e8b8178fec43ebf8e1b2c392c4ced37b"},
{file = "grpcio-1.48.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:60843d8184e171886dd7a93d6672e2ef0b08dfd4f88da7421c10b46b6e031ac4"},
{file = "grpcio-1.48.0-cp39-cp39-manylinux_2_17_aarch64.whl", hash = "sha256:0ecba22f25ccde2442be7e7dd7fa746905d628f03312b4a0c9961f0d99771f53"},
{file = "grpcio-1.48.0-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:34f5917f0c49a04633dc12d483c8aee6f6d9f69133b700214d3703f72a72f501"},
{file = "grpcio-1.48.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f4c4ad8ad7e2cf3a272cbc96734d56635e6543939022f17e0c4487f7d2a45bf9"},
{file = "grpcio-1.48.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:111fb2f5f4a069f331ae23106145fd16dd4e1112ca223858a922068614dac6d2"},
{file = "grpcio-1.48.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:beb0573daa49889efcfea0a6e995b4f39d481aa1b94e1257617406ef417b56a6"},
{file = "grpcio-1.48.0-cp39-cp39-win32.whl", hash = "sha256:ce70254a082cb767217b2fdee374cc79199d338d46140753438cd6d67c609b2f"},
{file = "grpcio-1.48.0-cp39-cp39-win_amd64.whl", hash = "sha256:ae3fd135666448058fe277d93c10e0f18345fbcbb015c4642de2fa3db6f0c205"},
{file = "grpcio-1.48.0.tar.gz", hash = "sha256:eaf4bb73819863440727195411ab3b5c304f6663625e66f348e91ebe0a039306"},
]
h5py = [
{file = "h5py-3.7.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d77af42cb751ad6cc44f11bae73075a07429a5cf2094dfde2b1e716e059b3911"},
{file = "h5py-3.7.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:63beb8b7b47d0896c50de6efb9a1eaa81dbe211f3767e7dd7db159cea51ba37a"},
{file = "h5py-3.7.0-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:04e2e1e2fc51b8873e972a08d2f89625ef999b1f2d276199011af57bb9fc7851"},
{file = "h5py-3.7.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f73307c876af49aa869ec5df1818e9bb0bdcfcf8a5ba773cc45a4fba5a286a5c"},
{file = "h5py-3.7.0-cp310-cp310-win_amd64.whl", hash = "sha256:f514b24cacdd983e61f8d371edac8c1b780c279d0acb8485639e97339c866073"},
{file = "h5py-3.7.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:43fed4d13743cf02798a9a03a360a88e589d81285e72b83f47d37bb64ed44881"},
{file = "h5py-3.7.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:c038399ce09a58ff8d89ec3e62f00aa7cb82d14f34e24735b920e2a811a3a426"},
{file = "h5py-3.7.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:03d64fb86bb86b978928bad923b64419a23e836499ec6363e305ad28afd9d287"},
{file = "h5py-3.7.0-cp37-cp37m-win_amd64.whl", hash = "sha256:e5b7820b75f9519499d76cc708e27242ccfdd9dfb511d6deb98701961d0445aa"},
{file = "h5py-3.7.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:a9351d729ea754db36d175098361b920573fdad334125f86ac1dd3a083355e20"},
{file = "h5py-3.7.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:6776d896fb90c5938de8acb925e057e2f9f28755f67ec3edcbc8344832616c38"},
{file = "h5py-3.7.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:0a047fddbe6951bce40e9cde63373c838a978c5e05a011a682db9ba6334b8e85"},
{file = "h5py-3.7.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0798a9c0ff45f17d0192e4d7114d734cac9f8b2b2c76dd1d923c4d0923f27bb6"},
{file = "h5py-3.7.0-cp38-cp38-win_amd64.whl", hash = "sha256:0d8de8cb619fc597da7cf8cdcbf3b7ff8c5f6db836568afc7dc16d21f59b2b49"},
{file = "h5py-3.7.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f084bbe816907dfe59006756f8f2d16d352faff2d107f4ffeb1d8de126fc5dc7"},
{file = "h5py-3.7.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:1fcb11a2dc8eb7ddcae08afd8fae02ba10467753a857fa07a404d700a93f3d53"},
{file = "h5py-3.7.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:ed43e2cc4f511756fd664fb45d6b66c3cbed4e3bd0f70e29c37809b2ae013c44"},
{file = "h5py-3.7.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9e7535df5ee3dc3e5d1f408fdfc0b33b46bc9b34db82743c82cd674d8239b9ad"},
{file = "h5py-3.7.0-cp39-cp39-win_amd64.whl", hash = "sha256:9e2ad2aa000f5b1e73b5dfe22f358ca46bf1a2b6ca394d9659874d7fc251731a"},
{file = "h5py-3.7.0.tar.gz", hash = "sha256:3fcf37884383c5da64846ab510190720027dca0768def34dd8dcb659dbe5cbf3"},
]
idna = [
{file = "idna-3.3-py3-none-any.whl", hash = "sha256:84d9dd047ffa80596e0f246e2eab0b391788b0503584e8945f2368256d2735ff"},
{file = "idna-3.3.tar.gz", hash = "sha256:9d643ff0a55b762d5cdb124b8eaa99c66322e2157b69160bc32796e824360e6d"},
]
imagesize = [
{file = "imagesize-1.4.1-py2.py3-none-any.whl", hash = "sha256:0d8d18d08f840c19d0ee7ca1fd82490fdc3729b7ac93f49870406ddde8ef8d8b"},
{file = "imagesize-1.4.1.tar.gz", hash = "sha256:69150444affb9cb0d5cc5a92b3676f0b2fb7cd9ae39e947a5e11a36b4497cd4a"},
]
importlib-metadata = [
{file = "importlib_metadata-4.12.0-py3-none-any.whl", hash = "sha256:7401a975809ea1fdc658c3aa4f78cc2195a0e019c5cbc4c06122884e9ae80c23"},
{file = "importlib_metadata-4.12.0.tar.gz", hash = "sha256:637245b8bab2b6502fcbc752cc4b7a6f6243bb02b31c5c26156ad103d3d45670"},
]
importlib-resources = [
{file = "importlib_resources-5.9.0-py3-none-any.whl", hash = "sha256:f78a8df21a79bcc30cfd400bdc38f314333de7c0fb619763f6b9dabab8268bb7"},
{file = "importlib_resources-5.9.0.tar.gz", hash = "sha256:5481e97fb45af8dcf2f798952625591c58fe599d0735d86b10f54de086a61681"},
]
iniconfig = [
{file = "iniconfig-1.1.1-py2.py3-none-any.whl", hash = "sha256:011e24c64b7f47f6ebd835bb12a743f2fbe9a26d4cecaa7f53bc4f35ee9da8b3"},
{file = "iniconfig-1.1.1.tar.gz", hash = "sha256:bc3af051d7d14b2ee5ef9969666def0cd1a000e121eaea580d4a313df4b37f32"},
]
ipykernel = [
{file = "ipykernel-6.15.1-py3-none-any.whl", hash = "sha256:d8969c5b23b0e453a23166da5a669c954db399789293fcb03fec5cb25367e43c"},
{file = "ipykernel-6.15.1.tar.gz", hash = "sha256:37acc3254caa8a0dafcddddc8dc863a60ad1b46487b68aee361d9a15bda98112"},
]
ipython = [
{file = "ipython-8.4.0-py3-none-any.whl", hash = "sha256:7ca74052a38fa25fe9bedf52da0be7d3fdd2fb027c3b778ea78dfe8c212937d1"},
{file = "ipython-8.4.0.tar.gz", hash = "sha256:f2db3a10254241d9b447232cec8b424847f338d9d36f9a577a6192c332a46abd"},
]
ipython-genutils = [
{file = "ipython_genutils-0.2.0-py2.py3-none-any.whl", hash = "sha256:72dd37233799e619666c9f639a9da83c34013a73e8bbc79a7a6348d93c61fab8"},
{file = "ipython_genutils-0.2.0.tar.gz", hash = "sha256:eb2e116e75ecef9d4d228fdc66af54269afa26ab4463042e33785b887c628ba8"},
]
ipywidgets = [
{file = "ipywidgets-7.7.1-py2.py3-none-any.whl", hash = "sha256:aa1076ab7102b2486ae2607c43c243200a07c17d6093676c419d4b6762489a50"},
{file = "ipywidgets-7.7.1.tar.gz", hash = "sha256:5f2fa1b7afae1af32c88088c9828ad978de93ddda393d7ed414e553fee93dcab"},
]
isort = [
{file = "isort-5.10.1-py3-none-any.whl", hash = "sha256:6f62d78e2f89b4500b080fe3a81690850cd254227f27f75c3a0c491a1f351ba7"},
{file = "isort-5.10.1.tar.gz", hash = "sha256:e8443a5e7a020e9d7f97f1d7d9cd17c88bcb3bc7e218bf9cf5095fe550be2951"},
]
jedi = [
{file = "jedi-0.18.1-py2.py3-none-any.whl", hash = "sha256:637c9635fcf47945ceb91cd7f320234a7be540ded6f3e99a50cb6febdfd1ba8d"},
{file = "jedi-0.18.1.tar.gz", hash = "sha256:74137626a64a99c8eb6ae5832d99b3bdd7d29a3850fe2aa80a4126b2a7d949ab"},
]
jeepney = [
{file = "jeepney-0.8.0-py3-none-any.whl", hash = "sha256:c0a454ad016ca575060802ee4d590dd912e35c122fa04e70306de3d076cce755"},
{file = "jeepney-0.8.0.tar.gz", hash = "sha256:5efe48d255973902f6badc3ce55e2aa6c5c3b3bc642059ef3a91247bcfcc5806"},
]
jinja2 = [
{file = "Jinja2-3.1.2-py3-none-any.whl", hash = "sha256:6088930bfe239f0e6710546ab9c19c9ef35e29792895fed6e6e31a023a182a61"},
{file = "Jinja2-3.1.2.tar.gz", hash = "sha256:31351a702a408a9e7595a8fc6150fc3f43bb6bf7e319770cbc0db9df9437e852"},
]
joblib = [
{file = "joblib-1.1.0-py2.py3-none-any.whl", hash = "sha256:f21f109b3c7ff9d95f8387f752d0d9c34a02aa2f7060c2135f465da0e5160ff6"},
{file = "joblib-1.1.0.tar.gz", hash = "sha256:4158fcecd13733f8be669be0683b96ebdbbd38d23559f54dca7205aea1bf1e35"},
]
jsonschema = [
{file = "jsonschema-4.9.1-py3-none-any.whl", hash = "sha256:8ebad55894c002585271af2d327d99339ef566fb085d9129b69e2623867c4106"},
{file = "jsonschema-4.9.1.tar.gz", hash = "sha256:408c4c8ed0dede3b268f7a441784f74206380b04f93eb2d537c7befb3df3099f"},
]
jupyter = [
{file = "jupyter-1.0.0-py2.py3-none-any.whl", hash = "sha256:5b290f93b98ffbc21c0c7e749f054b3267782166d72fa5e3ed1ed4eaf34a2b78"},
{file = "jupyter-1.0.0.tar.gz", hash = "sha256:d9dc4b3318f310e34c82951ea5d6683f67bed7def4b259fafbfe4f1beb1d8e5f"},
{file = "jupyter-1.0.0.zip", hash = "sha256:3e1f86076bbb7c8c207829390305a2b1fe836d471ed54be66a3b8c41e7f46cc7"},
]
jupyter-client = [
{file = "jupyter_client-7.3.4-py3-none-any.whl", hash = "sha256:17d74b0d0a7b24f1c8c527b24fcf4607c56bee542ffe8e3418e50b21e514b621"},
{file = "jupyter_client-7.3.4.tar.gz", hash = "sha256:aa9a6c32054b290374f95f73bb0cae91455c58dfb84f65c8591912b8f65e6d56"},
]
jupyter-console = [
{file = "jupyter_console-6.4.4-py3-none-any.whl", hash = "sha256:756df7f4f60c986e7bc0172e4493d3830a7e6e75c08750bbe59c0a5403ad6dee"},
{file = "jupyter_console-6.4.4.tar.gz", hash = "sha256:172f5335e31d600df61613a97b7f0352f2c8250bbd1092ef2d658f77249f89fb"},
]
jupyter-core = [
{file = "jupyter_core-4.11.1-py3-none-any.whl", hash = "sha256:715e22bb6cc7db3718fddfac1f69f1c7e899ca00e42bdfd4bf3705452b9fd84a"},
{file = "jupyter_core-4.11.1.tar.gz", hash = "sha256:2e5f244d44894c4154d06aeae3419dd7f1b0ef4494dc5584929b398c61cfd314"},
]
jupyterlab-pygments = [
{file = "jupyterlab_pygments-0.2.2-py2.py3-none-any.whl", hash = "sha256:2405800db07c9f770863bcf8049a529c3dd4d3e28536638bd7c1c01d2748309f"},
{file = "jupyterlab_pygments-0.2.2.tar.gz", hash = "sha256:7405d7fde60819d905a9fa8ce89e4cd830e318cdad22a0030f7a901da705585d"},
]
jupyterlab-widgets = [
{file = "jupyterlab_widgets-1.1.1-py3-none-any.whl", hash = "sha256:90ab47d99da03a3697074acb23b2975ead1d6171aa41cb2812041a7f2a08177a"},
{file = "jupyterlab_widgets-1.1.1.tar.gz", hash = "sha256:67d0ef1e407e0c42c8ab60b9d901cd7a4c68923650763f75bf17fb06c1943b79"},
]
keras = [
{file = "keras-2.9.0-py2.py3-none-any.whl", hash = "sha256:55911256f89cfc9343c9fbe4b61ec45a2d33d89729cbe1ab9dcacf8b07b8b6ab"},
]
keras-preprocessing = [
{file = "Keras_Preprocessing-1.1.2-py2.py3-none-any.whl", hash = "sha256:7b82029b130ff61cc99b55f3bd27427df4838576838c5b2f65940e4fcec99a7b"},
{file = "Keras_Preprocessing-1.1.2.tar.gz", hash = "sha256:add82567c50c8bc648c14195bf544a5ce7c1f76761536956c3d2978970179ef3"},
]
keyring = [
{file = "keyring-23.8.2-py3-none-any.whl", hash = "sha256:10d2a8639663fe2090705a00b8c47c687cacdf97598ea9c11456679fa974473a"},
{file = "keyring-23.8.2.tar.gz", hash = "sha256:0d9973f8891850f1ade5f26aafd06bb16865fbbae3fc56b0defb6a14a2624003"},
]
kiwisolver = [
{file = "kiwisolver-1.4.4-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:2f5e60fabb7343a836360c4f0919b8cd0d6dbf08ad2ca6b9cf90bf0c76a3c4f6"},
{file = "kiwisolver-1.4.4-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:10ee06759482c78bdb864f4109886dff7b8a56529bc1609d4f1112b93fe6423c"},
{file = "kiwisolver-1.4.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c79ebe8f3676a4c6630fd3f777f3cfecf9289666c84e775a67d1d358578dc2e3"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:abbe9fa13da955feb8202e215c4018f4bb57469b1b78c7a4c5c7b93001699938"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:7577c1987baa3adc4b3c62c33bd1118c3ef5c8ddef36f0f2c950ae0b199e100d"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f8ad8285b01b0d4695102546b342b493b3ccc6781fc28c8c6a1bb63e95d22f09"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:8ed58b8acf29798b036d347791141767ccf65eee7f26bde03a71c944449e53de"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a68b62a02953b9841730db7797422f983935aeefceb1679f0fc85cbfbd311c32"},
{file = "kiwisolver-1.4.4-cp310-cp310-win32.whl", hash = "sha256:e92a513161077b53447160b9bd8f522edfbed4bd9759e4c18ab05d7ef7e49408"},
{file = "kiwisolver-1.4.4-cp310-cp310-win_amd64.whl", hash = "sha256:3fe20f63c9ecee44560d0e7f116b3a747a5d7203376abeea292ab3152334d004"},
{file = "kiwisolver-1.4.4-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:62ac9cc684da4cf1778d07a89bf5f81b35834cb96ca523d3a7fb32509380cbf6"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:41dae968a94b1ef1897cb322b39360a0812661dba7c682aa45098eb8e193dbdf"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:02f79693ec433cb4b5f51694e8477ae83b3205768a6fb48ffba60549080e295b"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d0611a0a2a518464c05ddd5a3a1a0e856ccc10e67079bb17f265ad19ab3c7597"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:db5283d90da4174865d520e7366801a93777201e91e79bacbac6e6927cbceede"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:1041feb4cda8708ce73bb4dcb9ce1ccf49d553bf87c3954bdfa46f0c3f77252c"},
{file = "kiwisolver-1.4.4-cp37-cp37m-win32.whl", hash = "sha256:a553dadda40fef6bfa1456dc4be49b113aa92c2a9a9e8711e955618cd69622e3"},
{file = "kiwisolver-1.4.4-cp37-cp37m-win_amd64.whl", hash = "sha256:03baab2d6b4a54ddbb43bba1a3a2d1627e82d205c5cf8f4c924dc49284b87166"},
{file = "kiwisolver-1.4.4-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:841293b17ad704d70c578f1f0013c890e219952169ce8a24ebc063eecf775454"},
{file = "kiwisolver-1.4.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:f4f270de01dd3e129a72efad823da90cc4d6aafb64c410c9033aba70db9f1ff0"},
{file = "kiwisolver-1.4.4-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:f9f39e2f049db33a908319cf46624a569b36983c7c78318e9726a4cb8923b26c"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c97528e64cb9ebeff9701e7938653a9951922f2a38bd847787d4a8e498cc83ae"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1d1573129aa0fd901076e2bfb4275a35f5b7aa60fbfb984499d661ec950320b0"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ad881edc7ccb9d65b0224f4e4d05a1e85cf62d73aab798943df6d48ab0cd79a1"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b428ef021242344340460fa4c9185d0b1f66fbdbfecc6c63eff4b7c29fad429d"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:2e407cb4bd5a13984a6c2c0fe1845e4e41e96f183e5e5cd4d77a857d9693494c"},
{file = "kiwisolver-1.4.4-cp38-cp38-win32.whl", hash = "sha256:75facbe9606748f43428fc91a43edb46c7ff68889b91fa31f53b58894503a191"},
{file = "kiwisolver-1.4.4-cp38-cp38-win_amd64.whl", hash = "sha256:5bce61af018b0cb2055e0e72e7d65290d822d3feee430b7b8203d8a855e78766"},
{file = "kiwisolver-1.4.4-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:8c808594c88a025d4e322d5bb549282c93c8e1ba71b790f539567932722d7bd8"},
{file = "kiwisolver-1.4.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f0a71d85ecdd570ded8ac3d1c0f480842f49a40beb423bb8014539a9f32a5897"},
{file = "kiwisolver-1.4.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:b533558eae785e33e8c148a8d9921692a9fe5aa516efbdff8606e7d87b9d5824"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:efda5fc8cc1c61e4f639b8067d118e742b812c930f708e6667a5ce0d13499e29"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:7c43e1e1206cd421cd92e6b3280d4385d41d7166b3ed577ac20444b6995a445f"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bc8d3bd6c72b2dd9decf16ce70e20abcb3274ba01b4e1c96031e0c4067d1e7cd"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4ea39b0ccc4f5d803e3337dd46bcce60b702be4d86fd0b3d7531ef10fd99a1ac"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:968f44fdbf6dd757d12920d63b566eeb4d5b395fd2d00d29d7ef00a00582aac9"},
{file = "kiwisolver-1.4.4-cp39-cp39-win32.whl", hash = "sha256:da7e547706e69e45d95e116e6939488d62174e033b763ab1496b4c29b76fabea"},
{file = "kiwisolver-1.4.4-cp39-cp39-win_amd64.whl", hash = "sha256:ba59c92039ec0a66103b1d5fe588fa546373587a7d68f5c96f743c3396afc04b"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:91672bacaa030f92fc2f43b620d7b337fd9a5af28b0d6ed3f77afc43c4a64b5a"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:787518a6789009c159453da4d6b683f468ef7a65bbde796bcea803ccf191058d"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:da152d8cdcab0e56e4f45eb08b9aea6455845ec83172092f09b0e077ece2cf7a"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:ecb1fa0db7bf4cff9dac752abb19505a233c7f16684c5826d1f11ebd9472b871"},
{file = "kiwisolver-1.4.4.tar.gz", hash = "sha256:d41997519fcba4a1e46eb4a2fe31bc12f0ff957b2b81bac28db24744f333e955"},
]
libclang = [
{file = "libclang-14.0.6-py2.py3-none-macosx_10_9_x86_64.whl", hash = "sha256:8791cf3c3b087c373a6d61e9199da7a541da922c9ddcfed1122090586b996d6e"},
{file = "libclang-14.0.6-py2.py3-none-macosx_11_0_arm64.whl", hash = "sha256:7b06fc76bd1e67c8b04b5719bf2ac5d6a323b289b245dfa9e468561d99538188"},
{file = "libclang-14.0.6-py2.py3-none-manylinux1_x86_64.whl", hash = "sha256:e429853939423f276a25140b0b702442d7da9a09e001c05e48df888336947614"},
{file = "libclang-14.0.6-py2.py3-none-manylinux2010_x86_64.whl", hash = "sha256:206d2789e4450a37d054e63b70451a6fc1873466397443fa13de2b3d4adb2796"},
{file = "libclang-14.0.6-py2.py3-none-manylinux2014_aarch64.whl", hash = "sha256:e2add1703129b2abe066fb1890afa880870a89fd6ab4ec5d2a7a8dc8d271677e"},
{file = "libclang-14.0.6-py2.py3-none-manylinux2014_armv7l.whl", hash = "sha256:5dd3c6fca1b007d308a4114afa8e4e9d32f32b2572520701d45fcc626ac5cd6c"},
{file = "libclang-14.0.6-py2.py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:cfb0e892ebb5dff6bd498ab5778adb8581f26a00fd8347b3c76c989fe2fd04f7"},
{file = "libclang-14.0.6-py2.py3-none-win_amd64.whl", hash = "sha256:ea03c12675151837660cdd5dce65bd89320896ac3421efef43a36678f113ce95"},
{file = "libclang-14.0.6-py2.py3-none-win_arm64.whl", hash = "sha256:2e4303e04517fcd11173cb2e51a7070eed71e16ef45d4e26a82c5e881cac3d27"},
{file = "libclang-14.0.6.tar.gz", hash = "sha256:9052a8284d8846984f6fa826b1d7460a66d3b23a486d782633b42b6e3b418789"},
]
lightgbm = [
{file = "lightgbm-3.3.2-py3-none-macosx_10_14_x86_64.macosx_10_15_x86_64.macosx_11_0_x86_64.whl", hash = "sha256:2e94bd1b3ab29d173102c9c1d80db2e27ad7e43b8ff5a74c5cb7984b37d19f45"},
{file = "lightgbm-3.3.2-py3-none-manylinux1_x86_64.whl", hash = "sha256:f4cba3b4f29336ad7e801cb32d9b948ea4cc5300dda650b78bcdfe36b3e2c4b2"},
{file = "lightgbm-3.3.2-py3-none-manylinux2014_aarch64.whl", hash = "sha256:8e788c56853316fc5d35db726d81bd002c721038c856853952287f68082e0158"},
{file = "lightgbm-3.3.2-py3-none-win_amd64.whl", hash = "sha256:e4f1529cad416066964f9af0efad208787861e9f2181b7f9ee7fc9bacc082d4f"},
{file = "lightgbm-3.3.2.tar.gz", hash = "sha256:5d25d16e77c844c297ece2044df57651139bc3c8ad8c4108916374267ac68b64"},
]
llvmlite = [
{file = "llvmlite-0.36.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:cc0f9b9644b4ab0e4a5edb17f1531d791630c88858220d3cc688d6edf10da100"},
{file = "llvmlite-0.36.0-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:f7918dbac02b1ebbfd7302ad8e8307d7877ab57d782d5f04b70ff9696b53c21b"},
{file = "llvmlite-0.36.0-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:7768658646c418b9b3beccb7044277a608bc8c62b82a85e73c7e5c065e4157c2"},
{file = "llvmlite-0.36.0-cp36-cp36m-win32.whl", hash = "sha256:05f807209a360d39526d98141b6f281b9c7c771c77a4d1fc22002440642c8de2"},
{file = "llvmlite-0.36.0-cp36-cp36m-win_amd64.whl", hash = "sha256:d1fdd63c371626c25ad834e1c6297eb76cf2f093a40dbb401a87b6476ab4e34e"},
{file = "llvmlite-0.36.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:7c4e7066447305d5095d0b0a9cae7b835d2f0fde143456b3124110eab0856426"},
{file = "llvmlite-0.36.0-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:9dad7e4bb042492914292aea3f4172eca84db731f9478250240955aedba95e08"},
{file = "llvmlite-0.36.0-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:1ce5bc0a638d874a08d4222be0a7e48e5df305d094c2ff8dec525ef32b581551"},
{file = "llvmlite-0.36.0-cp37-cp37m-win32.whl", hash = "sha256:dbedff0f6d417b374253a6bab39aa4b5364f1caab30c06ba8726904776fcf1cb"},
{file = "llvmlite-0.36.0-cp37-cp37m-win_amd64.whl", hash = "sha256:3b17fc4b0dd17bd29d7297d054e2915fad535889907c3f65232ee21f483447c5"},
{file = "llvmlite-0.36.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:b3a77e46e6053e2a86e607e87b97651dda81e619febb914824a927bff4e88737"},
{file = "llvmlite-0.36.0-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:048a7c117641c9be87b90005684e64a6f33ea0897ebab1df8a01214a10d6e79a"},
{file = "llvmlite-0.36.0-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:7db4b0eef93125af1c4092c64a3c73c7dc904101117ef53f8d78a1a499b8d5f4"},
{file = "llvmlite-0.36.0-cp38-cp38-win32.whl", hash = "sha256:50b1828bde514b31431b2bba1aa20b387f5625b81ad6e12fede430a04645e47a"},
{file = "llvmlite-0.36.0-cp38-cp38-win_amd64.whl", hash = "sha256:f608bae781b2d343e15e080c546468c5a6f35f57f0446923ea198dd21f23757e"},
{file = "llvmlite-0.36.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6a3abc8a8889aeb06bf9c4a7e5df5bc7bb1aa0aedd91a599813809abeec80b5a"},
{file = "llvmlite-0.36.0-cp39-cp39-manylinux2010_i686.whl", hash = "sha256:705f0323d931684428bb3451549603299bb5e17dd60fb979d67c3807de0debc1"},
{file = "llvmlite-0.36.0-cp39-cp39-manylinux2010_x86_64.whl", hash = "sha256:5a6548b4899facb182145147185e9166c69826fb424895f227e6b7cf924a8da1"},
{file = "llvmlite-0.36.0-cp39-cp39-win32.whl", hash = "sha256:ff52fb9c2be66b95b0e67d56fce11038397e5be1ea410ee53f5f1175fdbb107a"},
{file = "llvmlite-0.36.0-cp39-cp39-win_amd64.whl", hash = "sha256:1dee416ea49fd338c74ec15c0c013e5273b0961528169af06ff90772614f7f6c"},
{file = "llvmlite-0.36.0.tar.gz", hash = "sha256:765128fdf5f149ed0b889ffbe2b05eb1717f8e20a5c87fa2b4018fbcce0fcfc9"},
]
lxml = [
{file = "lxml-4.9.1-cp27-cp27m-macosx_10_15_x86_64.whl", hash = "sha256:98cafc618614d72b02185ac583c6f7796202062c41d2eeecdf07820bad3295ed"},
{file = "lxml-4.9.1-cp27-cp27m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:c62e8dd9754b7debda0c5ba59d34509c4688f853588d75b53c3791983faa96fc"},
{file = "lxml-4.9.1-cp27-cp27m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:21fb3d24ab430fc538a96e9fbb9b150029914805d551deeac7d7822f64631dfc"},
{file = "lxml-4.9.1-cp27-cp27m-win32.whl", hash = "sha256:86e92728ef3fc842c50a5cb1d5ba2bc66db7da08a7af53fb3da79e202d1b2cd3"},
{file = "lxml-4.9.1-cp27-cp27m-win_amd64.whl", hash = "sha256:4cfbe42c686f33944e12f45a27d25a492cc0e43e1dc1da5d6a87cbcaf2e95627"},
{file = "lxml-4.9.1-cp27-cp27mu-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:dad7b164905d3e534883281c050180afcf1e230c3d4a54e8038aa5cfcf312b84"},
{file = "lxml-4.9.1-cp27-cp27mu-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:a614e4afed58c14254e67862456d212c4dcceebab2eaa44d627c2ca04bf86837"},
{file = "lxml-4.9.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:f9ced82717c7ec65a67667bb05865ffe38af0e835cdd78728f1209c8fffe0cad"},
{file = "lxml-4.9.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:d9fc0bf3ff86c17348dfc5d322f627d78273eba545db865c3cd14b3f19e57fa5"},
{file = "lxml-4.9.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:e5f66bdf0976ec667fc4594d2812a00b07ed14d1b44259d19a41ae3fff99f2b8"},
{file = "lxml-4.9.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:fe17d10b97fdf58155f858606bddb4e037b805a60ae023c009f760d8361a4eb8"},
{file = "lxml-4.9.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:8caf4d16b31961e964c62194ea3e26a0e9561cdf72eecb1781458b67ec83423d"},
{file = "lxml-4.9.1-cp310-cp310-win32.whl", hash = "sha256:4780677767dd52b99f0af1f123bc2c22873d30b474aa0e2fc3fe5e02217687c7"},
{file = "lxml-4.9.1-cp310-cp310-win_amd64.whl", hash = "sha256:b122a188cd292c4d2fcd78d04f863b789ef43aa129b233d7c9004de08693728b"},
{file = "lxml-4.9.1-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:be9eb06489bc975c38706902cbc6888f39e946b81383abc2838d186f0e8b6a9d"},
{file = "lxml-4.9.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:f1be258c4d3dc609e654a1dc59d37b17d7fef05df912c01fc2e15eb43a9735f3"},
{file = "lxml-4.9.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:927a9dd016d6033bc12e0bf5dee1dde140235fc8d0d51099353c76081c03dc29"},
{file = "lxml-4.9.1-cp35-cp35m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:9232b09f5efee6a495a99ae6824881940d6447debe272ea400c02e3b68aad85d"},
{file = "lxml-4.9.1-cp35-cp35m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:04da965dfebb5dac2619cb90fcf93efdb35b3c6994fea58a157a834f2f94b318"},
{file = "lxml-4.9.1-cp35-cp35m-win32.whl", hash = "sha256:4d5bae0a37af799207140652a700f21a85946f107a199bcb06720b13a4f1f0b7"},
{file = "lxml-4.9.1-cp35-cp35m-win_amd64.whl", hash = "sha256:4878e667ebabe9b65e785ac8da4d48886fe81193a84bbe49f12acff8f7a383a4"},
{file = "lxml-4.9.1-cp36-cp36m-macosx_10_15_x86_64.whl", hash = "sha256:1355755b62c28950f9ce123c7a41460ed9743c699905cbe664a5bcc5c9c7c7fb"},
{file = "lxml-4.9.1-cp36-cp36m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:bcaa1c495ce623966d9fc8a187da80082334236a2a1c7e141763ffaf7a405067"},
{file = "lxml-4.9.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6eafc048ea3f1b3c136c71a86db393be36b5b3d9c87b1c25204e7d397cee9536"},
{file = "lxml-4.9.1-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:13c90064b224e10c14dcdf8086688d3f0e612db53766e7478d7754703295c7c8"},
{file = "lxml-4.9.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:206a51077773c6c5d2ce1991327cda719063a47adc02bd703c56a662cdb6c58b"},
{file = "lxml-4.9.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:e8f0c9d65da595cfe91713bc1222af9ecabd37971762cb830dea2fc3b3bb2acf"},
{file = "lxml-4.9.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:8f0a4d179c9a941eb80c3a63cdb495e539e064f8054230844dcf2fcb812b71d3"},
{file = "lxml-4.9.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:830c88747dce8a3e7525defa68afd742b4580df6aa2fdd6f0855481e3994d391"},
{file = "lxml-4.9.1-cp36-cp36m-win32.whl", hash = "sha256:1e1cf47774373777936c5aabad489fef7b1c087dcd1f426b621fda9dcc12994e"},
{file = "lxml-4.9.1-cp36-cp36m-win_amd64.whl", hash = "sha256:5974895115737a74a00b321e339b9c3f45c20275d226398ae79ac008d908bff7"},
{file = "lxml-4.9.1-cp37-cp37m-macosx_10_15_x86_64.whl", hash = "sha256:1423631e3d51008871299525b541413c9b6c6423593e89f9c4cfbe8460afc0a2"},
{file = "lxml-4.9.1-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:2aaf6a0a6465d39b5ca69688fce82d20088c1838534982996ec46633dc7ad6cc"},
{file = "lxml-4.9.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:9f36de4cd0c262dd9927886cc2305aa3f2210db437aa4fed3fb4940b8bf4592c"},
{file = "lxml-4.9.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:ae06c1e4bc60ee076292e582a7512f304abdf6c70db59b56745cca1684f875a4"},
{file = "lxml-4.9.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:57e4d637258703d14171b54203fd6822fda218c6c2658a7d30816b10995f29f3"},
{file = "lxml-4.9.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:6d279033bf614953c3fc4a0aa9ac33a21e8044ca72d4fa8b9273fe75359d5cca"},
{file = "lxml-4.9.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:a60f90bba4c37962cbf210f0188ecca87daafdf60271f4c6948606e4dabf8785"},
{file = "lxml-4.9.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:6ca2264f341dd81e41f3fffecec6e446aa2121e0b8d026fb5130e02de1402785"},
{file = "lxml-4.9.1-cp37-cp37m-win32.whl", hash = "sha256:27e590352c76156f50f538dbcebd1925317a0f70540f7dc8c97d2931c595783a"},
{file = "lxml-4.9.1-cp37-cp37m-win_amd64.whl", hash = "sha256:eea5d6443b093e1545ad0210e6cf27f920482bfcf5c77cdc8596aec73523bb7e"},
{file = "lxml-4.9.1-cp38-cp38-macosx_10_15_x86_64.whl", hash = "sha256:f05251bbc2145349b8d0b77c0d4e5f3b228418807b1ee27cefb11f69ed3d233b"},
{file = "lxml-4.9.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:487c8e61d7acc50b8be82bda8c8d21d20e133c3cbf41bd8ad7eb1aaeb3f07c97"},
{file = "lxml-4.9.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:8d1a92d8e90b286d491e5626af53afef2ba04da33e82e30744795c71880eaa21"},
{file = "lxml-4.9.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:b570da8cd0012f4af9fa76a5635cd31f707473e65a5a335b186069d5c7121ff2"},
{file = "lxml-4.9.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:5ef87fca280fb15342726bd5f980f6faf8b84a5287fcc2d4962ea8af88b35130"},
{file = "lxml-4.9.1-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:93e414e3206779ef41e5ff2448067213febf260ba747fc65389a3ddaa3fb8715"},
{file = "lxml-4.9.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:6653071f4f9bac46fbc30f3c7838b0e9063ee335908c5d61fb7a4a86c8fd2036"},
{file = "lxml-4.9.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:32a73c53783becdb7eaf75a2a1525ea8e49379fb7248c3eeefb9412123536387"},
{file = "lxml-4.9.1-cp38-cp38-win32.whl", hash = "sha256:1a7c59c6ffd6ef5db362b798f350e24ab2cfa5700d53ac6681918f314a4d3b94"},
{file = "lxml-4.9.1-cp38-cp38-win_amd64.whl", hash = "sha256:1436cf0063bba7888e43f1ba8d58824f085410ea2025befe81150aceb123e345"},
{file = "lxml-4.9.1-cp39-cp39-macosx_10_15_x86_64.whl", hash = "sha256:4beea0f31491bc086991b97517b9683e5cfb369205dac0148ef685ac12a20a67"},
{file = "lxml-4.9.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:41fb58868b816c202e8881fd0f179a4644ce6e7cbbb248ef0283a34b73ec73bb"},
{file = "lxml-4.9.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:bd34f6d1810d9354dc7e35158aa6cc33456be7706df4420819af6ed966e85448"},
{file = "lxml-4.9.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:edffbe3c510d8f4bf8640e02ca019e48a9b72357318383ca60e3330c23aaffc7"},
{file = "lxml-4.9.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:6d949f53ad4fc7cf02c44d6678e7ff05ec5f5552b235b9e136bd52e9bf730b91"},
{file = "lxml-4.9.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:079b68f197c796e42aa80b1f739f058dcee796dc725cc9a1be0cdb08fc45b000"},
{file = "lxml-4.9.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:9c3a88d20e4fe4a2a4a84bf439a5ac9c9aba400b85244c63a1ab7088f85d9d25"},
{file = "lxml-4.9.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:4e285b5f2bf321fc0857b491b5028c5f276ec0c873b985d58d7748ece1d770dd"},
{file = "lxml-4.9.1-cp39-cp39-win32.whl", hash = "sha256:ef72013e20dd5ba86a8ae1aed7f56f31d3374189aa8b433e7b12ad182c0d2dfb"},
{file = "lxml-4.9.1-cp39-cp39-win_amd64.whl", hash = "sha256:10d2017f9150248563bb579cd0d07c61c58da85c922b780060dcc9a3aa9f432d"},
{file = "lxml-4.9.1-pp37-pypy37_pp73-macosx_10_15_x86_64.whl", hash = "sha256:0538747a9d7827ce3e16a8fdd201a99e661c7dee3c96c885d8ecba3c35d1032c"},
{file = "lxml-4.9.1-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:0645e934e940107e2fdbe7c5b6fb8ec6232444260752598bc4d09511bd056c0b"},
{file = "lxml-4.9.1-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:6daa662aba22ef3258934105be2dd9afa5bb45748f4f702a3b39a5bf53a1f4dc"},
{file = "lxml-4.9.1-pp38-pypy38_pp73-macosx_10_15_x86_64.whl", hash = "sha256:603a464c2e67d8a546ddaa206d98e3246e5db05594b97db844c2f0a1af37cf5b"},
{file = "lxml-4.9.1-pp38-pypy38_pp73-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:c4b2e0559b68455c085fb0f6178e9752c4be3bba104d6e881eb5573b399d1eb2"},
{file = "lxml-4.9.1-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:0f3f0059891d3254c7b5fb935330d6db38d6519ecd238ca4fce93c234b4a0f73"},
{file = "lxml-4.9.1-pp39-pypy39_pp73-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:c852b1530083a620cb0de5f3cd6826f19862bafeaf77586f1aef326e49d95f0c"},
{file = "lxml-4.9.1-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:287605bede6bd36e930577c5925fcea17cb30453d96a7b4c63c14a257118dbb9"},
{file = "lxml-4.9.1.tar.gz", hash = "sha256:fe749b052bb7233fe5d072fcb549221a8cb1a16725c47c37e42b0b9cb3ff2c3f"},
]
markdown = [
{file = "Markdown-3.4.1-py3-none-any.whl", hash = "sha256:08fb8465cffd03d10b9dd34a5c3fea908e20391a2a90b88d66362cb05beed186"},
{file = "Markdown-3.4.1.tar.gz", hash = "sha256:3b809086bb6efad416156e00a0da66fe47618a5d6918dd688f53f40c8e4cfeff"},
]
markupsafe = [
{file = "MarkupSafe-2.1.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:86b1f75c4e7c2ac2ccdaec2b9022845dbb81880ca318bb7a0a01fbf7813e3812"},
{file = "MarkupSafe-2.1.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:f121a1420d4e173a5d96e47e9a0c0dcff965afdf1626d28de1460815f7c4ee7a"},
{file = "MarkupSafe-2.1.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a49907dd8420c5685cfa064a1335b6754b74541bbb3706c259c02ed65b644b3e"},
{file = "MarkupSafe-2.1.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:10c1bfff05d95783da83491be968e8fe789263689c02724e0c691933c52994f5"},
{file = "MarkupSafe-2.1.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b7bd98b796e2b6553da7225aeb61f447f80a1ca64f41d83612e6139ca5213aa4"},
{file = "MarkupSafe-2.1.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:b09bf97215625a311f669476f44b8b318b075847b49316d3e28c08e41a7a573f"},
{file = "MarkupSafe-2.1.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:694deca8d702d5db21ec83983ce0bb4b26a578e71fbdbd4fdcd387daa90e4d5e"},
{file = "MarkupSafe-2.1.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:efc1913fd2ca4f334418481c7e595c00aad186563bbc1ec76067848c7ca0a933"},
{file = "MarkupSafe-2.1.1-cp310-cp310-win32.whl", hash = "sha256:4a33dea2b688b3190ee12bd7cfa29d39c9ed176bda40bfa11099a3ce5d3a7ac6"},
{file = "MarkupSafe-2.1.1-cp310-cp310-win_amd64.whl", hash = "sha256:dda30ba7e87fbbb7eab1ec9f58678558fd9a6b8b853530e176eabd064da81417"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:671cd1187ed5e62818414afe79ed29da836dde67166a9fac6d435873c44fdd02"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3799351e2336dc91ea70b034983ee71cf2f9533cdff7c14c90ea126bfd95d65a"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e72591e9ecd94d7feb70c1cbd7be7b3ebea3f548870aa91e2732960fa4d57a37"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6fbf47b5d3728c6aea2abb0589b5d30459e369baa772e0f37a0320185e87c980"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:d5ee4f386140395a2c818d149221149c54849dfcfcb9f1debfe07a8b8bd63f9a"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:bcb3ed405ed3222f9904899563d6fc492ff75cce56cba05e32eff40e6acbeaa3"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:e1c0b87e09fa55a220f058d1d49d3fb8df88fbfab58558f1198e08c1e1de842a"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-win32.whl", hash = "sha256:8dc1c72a69aa7e082593c4a203dcf94ddb74bb5c8a731e4e1eb68d031e8498ff"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-win_amd64.whl", hash = "sha256:97a68e6ada378df82bc9f16b800ab77cbf4b2fada0081794318520138c088e4a"},
{file = "MarkupSafe-2.1.1-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:e8c843bbcda3a2f1e3c2ab25913c80a3c5376cd00c6e8c4a86a89a28c8dc5452"},
{file = "MarkupSafe-2.1.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:0212a68688482dc52b2d45013df70d169f542b7394fc744c02a57374a4207003"},
{file = "MarkupSafe-2.1.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8e576a51ad59e4bfaac456023a78f6b5e6e7651dcd383bcc3e18d06f9b55d6d1"},
{file = "MarkupSafe-2.1.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4b9fe39a2ccc108a4accc2676e77da025ce383c108593d65cc909add5c3bd601"},
{file = "MarkupSafe-2.1.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:96e37a3dc86e80bf81758c152fe66dbf60ed5eca3d26305edf01892257049925"},
{file = "MarkupSafe-2.1.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:6d0072fea50feec76a4c418096652f2c3238eaa014b2f94aeb1d56a66b41403f"},
{file = "MarkupSafe-2.1.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:089cf3dbf0cd6c100f02945abeb18484bd1ee57a079aefd52cffd17fba910b88"},
{file = "MarkupSafe-2.1.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:6a074d34ee7a5ce3effbc526b7083ec9731bb3cbf921bbe1d3005d4d2bdb3a63"},
{file = "MarkupSafe-2.1.1-cp38-cp38-win32.whl", hash = "sha256:421be9fbf0ffe9ffd7a378aafebbf6f4602d564d34be190fc19a193232fd12b1"},
{file = "MarkupSafe-2.1.1-cp38-cp38-win_amd64.whl", hash = "sha256:fc7b548b17d238737688817ab67deebb30e8073c95749d55538ed473130ec0c7"},
{file = "MarkupSafe-2.1.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:e04e26803c9c3851c931eac40c695602c6295b8d432cbe78609649ad9bd2da8a"},
{file = "MarkupSafe-2.1.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b87db4360013327109564f0e591bd2a3b318547bcef31b468a92ee504d07ae4f"},
{file = "MarkupSafe-2.1.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:99a2a507ed3ac881b975a2976d59f38c19386d128e7a9a18b7df6fff1fd4c1d6"},
{file = "MarkupSafe-2.1.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:56442863ed2b06d19c37f94d999035e15ee982988920e12a5b4ba29b62ad1f77"},
{file = "MarkupSafe-2.1.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3ce11ee3f23f79dbd06fb3d63e2f6af7b12db1d46932fe7bd8afa259a5996603"},
{file = "MarkupSafe-2.1.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:33b74d289bd2f5e527beadcaa3f401e0df0a89927c1559c8566c066fa4248ab7"},
{file = "MarkupSafe-2.1.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:43093fb83d8343aac0b1baa75516da6092f58f41200907ef92448ecab8825135"},
{file = "MarkupSafe-2.1.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:8e3dcf21f367459434c18e71b2a9532d96547aef8a871872a5bd69a715c15f96"},
{file = "MarkupSafe-2.1.1-cp39-cp39-win32.whl", hash = "sha256:d4306c36ca495956b6d568d276ac11fdd9c30a36f1b6eb928070dc5360b22e1c"},
{file = "MarkupSafe-2.1.1-cp39-cp39-win_amd64.whl", hash = "sha256:46d00d6cfecdde84d40e572d63735ef81423ad31184100411e6e3388d405e247"},
{file = "MarkupSafe-2.1.1.tar.gz", hash = "sha256:7f91197cc9e48f989d12e4e6fbc46495c446636dfc81b9ccf50bb0ec74b91d4b"},
]
matplotlib = [
{file = "matplotlib-3.5.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:a206a1b762b39398efea838f528b3a6d60cdb26fe9d58b48265787e29cd1d693"},
{file = "matplotlib-3.5.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:cd45a6f3e93a780185f70f05cf2a383daed13c3489233faad83e81720f7ede24"},
{file = "matplotlib-3.5.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:d62880e1f60e5a30a2a8484432bcb3a5056969dc97258d7326ad465feb7ae069"},
{file = "matplotlib-3.5.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9ab29589cef03bc88acfa3a1490359000c18186fc30374d8aa77d33cc4a51a4a"},
{file = "matplotlib-3.5.3-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2886cc009f40e2984c083687251821f305d811d38e3df8ded414265e4583f0c5"},
{file = "matplotlib-3.5.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c995f7d9568f18b5db131ab124c64e51b6820a92d10246d4f2b3f3a66698a15b"},
{file = "matplotlib-3.5.3-cp310-cp310-win32.whl", hash = "sha256:6bb93a0492d68461bd458eba878f52fdc8ac7bdb6c4acdfe43dba684787838c2"},
{file = "matplotlib-3.5.3-cp310-cp310-win_amd64.whl", hash = "sha256:2e6d184ebe291b9e8f7e78bbab7987d269c38ea3e062eace1fe7d898042ef804"},
{file = "matplotlib-3.5.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:6ea6aef5c4338e58d8d376068e28f80a24f54e69f09479d1c90b7172bad9f25b"},
{file = "matplotlib-3.5.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:839d47b8ead7ad9669aaacdbc03f29656dc21f0d41a6fea2d473d856c39c8b1c"},
{file = "matplotlib-3.5.3-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:3b4fa56159dc3c7f9250df88f653f085068bcd32dcd38e479bba58909254af7f"},
{file = "matplotlib-3.5.3-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:94ff86af56a3869a4ae26a9637a849effd7643858a1a04dd5ee50e9ab75069a7"},
{file = "matplotlib-3.5.3-cp37-cp37m-win32.whl", hash = "sha256:35a8ad4dddebd51f94c5d24bec689ec0ec66173bf614374a1244c6241c1595e0"},
{file = "matplotlib-3.5.3-cp37-cp37m-win_amd64.whl", hash = "sha256:43e9d3fa077bf0cc95ded13d331d2156f9973dce17c6f0c8b49ccd57af94dbd9"},
{file = "matplotlib-3.5.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:22227c976ad4dc8c5a5057540421f0d8708c6560744ad2ad638d48e2984e1dbc"},
{file = "matplotlib-3.5.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:bf618a825deb6205f015df6dfe6167a5d9b351203b03fab82043ae1d30f16511"},
{file = "matplotlib-3.5.3-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:9befa5954cdbc085e37d974ff6053da269474177921dd61facdad8023c4aeb51"},
{file = "matplotlib-3.5.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f3840c280ebc87a48488a46f760ea1c0c0c83fcf7abbe2e6baf99d033fd35fd8"},
{file = "matplotlib-3.5.3-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:dacddf5bfcec60e3f26ec5c0ae3d0274853a258b6c3fc5ef2f06a8eb23e042be"},
{file = "matplotlib-3.5.3-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:b428076a55fb1c084c76cb93e68006f27d247169f056412607c5c88828d08f88"},
{file = "matplotlib-3.5.3-cp38-cp38-win32.whl", hash = "sha256:874df7505ba820e0400e7091199decf3ff1fde0583652120c50cd60d5820ca9a"},
{file = "matplotlib-3.5.3-cp38-cp38-win_amd64.whl", hash = "sha256:b28de401d928890187c589036857a270a032961411934bdac4cf12dde3d43094"},
{file = "matplotlib-3.5.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:3211ba82b9f1518d346f6309df137b50c3dc4421b4ed4815d1d7eadc617f45a1"},
{file = "matplotlib-3.5.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6fe807e8a22620b4cd95cfbc795ba310dc80151d43b037257250faf0bfcd82bc"},
{file = "matplotlib-3.5.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:5c096363b206a3caf43773abebdbb5a23ea13faef71d701b21a9c27fdcef72f4"},
{file = "matplotlib-3.5.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0bcdfcb0f976e1bac6721d7d457c17be23cf7501f977b6a38f9d38a3762841f7"},
{file = "matplotlib-3.5.3-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:1e64ac9be9da6bfff0a732e62116484b93b02a0b4d4b19934fb4f8e7ad26ad6a"},
{file = "matplotlib-3.5.3-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:73dd93dc35c85dece610cca8358003bf0760d7986f70b223e2306b4ea6d1406b"},
{file = "matplotlib-3.5.3-cp39-cp39-win32.whl", hash = "sha256:879c7e5fce4939c6aa04581dfe08d57eb6102a71f2e202e3314d5fbc072fd5a0"},
{file = "matplotlib-3.5.3-cp39-cp39-win_amd64.whl", hash = "sha256:ab8d26f07fe64f6f6736d635cce7bfd7f625320490ed5bfc347f2cdb4fae0e56"},
{file = "matplotlib-3.5.3-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:99482b83ebf4eb6d5fc6813d7aacdefdd480f0d9c0b52dcf9f1cc3b2c4b3361a"},
{file = "matplotlib-3.5.3-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:f814504e459c68118bf2246a530ed953ebd18213dc20e3da524174d84ed010b2"},
{file = "matplotlib-3.5.3-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:57f1b4e69f438a99bb64d7f2c340db1b096b41ebaa515cf61ea72624279220ce"},
{file = "matplotlib-3.5.3-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:d2484b350bf3d32cae43f85dcfc89b3ed7bd2bcd781ef351f93eb6fb2cc483f9"},
{file = "matplotlib-3.5.3.tar.gz", hash = "sha256:339cac48b80ddbc8bfd05daae0a3a73414651a8596904c2a881cfd1edb65f26c"},
]
matplotlib-inline = [
{file = "matplotlib-inline-0.1.3.tar.gz", hash = "sha256:a04bfba22e0d1395479f866853ec1ee28eea1485c1d69a6faf00dc3e24ff34ee"},
{file = "matplotlib_inline-0.1.3-py3-none-any.whl", hash = "sha256:aed605ba3b72462d64d475a21a9296f400a19c4f74a31b59103d2a99ffd5aa5c"},
]
mccabe = [
{file = "mccabe-0.6.1-py2.py3-none-any.whl", hash = "sha256:ab8a6258860da4b6677da4bd2fe5dc2c659cff31b3ee4f7f5d64e79735b80d42"},
{file = "mccabe-0.6.1.tar.gz", hash = "sha256:dd8d182285a0fe56bace7f45b5e7d1a6ebcbf524e8f3bd87eb0f125271b8831f"},
]
mistune = [
{file = "mistune-0.8.4-py2.py3-none-any.whl", hash = "sha256:88a1051873018da288eee8538d476dffe1262495144b33ecb586c4ab266bb8d4"},
{file = "mistune-0.8.4.tar.gz", hash = "sha256:59a3429db53c50b5c6bcc8a07f8848cb00d7dc8bdb431a4ab41920d201d4756e"},
]
mpmath = [
{file = "mpmath-1.2.1-py3-none-any.whl", hash = "sha256:604bc21bd22d2322a177c73bdb573994ef76e62edd595d17e00aff24b0667e5c"},
{file = "mpmath-1.2.1.tar.gz", hash = "sha256:79ffb45cf9f4b101a807595bcb3e72e0396202e0b1d25d689134b48c4216a81a"},
]
mypy = [
{file = "mypy-0.971-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:f2899a3cbd394da157194f913a931edfd4be5f274a88041c9dc2d9cdcb1c315c"},
{file = "mypy-0.971-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:98e02d56ebe93981c41211c05adb630d1d26c14195d04d95e49cd97dbc046dc5"},
{file = "mypy-0.971-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:19830b7dba7d5356d3e26e2427a2ec91c994cd92d983142cbd025ebe81d69cf3"},
{file = "mypy-0.971-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:02ef476f6dcb86e6f502ae39a16b93285fef97e7f1ff22932b657d1ef1f28655"},
{file = "mypy-0.971-cp310-cp310-win_amd64.whl", hash = "sha256:25c5750ba5609a0c7550b73a33deb314ecfb559c350bb050b655505e8aed4103"},
{file = "mypy-0.971-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:d3348e7eb2eea2472db611486846742d5d52d1290576de99d59edeb7cd4a42ca"},
{file = "mypy-0.971-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:3fa7a477b9900be9b7dd4bab30a12759e5abe9586574ceb944bc29cddf8f0417"},
{file = "mypy-0.971-cp36-cp36m-win_amd64.whl", hash = "sha256:2ad53cf9c3adc43cf3bea0a7d01a2f2e86db9fe7596dfecb4496a5dda63cbb09"},
{file = "mypy-0.971-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:855048b6feb6dfe09d3353466004490b1872887150c5bb5caad7838b57328cc8"},
{file = "mypy-0.971-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:23488a14a83bca6e54402c2e6435467a4138785df93ec85aeff64c6170077fb0"},
{file = "mypy-0.971-cp37-cp37m-win_amd64.whl", hash = "sha256:4b21e5b1a70dfb972490035128f305c39bc4bc253f34e96a4adf9127cf943eb2"},
{file = "mypy-0.971-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:9796a2ba7b4b538649caa5cecd398d873f4022ed2333ffde58eaf604c4d2cb27"},
{file = "mypy-0.971-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:5a361d92635ad4ada1b1b2d3630fc2f53f2127d51cf2def9db83cba32e47c856"},
{file = "mypy-0.971-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:b793b899f7cf563b1e7044a5c97361196b938e92f0a4343a5d27966a53d2ec71"},
{file = "mypy-0.971-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:d1ea5d12c8e2d266b5fb8c7a5d2e9c0219fedfeb493b7ed60cd350322384ac27"},
{file = "mypy-0.971-cp38-cp38-win_amd64.whl", hash = "sha256:23c7ff43fff4b0df93a186581885c8512bc50fc4d4910e0f838e35d6bb6b5e58"},
{file = "mypy-0.971-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:1f7656b69974a6933e987ee8ffb951d836272d6c0f81d727f1d0e2696074d9e6"},
{file = "mypy-0.971-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:d2022bfadb7a5c2ef410d6a7c9763188afdb7f3533f22a0a32be10d571ee4bbe"},
{file = "mypy-0.971-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:ef943c72a786b0f8d90fd76e9b39ce81fb7171172daf84bf43eaf937e9f220a9"},
{file = "mypy-0.971-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:d744f72eb39f69312bc6c2abf8ff6656973120e2eb3f3ec4f758ed47e414a4bf"},
{file = "mypy-0.971-cp39-cp39-win_amd64.whl", hash = "sha256:77a514ea15d3007d33a9e2157b0ba9c267496acf12a7f2b9b9f8446337aac5b0"},
{file = "mypy-0.971-py3-none-any.whl", hash = "sha256:0d054ef16b071149917085f51f89555a576e2618d5d9dd70bd6eea6410af3ac9"},
{file = "mypy-0.971.tar.gz", hash = "sha256:40b0f21484238269ae6a57200c807d80debc6459d444c0489a102d7c6a75fa56"},
]
mypy-extensions = [
{file = "mypy_extensions-0.4.3-py2.py3-none-any.whl", hash = "sha256:090fedd75945a69ae91ce1303b5824f428daf5a028d2f6ab8a299250a846f15d"},
{file = "mypy_extensions-0.4.3.tar.gz", hash = "sha256:2d82818f5bb3e369420cb3c4060a7970edba416647068eb4c5343488a6c604a8"},
]
nbclient = [
{file = "nbclient-0.6.6-py3-none-any.whl", hash = "sha256:09bae4ea2df79fa6bc50aeb8278d8b79d2036792824337fa6eee834afae17312"},
{file = "nbclient-0.6.6.tar.gz", hash = "sha256:0df76a7961d99a681b4796c74a1f2553b9f998851acc01896dce064ad19a9027"},
]
nbconvert = [
{file = "nbconvert-6.5.3-py3-none-any.whl", hash = "sha256:2564bb5125d862949f72475de0c0348392add7ea62cc950985347bfe7bbc2034"},
{file = "nbconvert-6.5.3.tar.gz", hash = "sha256:10ed693c4cfd3c63583c87ca5c3a2f6ed874145103595f3824efcc8dfcb7522c"},
]
nbformat = [
{file = "nbformat-5.4.0-py3-none-any.whl", hash = "sha256:0d6072aaec95dddc39735c144ee8bbc6589c383fb462e4058abc855348152dad"},
{file = "nbformat-5.4.0.tar.gz", hash = "sha256:44ba5ca6acb80c5d5a500f1e5b83ede8cbe364d5a495c4c8cf60aaf1ba656501"},
]
nbsphinx = [
{file = "nbsphinx-0.8.9-py3-none-any.whl", hash = "sha256:a7d743762249ee6bac3350a91eb3717a6e1c75f239f2c2a85491f9aca5a63be1"},
{file = "nbsphinx-0.8.9.tar.gz", hash = "sha256:4ade86b2a41f8f41efd3ea99dae84c3368fe8ba3f837d50c8815ce9424c5994f"},
]
nest-asyncio = [
{file = "nest_asyncio-1.5.5-py3-none-any.whl", hash = "sha256:b98e3ec1b246135e4642eceffa5a6c23a3ab12c82ff816a92c612d68205813b2"},
{file = "nest_asyncio-1.5.5.tar.gz", hash = "sha256:e442291cd942698be619823a17a86a5759eabe1f8613084790de189fe9e16d65"},
]
networkx = [
{file = "networkx-2.8.5-py3-none-any.whl", hash = "sha256:a762f4b385692d9c3a6f2912d058d76d29a827deaedf9e63ed14d397b8030687"},
{file = "networkx-2.8.5.tar.gz", hash = "sha256:15a7b81a360791c458c55a417418ea136c13378cfdc06a2dcdc12bd2f9cf09c1"},
]
notebook = [
{file = "notebook-6.4.12-py3-none-any.whl", hash = "sha256:8c07a3bb7640e371f8a609bdbb2366a1976c6a2589da8ef917f761a61e3ad8b1"},
{file = "notebook-6.4.12.tar.gz", hash = "sha256:6268c9ec9048cff7a45405c990c29ac9ca40b0bc3ec29263d218c5e01f2b4e86"},
]
numba = [
{file = "numba-0.53.1-cp36-cp36m-macosx_10_14_x86_64.whl", hash = "sha256:b23de6b6837c132087d06b8b92d343edb54b885873b824a037967fbd5272ebb7"},
{file = "numba-0.53.1-cp36-cp36m-manylinux2014_i686.whl", hash = "sha256:6545b9e9b0c112b81de7f88a3c787469a357eeff8211e90b8f45ee243d521cc2"},
{file = "numba-0.53.1-cp36-cp36m-manylinux2014_x86_64.whl", hash = "sha256:8fa5c963a43855050a868106a87cd614f3c3f459951c8fc468aec263ef80d063"},
{file = "numba-0.53.1-cp36-cp36m-win32.whl", hash = "sha256:aaa6ebf56afb0b6752607b9f3bf39e99b0efe3c1fa6849698373925ee6838fd7"},
{file = "numba-0.53.1-cp36-cp36m-win_amd64.whl", hash = "sha256:b08b3df38aab769df79ed948d70f0a54a3cdda49d58af65369235c204ec5d0f3"},
{file = "numba-0.53.1-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:bf5c463b62d013e3f709cc8277adf2f4f4d8cc6757293e29c6db121b77e6b760"},
{file = "numba-0.53.1-cp37-cp37m-manylinux2014_i686.whl", hash = "sha256:74df02e73155f669e60dcff07c4eef4a03dbf5b388594db74142ab40914fe4f5"},
{file = "numba-0.53.1-cp37-cp37m-manylinux2014_x86_64.whl", hash = "sha256:5165709bf62f28667e10b9afe6df0ce1037722adab92d620f59cb8bbb8104641"},
{file = "numba-0.53.1-cp37-cp37m-win32.whl", hash = "sha256:2e96958ed2ca7e6d967b2ce29c8da0ca47117e1de28e7c30b2c8c57386506fa5"},
{file = "numba-0.53.1-cp37-cp37m-win_amd64.whl", hash = "sha256:276f9d1674fe08d95872d81b97267c6b39dd830f05eb992608cbede50fcf48a9"},
{file = "numba-0.53.1-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:4c4c8d102512ae472af52c76ad9522da718c392cb59f4cd6785d711fa5051a2a"},
{file = "numba-0.53.1-cp38-cp38-manylinux2014_i686.whl", hash = "sha256:691adbeac17dbdf6ed7c759e9e33a522351f07d2065fe926b264b6b2c15fd89b"},
{file = "numba-0.53.1-cp38-cp38-manylinux2014_x86_64.whl", hash = "sha256:94aab3e0e9e8754116325ce026e1b29ae72443c706a3104cf7f3368dc3012912"},
{file = "numba-0.53.1-cp38-cp38-win32.whl", hash = "sha256:aabeec89bb3e3162136eea492cea7ee8882ddcda2201f05caecdece192c40896"},
{file = "numba-0.53.1-cp38-cp38-win_amd64.whl", hash = "sha256:1895ebd256819ff22256cd6fe24aa8f7470b18acc73e7917e8e93c9ac7f565dc"},
{file = "numba-0.53.1-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:224d197a46a9e602a16780d87636e199e2cdef528caef084a4d8fd8909c2455c"},
{file = "numba-0.53.1-cp39-cp39-manylinux2014_i686.whl", hash = "sha256:aba7acb247a09d7f12bd17a8e28bbb04e8adef9fc20ca29835d03b7894e1b49f"},
{file = "numba-0.53.1-cp39-cp39-manylinux2014_x86_64.whl", hash = "sha256:bd126f1f49da6fc4b3169cf1d96f1c3b3f84a7badd11fe22da344b923a00e744"},
{file = "numba-0.53.1-cp39-cp39-win32.whl", hash = "sha256:0ef9d1f347b251282ae46e5a5033600aa2d0dfa1ee8c16cb8137b8cd6f79e221"},
{file = "numba-0.53.1-cp39-cp39-win_amd64.whl", hash = "sha256:17146885cbe4e89c9d4abd4fcb8886dee06d4591943dc4343500c36ce2fcfa69"},
{file = "numba-0.53.1.tar.gz", hash = "sha256:9cd4e5216acdc66c4e9dab2dfd22ddb5bef151185c070d4a3cd8e78638aff5b0"},
]
numpy = [
{file = "numpy-1.23.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:b15c3f1ed08df4980e02cc79ee058b788a3d0bef2fb3c9ca90bb8cbd5b8a3a04"},
{file = "numpy-1.23.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:9ce242162015b7e88092dccd0e854548c0926b75c7924a3495e02c6067aba1f5"},
{file = "numpy-1.23.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e0d7447679ae9a7124385ccf0ea990bb85bb869cef217e2ea6c844b6a6855073"},
{file = "numpy-1.23.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3119daed207e9410eaf57dcf9591fdc68045f60483d94956bee0bfdcba790953"},
{file = "numpy-1.23.1-cp310-cp310-win32.whl", hash = "sha256:3ab67966c8d45d55a2bdf40701536af6443763907086c0a6d1232688e27e5447"},
{file = "numpy-1.23.1-cp310-cp310-win_amd64.whl", hash = "sha256:1865fdf51446839ca3fffaab172461f2b781163f6f395f1aed256b1ddc253622"},
{file = "numpy-1.23.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:aeba539285dcf0a1ba755945865ec61240ede5432df41d6e29fab305f4384db2"},
{file = "numpy-1.23.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:7e8229f3687cdadba2c4faef39204feb51ef7c1a9b669247d49a24f3e2e1617c"},
{file = "numpy-1.23.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:68b69f52e6545af010b76516f5daaef6173e73353e3295c5cb9f96c35d755641"},
{file = "numpy-1.23.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1408c3527a74a0209c781ac82bde2182b0f0bf54dea6e6a363fe0cc4488a7ce7"},
{file = "numpy-1.23.1-cp38-cp38-win32.whl", hash = "sha256:47f10ab202fe4d8495ff484b5561c65dd59177949ca07975663f4494f7269e3e"},
{file = "numpy-1.23.1-cp38-cp38-win_amd64.whl", hash = "sha256:37e5ebebb0eb54c5b4a9b04e6f3018e16b8ef257d26c8945925ba8105008e645"},
{file = "numpy-1.23.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:173f28921b15d341afadf6c3898a34f20a0569e4ad5435297ba262ee8941e77b"},
{file = "numpy-1.23.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:876f60de09734fbcb4e27a97c9a286b51284df1326b1ac5f1bf0ad3678236b22"},
{file = "numpy-1.23.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:35590b9c33c0f1c9732b3231bb6a72d1e4f77872390c47d50a615686ae7ed3fd"},
{file = "numpy-1.23.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a35c4e64dfca659fe4d0f1421fc0f05b8ed1ca8c46fb73d9e5a7f175f85696bb"},
{file = "numpy-1.23.1-cp39-cp39-win32.whl", hash = "sha256:c2f91f88230042a130ceb1b496932aa717dcbd665350beb821534c5c7e15881c"},
{file = "numpy-1.23.1-cp39-cp39-win_amd64.whl", hash = "sha256:37ece2bd095e9781a7156852e43d18044fd0d742934833335599c583618181b9"},
{file = "numpy-1.23.1-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:8002574a6b46ac3b5739a003b5233376aeac5163e5dcd43dd7ad062f3e186129"},
{file = "numpy-1.23.1-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d732d17b8a9061540a10fda5bfeabca5785700ab5469a5e9b93aca5e2d3a5fb"},
{file = "numpy-1.23.1-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:55df0f7483b822855af67e38fb3a526e787adf189383b4934305565d71c4b148"},
{file = "numpy-1.23.1.tar.gz", hash = "sha256:d748ef349bfef2e1194b59da37ed5a29c19ea8d7e6342019921ba2ba4fd8b624"},
]
oauthlib = [
{file = "oauthlib-3.2.0-py3-none-any.whl", hash = "sha256:6db33440354787f9b7f3a6dbd4febf5d0f93758354060e802f6c06cb493022fe"},
{file = "oauthlib-3.2.0.tar.gz", hash = "sha256:23a8208d75b902797ea29fd31fa80a15ed9dc2c6c16fe73f5d346f83f6fa27a2"},
]
opt-einsum = [
{file = "opt_einsum-3.3.0-py3-none-any.whl", hash = "sha256:2455e59e3947d3c275477df7f5205b30635e266fe6dc300e3d9f9646bfcea147"},
{file = "opt_einsum-3.3.0.tar.gz", hash = "sha256:59f6475f77bbc37dcf7cd748519c0ec60722e91e63ca114e68821c0c54a46549"},
]
packaging = [
{file = "packaging-21.3-py3-none-any.whl", hash = "sha256:ef103e05f519cdc783ae24ea4e2e0f508a9c99b2d4969652eed6a2e1ea5bd522"},
{file = "packaging-21.3.tar.gz", hash = "sha256:dd47c42927d89ab911e606518907cc2d3a1f38bbd026385970643f9c5b8ecfeb"},
]
pandas = [
{file = "pandas-1.4.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:d51674ed8e2551ef7773820ef5dab9322be0828629f2cbf8d1fc31a0c4fed640"},
{file = "pandas-1.4.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:16ad23db55efcc93fa878f7837267973b61ea85d244fc5ff0ccbcfa5638706c5"},
{file = "pandas-1.4.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:958a0588149190c22cdebbc0797e01972950c927a11a900fe6c2296f207b1d6f"},
{file = "pandas-1.4.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e48fbb64165cda451c06a0f9e4c7a16b534fcabd32546d531b3c240ce2844112"},
{file = "pandas-1.4.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6f803320c9da732cc79210d7e8cc5c8019aad512589c910c66529eb1b1818230"},
{file = "pandas-1.4.3-cp310-cp310-win_amd64.whl", hash = "sha256:2893e923472a5e090c2d5e8db83e8f907364ec048572084c7d10ef93546be6d1"},
{file = "pandas-1.4.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:24ea75f47bbd5574675dae21d51779a4948715416413b30614c1e8b480909f81"},
{file = "pandas-1.4.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:d5ebc990bd34f4ac3c73a2724c2dcc9ee7bf1ce6cf08e87bb25c6ad33507e318"},
{file = "pandas-1.4.3-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:d6c0106415ff1a10c326c49bc5dd9ea8b9897a6ca0c8688eb9c30ddec49535ef"},
{file = "pandas-1.4.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:78b00429161ccb0da252229bcda8010b445c4bf924e721265bec5a6e96a92e92"},
{file = "pandas-1.4.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6dfbf16b1ea4f4d0ee11084d9c026340514d1d30270eaa82a9f1297b6c8ecbf0"},
{file = "pandas-1.4.3-cp38-cp38-win32.whl", hash = "sha256:48350592665ea3cbcd07efc8c12ff12d89be09cd47231c7925e3b8afada9d50d"},
{file = "pandas-1.4.3-cp38-cp38-win_amd64.whl", hash = "sha256:605d572126eb4ab2eadf5c59d5d69f0608df2bf7bcad5c5880a47a20a0699e3e"},
{file = "pandas-1.4.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:a3924692160e3d847e18702bb048dc38e0e13411d2b503fecb1adf0fcf950ba4"},
{file = "pandas-1.4.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:07238a58d7cbc8a004855ade7b75bbd22c0db4b0ffccc721556bab8a095515f6"},
{file = "pandas-1.4.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:755679c49460bd0d2f837ab99f0a26948e68fa0718b7e42afbabd074d945bf84"},
{file = "pandas-1.4.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:41fc406e374590a3d492325b889a2686b31e7a7780bec83db2512988550dadbf"},
{file = "pandas-1.4.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1d9382f72a4f0e93909feece6fef5500e838ce1c355a581b3d8f259839f2ea76"},
{file = "pandas-1.4.3-cp39-cp39-win32.whl", hash = "sha256:0daf876dba6c622154b2e6741f29e87161f844e64f84801554f879d27ba63c0d"},
{file = "pandas-1.4.3-cp39-cp39-win_amd64.whl", hash = "sha256:721a3dd2f06ef942f83a819c0f3f6a648b2830b191a72bbe9451bcd49c3bd42e"},
{file = "pandas-1.4.3.tar.gz", hash = "sha256:2ff7788468e75917574f080cd4681b27e1a7bf36461fe968b49a87b5a54d007c"},
]
pandoc = [
{file = "pandoc-2.2.tar.gz", hash = "sha256:d063ee252f2761010f16cf3a14912ad9245187c24c56f4b1ad9696e104fe6e1e"},
]
pandocfilters = [
{file = "pandocfilters-1.5.0-py2.py3-none-any.whl", hash = "sha256:33aae3f25fd1a026079f5d27bdd52496f0e0803b3469282162bafdcbdf6ef14f"},
{file = "pandocfilters-1.5.0.tar.gz", hash = "sha256:0b679503337d233b4339a817bfc8c50064e2eff681314376a47cb582305a7a38"},
]
parso = [
{file = "parso-0.8.3-py2.py3-none-any.whl", hash = "sha256:c001d4636cd3aecdaf33cbb40aebb59b094be2a74c556778ef5576c175e19e75"},
{file = "parso-0.8.3.tar.gz", hash = "sha256:8c07be290bb59f03588915921e29e8a50002acaf2cdc5fa0e0114f91709fafa0"},
]
pastel = [
{file = "pastel-0.2.1-py2.py3-none-any.whl", hash = "sha256:4349225fcdf6c2bb34d483e523475de5bb04a5c10ef711263452cb37d7dd4364"},
{file = "pastel-0.2.1.tar.gz", hash = "sha256:e6581ac04e973cac858828c6202c1e1e81fee1dc7de7683f3e1ffe0bfd8a573d"},
]
pathspec = [
{file = "pathspec-0.9.0-py2.py3-none-any.whl", hash = "sha256:7d15c4ddb0b5c802d161efc417ec1a2558ea2653c2e8ad9c19098201dc1c993a"},
{file = "pathspec-0.9.0.tar.gz", hash = "sha256:e564499435a2673d586f6b2130bb5b95f04a3ba06f81b8f895b651a3c76aabb1"},
]
patsy = [
{file = "patsy-0.5.2-py2.py3-none-any.whl", hash = "sha256:cc80955ae8c13a7e7c4051eda7b277c8f909f50bc7d73e124bc38e2ee3d95041"},
{file = "patsy-0.5.2.tar.gz", hash = "sha256:5053de7804676aba62783dbb0f23a2b3d74e35e5bfa238b88b7cbf148a38b69d"},
]
pexpect = [
{file = "pexpect-4.8.0-py2.py3-none-any.whl", hash = "sha256:0b48a55dcb3c05f3329815901ea4fc1537514d6ba867a152b581d69ae3710937"},
{file = "pexpect-4.8.0.tar.gz", hash = "sha256:fc65a43959d153d0114afe13997d439c22823a27cefceb5ff35c2178c6784c0c"},
]
pickleshare = [
{file = "pickleshare-0.7.5-py2.py3-none-any.whl", hash = "sha256:9649af414d74d4df115d5d718f82acb59c9d418196b7b4290ed47a12ce62df56"},
{file = "pickleshare-0.7.5.tar.gz", hash = "sha256:87683d47965c1da65cdacaf31c8441d12b8044cdec9aca500cd78fc2c683afca"},
]
pillow = [
{file = "Pillow-9.2.0-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:a9c9bc489f8ab30906d7a85afac4b4944a572a7432e00698a7239f44a44e6efb"},
{file = "Pillow-9.2.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:510cef4a3f401c246cfd8227b300828715dd055463cdca6176c2e4036df8bd4f"},
{file = "Pillow-9.2.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7888310f6214f19ab2b6df90f3f06afa3df7ef7355fc025e78a3044737fab1f5"},
{file = "Pillow-9.2.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:831e648102c82f152e14c1a0938689dbb22480c548c8d4b8b248b3e50967b88c"},
{file = "Pillow-9.2.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1cc1d2451e8a3b4bfdb9caf745b58e6c7a77d2e469159b0d527a4554d73694d1"},
{file = "Pillow-9.2.0-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:136659638f61a251e8ed3b331fc6ccd124590eeff539de57c5f80ef3a9594e58"},
{file = "Pillow-9.2.0-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:6e8c66f70fb539301e064f6478d7453e820d8a2c631da948a23384865cd95544"},
{file = "Pillow-9.2.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:37ff6b522a26d0538b753f0b4e8e164fdada12db6c6f00f62145d732d8a3152e"},
{file = "Pillow-9.2.0-cp310-cp310-win32.whl", hash = "sha256:c79698d4cd9318d9481d89a77e2d3fcaeff5486be641e60a4b49f3d2ecca4e28"},
{file = "Pillow-9.2.0-cp310-cp310-win_amd64.whl", hash = "sha256:254164c57bab4b459f14c64e93df11eff5ded575192c294a0c49270f22c5d93d"},
{file = "Pillow-9.2.0-cp311-cp311-macosx_10_10_universal2.whl", hash = "sha256:408673ed75594933714482501fe97e055a42996087eeca7e5d06e33218d05aa8"},
{file = "Pillow-9.2.0-cp311-cp311-macosx_11_0_universal2.whl", hash = "sha256:727dd1389bc5cb9827cbd1f9d40d2c2a1a0c9b32dd2261db522d22a604a6eec9"},
{file = "Pillow-9.2.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:50dff9cc21826d2977ef2d2a205504034e3a4563ca6f5db739b0d1026658e004"},
{file = "Pillow-9.2.0-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cb6259196a589123d755380b65127ddc60f4c64b21fc3bb46ce3a6ea663659b0"},
{file = "Pillow-9.2.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7b0554af24df2bf96618dac71ddada02420f946be943b181108cac55a7a2dcd4"},
{file = "Pillow-9.2.0-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:15928f824870535c85dbf949c09d6ae7d3d6ac2d6efec80f3227f73eefba741c"},
{file = "Pillow-9.2.0-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:bdd0de2d64688ecae88dd8935012c4a72681e5df632af903a1dca8c5e7aa871a"},
{file = "Pillow-9.2.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:d5b87da55a08acb586bad5c3aa3b86505f559b84f39035b233d5bf844b0834b1"},
{file = "Pillow-9.2.0-cp311-cp311-win32.whl", hash = "sha256:b6d5e92df2b77665e07ddb2e4dbd6d644b78e4c0d2e9272a852627cdba0d75cf"},
{file = "Pillow-9.2.0-cp311-cp311-win_amd64.whl", hash = "sha256:6bf088c1ce160f50ea40764f825ec9b72ed9da25346216b91361eef8ad1b8f8c"},
{file = "Pillow-9.2.0-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:2c58b24e3a63efd22554c676d81b0e57f80e0a7d3a5874a7e14ce90ec40d3069"},
{file = "Pillow-9.2.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:eef7592281f7c174d3d6cbfbb7ee5984a671fcd77e3fc78e973d492e9bf0eb3f"},
{file = "Pillow-9.2.0-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:dcd7b9c7139dc8258d164b55696ecd16c04607f1cc33ba7af86613881ffe4ac8"},
{file = "Pillow-9.2.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a138441e95562b3c078746a22f8fca8ff1c22c014f856278bdbdd89ca36cff1b"},
{file = "Pillow-9.2.0-cp37-cp37m-manylinux_2_28_aarch64.whl", hash = "sha256:93689632949aff41199090eff5474f3990b6823404e45d66a5d44304e9cdc467"},
{file = "Pillow-9.2.0-cp37-cp37m-manylinux_2_28_x86_64.whl", hash = "sha256:f3fac744f9b540148fa7715a435d2283b71f68bfb6d4aae24482a890aed18b59"},
{file = "Pillow-9.2.0-cp37-cp37m-win32.whl", hash = "sha256:fa768eff5f9f958270b081bb33581b4b569faabf8774726b283edb06617101dc"},
{file = "Pillow-9.2.0-cp37-cp37m-win_amd64.whl", hash = "sha256:69bd1a15d7ba3694631e00df8de65a8cb031911ca11f44929c97fe05eb9b6c1d"},
{file = "Pillow-9.2.0-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:030e3460861488e249731c3e7ab59b07c7853838ff3b8e16aac9561bb345da14"},
{file = "Pillow-9.2.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:74a04183e6e64930b667d321524e3c5361094bb4af9083db5c301db64cd341f3"},
{file = "Pillow-9.2.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2d33a11f601213dcd5718109c09a52c2a1c893e7461f0be2d6febc2879ec2402"},
{file = "Pillow-9.2.0-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1fd6f5e3c0e4697fa7eb45b6e93996299f3feee73a3175fa451f49a74d092b9f"},
{file = "Pillow-9.2.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a647c0d4478b995c5e54615a2e5360ccedd2f85e70ab57fbe817ca613d5e63b8"},
{file = "Pillow-9.2.0-cp38-cp38-manylinux_2_28_aarch64.whl", hash = "sha256:4134d3f1ba5f15027ff5c04296f13328fecd46921424084516bdb1b2548e66ff"},
{file = "Pillow-9.2.0-cp38-cp38-manylinux_2_28_x86_64.whl", hash = "sha256:bc431b065722a5ad1dfb4df354fb9333b7a582a5ee39a90e6ffff688d72f27a1"},
{file = "Pillow-9.2.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:1536ad017a9f789430fb6b8be8bf99d2f214c76502becc196c6f2d9a75b01b76"},
{file = "Pillow-9.2.0-cp38-cp38-win32.whl", hash = "sha256:2ad0d4df0f5ef2247e27fc790d5c9b5a0af8ade9ba340db4a73bb1a4a3e5fb4f"},
{file = "Pillow-9.2.0-cp38-cp38-win_amd64.whl", hash = "sha256:ec52c351b35ca269cb1f8069d610fc45c5bd38c3e91f9ab4cbbf0aebc136d9c8"},
{file = "Pillow-9.2.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:0ed2c4ef2451de908c90436d6e8092e13a43992f1860275b4d8082667fbb2ffc"},
{file = "Pillow-9.2.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:4ad2f835e0ad81d1689f1b7e3fbac7b01bb8777d5a985c8962bedee0cc6d43da"},
{file = "Pillow-9.2.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ea98f633d45f7e815db648fd7ff0f19e328302ac36427343e4432c84432e7ff4"},
{file = "Pillow-9.2.0-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7761afe0126d046974a01e030ae7529ed0ca6a196de3ec6937c11df0df1bc91c"},
{file = "Pillow-9.2.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9a54614049a18a2d6fe156e68e188da02a046a4a93cf24f373bffd977e943421"},
{file = "Pillow-9.2.0-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:5aed7dde98403cd91d86a1115c78d8145c83078e864c1de1064f52e6feb61b20"},
{file = "Pillow-9.2.0-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:13b725463f32df1bfeacbf3dd197fb358ae8ebcd8c5548faa75126ea425ccb60"},
{file = "Pillow-9.2.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:808add66ea764ed97d44dda1ac4f2cfec4c1867d9efb16a33d158be79f32b8a4"},
{file = "Pillow-9.2.0-cp39-cp39-win32.whl", hash = "sha256:337a74fd2f291c607d220c793a8135273c4c2ab001b03e601c36766005f36885"},
{file = "Pillow-9.2.0-cp39-cp39-win_amd64.whl", hash = "sha256:fac2d65901fb0fdf20363fbd345c01958a742f2dc62a8dd4495af66e3ff502a4"},
{file = "Pillow-9.2.0-pp37-pypy37_pp73-macosx_10_10_x86_64.whl", hash = "sha256:ad2277b185ebce47a63f4dc6302e30f05762b688f8dc3de55dbae4651872cdf3"},
{file = "Pillow-9.2.0-pp37-pypy37_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7c7b502bc34f6e32ba022b4a209638f9e097d7a9098104ae420eb8186217ebbb"},
{file = "Pillow-9.2.0-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3d1f14f5f691f55e1b47f824ca4fdcb4b19b4323fe43cc7bb105988cad7496be"},
{file = "Pillow-9.2.0-pp37-pypy37_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:dfe4c1fedfde4e2fbc009d5ad420647f7730d719786388b7de0999bf32c0d9fd"},
{file = "Pillow-9.2.0-pp38-pypy38_pp73-macosx_10_10_x86_64.whl", hash = "sha256:f07f1f00e22b231dd3d9b9208692042e29792d6bd4f6639415d2f23158a80013"},
{file = "Pillow-9.2.0-pp38-pypy38_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1802f34298f5ba11d55e5bb09c31997dc0c6aed919658dfdf0198a2fe75d5490"},
{file = "Pillow-9.2.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:17d4cafe22f050b46d983b71c707162d63d796a1235cdf8b9d7a112e97b15bac"},
{file = "Pillow-9.2.0-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:96b5e6874431df16aee0c1ba237574cb6dff1dcb173798faa6a9d8b399a05d0e"},
{file = "Pillow-9.2.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:0030fdbd926fb85844b8b92e2f9449ba89607231d3dd597a21ae72dc7fe26927"},
{file = "Pillow-9.2.0.tar.gz", hash = "sha256:75e636fd3e0fb872693f23ccb8a5ff2cd578801251f3a4f6854c6a5d437d3c04"},
]
pkginfo = [
{file = "pkginfo-1.8.3-py2.py3-none-any.whl", hash = "sha256:848865108ec99d4901b2f7e84058b6e7660aae8ae10164e015a6dcf5b242a594"},
{file = "pkginfo-1.8.3.tar.gz", hash = "sha256:a84da4318dd86f870a9447a8c98340aa06216bfc6f2b7bdc4b8766984ae1867c"},
]
pkgutil-resolve-name = [
{file = "pkgutil_resolve_name-1.3.10-py3-none-any.whl", hash = "sha256:ca27cc078d25c5ad71a9de0a7a330146c4e014c2462d9af19c6b828280649c5e"},
{file = "pkgutil_resolve_name-1.3.10.tar.gz", hash = "sha256:357d6c9e6a755653cfd78893817c0853af365dd51ec97f3d358a819373bbd174"},
]
platformdirs = [
{file = "platformdirs-2.5.2-py3-none-any.whl", hash = "sha256:027d8e83a2d7de06bbac4e5ef7e023c02b863d7ea5d079477e722bb41ab25788"},
{file = "platformdirs-2.5.2.tar.gz", hash = "sha256:58c8abb07dcb441e6ee4b11d8df0ac856038f944ab98b7be6b27b2a3c7feef19"},
]
pluggy = [
{file = "pluggy-1.0.0-py2.py3-none-any.whl", hash = "sha256:74134bbf457f031a36d68416e1509f34bd5ccc019f0bcc952c7b909d06b37bd3"},
{file = "pluggy-1.0.0.tar.gz", hash = "sha256:4224373bacce55f955a878bf9cfa763c1e360858e330072059e10bad68531159"},
]
plumbum = [
{file = "plumbum-1.7.2-py2.py3-none-any.whl", hash = "sha256:0bbf431e31da988405de2fb36c3226f09c0c9cdf69c8480f8997f4b94b7370a1"},
{file = "plumbum-1.7.2.tar.gz", hash = "sha256:0d1bf908076bbd0484d16412479cb97d6843069ee19f99e267e11dd980040523"},
]
ply = [
{file = "ply-3.11-py2.py3-none-any.whl", hash = "sha256:096f9b8350b65ebd2fd1346b12452efe5b9607f7482813ffca50c22722a807ce"},
{file = "ply-3.11.tar.gz", hash = "sha256:00c7c1aaa88358b9c765b6d3000c6eec0ba42abca5351b095321aef446081da3"},
]
poethepoet = [
{file = "poethepoet-0.16.0-py3-none-any.whl", hash = "sha256:87482ea8bba4e5db4abbd8e6360baee73b2ce0f3d5f5e99e81cdfa39d72d118f"},
{file = "poethepoet-0.16.0.tar.gz", hash = "sha256:6455aec39f198be92dbf210a4416e1635119e967204c092b431c8b10024db1d1"},
]
progressbar2 = [
{file = "progressbar2-4.0.0-py2.py3-none-any.whl", hash = "sha256:2562ba3e554433f08e81fb7b786208b19de135f3ca1c5da1787d9b05558e6247"},
{file = "progressbar2-4.0.0.tar.gz", hash = "sha256:14d3165a1781d053ffaa117daf27cc706128d2ec1d2977fdb05b6bb079888013"},
]
prometheus-client = [
{file = "prometheus_client-0.14.1-py3-none-any.whl", hash = "sha256:522fded625282822a89e2773452f42df14b5a8e84a86433e3f8a189c1d54dc01"},
{file = "prometheus_client-0.14.1.tar.gz", hash = "sha256:5459c427624961076277fdc6dc50540e2bacb98eebde99886e59ec55ed92093a"},
]
prompt-toolkit = [
{file = "prompt_toolkit-3.0.30-py3-none-any.whl", hash = "sha256:d8916d3f62a7b67ab353a952ce4ced6a1d2587dfe9ef8ebc30dd7c386751f289"},
{file = "prompt_toolkit-3.0.30.tar.gz", hash = "sha256:859b283c50bde45f5f97829f77a4674d1c1fcd88539364f1b28a37805cfd89c0"},
]
protobuf = [
{file = "protobuf-3.19.4-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:f51d5a9f137f7a2cec2d326a74b6e3fc79d635d69ffe1b036d39fc7d75430d37"},
{file = "protobuf-3.19.4-cp310-cp310-manylinux2014_aarch64.whl", hash = "sha256:09297b7972da685ce269ec52af761743714996b4381c085205914c41fcab59fb"},
{file = "protobuf-3.19.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:072fbc78d705d3edc7ccac58a62c4c8e0cec856987da7df8aca86e647be4e35c"},
{file = "protobuf-3.19.4-cp310-cp310-win32.whl", hash = "sha256:7bb03bc2873a2842e5ebb4801f5c7ff1bfbdf426f85d0172f7644fcda0671ae0"},
{file = "protobuf-3.19.4-cp310-cp310-win_amd64.whl", hash = "sha256:f358aa33e03b7a84e0d91270a4d4d8f5df6921abe99a377828839e8ed0c04e07"},
{file = "protobuf-3.19.4-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:1c91ef4110fdd2c590effb5dca8fdbdcb3bf563eece99287019c4204f53d81a4"},
{file = "protobuf-3.19.4-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c438268eebb8cf039552897d78f402d734a404f1360592fef55297285f7f953f"},
{file = "protobuf-3.19.4-cp36-cp36m-win32.whl", hash = "sha256:835a9c949dc193953c319603b2961c5c8f4327957fe23d914ca80d982665e8ee"},
{file = "protobuf-3.19.4-cp36-cp36m-win_amd64.whl", hash = "sha256:4276cdec4447bd5015453e41bdc0c0c1234eda08420b7c9a18b8d647add51e4b"},
{file = "protobuf-3.19.4-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:6cbc312be5e71869d9d5ea25147cdf652a6781cf4d906497ca7690b7b9b5df13"},
{file = "protobuf-3.19.4-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:54a1473077f3b616779ce31f477351a45b4fef8c9fd7892d6d87e287a38df368"},
{file = "protobuf-3.19.4-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:435bb78b37fc386f9275a7035fe4fb1364484e38980d0dd91bc834a02c5ec909"},
{file = "protobuf-3.19.4-cp37-cp37m-win32.whl", hash = "sha256:16f519de1313f1b7139ad70772e7db515b1420d208cb16c6d7858ea989fc64a9"},
{file = "protobuf-3.19.4-cp37-cp37m-win_amd64.whl", hash = "sha256:cdc076c03381f5c1d9bb1abdcc5503d9ca8b53cf0a9d31a9f6754ec9e6c8af0f"},
{file = "protobuf-3.19.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:69da7d39e39942bd52848438462674c463e23963a1fdaa84d88df7fbd7e749b2"},
{file = "protobuf-3.19.4-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:48ed3877fa43e22bcacc852ca76d4775741f9709dd9575881a373bd3e85e54b2"},
{file = "protobuf-3.19.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bd95d1dfb9c4f4563e6093a9aa19d9c186bf98fa54da5252531cc0d3a07977e7"},
{file = "protobuf-3.19.4-cp38-cp38-win32.whl", hash = "sha256:b38057450a0c566cbd04890a40edf916db890f2818e8682221611d78dc32ae26"},
{file = "protobuf-3.19.4-cp38-cp38-win_amd64.whl", hash = "sha256:7ca7da9c339ca8890d66958f5462beabd611eca6c958691a8fe6eccbd1eb0c6e"},
{file = "protobuf-3.19.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:36cecbabbda242915529b8ff364f2263cd4de7c46bbe361418b5ed859677ba58"},
{file = "protobuf-3.19.4-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:c1068287025f8ea025103e37d62ffd63fec8e9e636246b89c341aeda8a67c934"},
{file = "protobuf-3.19.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:96bd766831596d6014ca88d86dc8fe0fb2e428c0b02432fd9db3943202bf8c5e"},
{file = "protobuf-3.19.4-cp39-cp39-win32.whl", hash = "sha256:84123274d982b9e248a143dadd1b9815049f4477dc783bf84efe6250eb4b836a"},
{file = "protobuf-3.19.4-cp39-cp39-win_amd64.whl", hash = "sha256:3112b58aac3bac9c8be2b60a9daf6b558ca3f7681c130dcdd788ade7c9ffbdca"},
{file = "protobuf-3.19.4-py2.py3-none-any.whl", hash = "sha256:8961c3a78ebfcd000920c9060a262f082f29838682b1f7201889300c1fbe0616"},
{file = "protobuf-3.19.4.tar.gz", hash = "sha256:9df0c10adf3e83015ced42a9a7bd64e13d06c4cf45c340d2c63020ea04499d0a"},
]
psutil = [
{file = "psutil-5.9.1-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:799759d809c31aab5fe4579e50addf84565e71c1dc9f1c31258f159ff70d3f87"},
{file = "psutil-5.9.1-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:9272167b5f5fbfe16945be3db475b3ce8d792386907e673a209da686176552af"},
{file = "psutil-5.9.1-cp27-cp27m-win32.whl", hash = "sha256:0904727e0b0a038830b019551cf3204dd48ef5c6868adc776e06e93d615fc5fc"},
{file = "psutil-5.9.1-cp27-cp27m-win_amd64.whl", hash = "sha256:e7e10454cb1ab62cc6ce776e1c135a64045a11ec4c6d254d3f7689c16eb3efd2"},
{file = "psutil-5.9.1-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:56960b9e8edcca1456f8c86a196f0c3d8e3e361320071c93378d41445ffd28b0"},
{file = "psutil-5.9.1-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:44d1826150d49ffd62035785a9e2c56afcea66e55b43b8b630d7706276e87f22"},
{file = "psutil-5.9.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:c7be9d7f5b0d206f0bbc3794b8e16fb7dbc53ec9e40bbe8787c6f2d38efcf6c9"},
{file = "psutil-5.9.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:abd9246e4cdd5b554a2ddd97c157e292ac11ef3e7af25ac56b08b455c829dca8"},
{file = "psutil-5.9.1-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:29a442e25fab1f4d05e2655bb1b8ab6887981838d22effa2396d584b740194de"},
{file = "psutil-5.9.1-cp310-cp310-win32.whl", hash = "sha256:20b27771b077dcaa0de1de3ad52d22538fe101f9946d6dc7869e6f694f079329"},
{file = "psutil-5.9.1-cp310-cp310-win_amd64.whl", hash = "sha256:58678bbadae12e0db55186dc58f2888839228ac9f41cc7848853539b70490021"},
{file = "psutil-5.9.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:3a76ad658641172d9c6e593de6fe248ddde825b5866464c3b2ee26c35da9d237"},
{file = "psutil-5.9.1-cp36-cp36m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a6a11e48cb93a5fa606306493f439b4aa7c56cb03fc9ace7f6bfa21aaf07c453"},
{file = "psutil-5.9.1-cp36-cp36m-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:068935df39055bf27a29824b95c801c7a5130f118b806eee663cad28dca97685"},
{file = "psutil-5.9.1-cp36-cp36m-win32.whl", hash = "sha256:0f15a19a05f39a09327345bc279c1ba4a8cfb0172cc0d3c7f7d16c813b2e7d36"},
{file = "psutil-5.9.1-cp36-cp36m-win_amd64.whl", hash = "sha256:db417f0865f90bdc07fa30e1aadc69b6f4cad7f86324b02aa842034efe8d8c4d"},
{file = "psutil-5.9.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:91c7ff2a40c373d0cc9121d54bc5f31c4fa09c346528e6a08d1845bce5771ffc"},
{file = "psutil-5.9.1-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fea896b54f3a4ae6f790ac1d017101252c93f6fe075d0e7571543510f11d2676"},
{file = "psutil-5.9.1-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3054e923204b8e9c23a55b23b6df73a8089ae1d075cb0bf711d3e9da1724ded4"},
{file = "psutil-5.9.1-cp37-cp37m-win32.whl", hash = "sha256:d2d006286fbcb60f0b391741f520862e9b69f4019b4d738a2a45728c7e952f1b"},
{file = "psutil-5.9.1-cp37-cp37m-win_amd64.whl", hash = "sha256:b14ee12da9338f5e5b3a3ef7ca58b3cba30f5b66f7662159762932e6d0b8f680"},
{file = "psutil-5.9.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:19f36c16012ba9cfc742604df189f2f28d2720e23ff7d1e81602dbe066be9fd1"},
{file = "psutil-5.9.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:944c4b4b82dc4a1b805329c980f270f170fdc9945464223f2ec8e57563139cf4"},
{file = "psutil-5.9.1-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4b6750a73a9c4a4e689490ccb862d53c7b976a2a35c4e1846d049dcc3f17d83b"},
{file = "psutil-5.9.1-cp38-cp38-win32.whl", hash = "sha256:a8746bfe4e8f659528c5c7e9af5090c5a7d252f32b2e859c584ef7d8efb1e689"},
{file = "psutil-5.9.1-cp38-cp38-win_amd64.whl", hash = "sha256:79c9108d9aa7fa6fba6e668b61b82facc067a6b81517cab34d07a84aa89f3df0"},
{file = "psutil-5.9.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:28976df6c64ddd6320d281128817f32c29b539a52bdae5e192537bc338a9ec81"},
{file = "psutil-5.9.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b88f75005586131276634027f4219d06e0561292be8bd6bc7f2f00bdabd63c4e"},
{file = "psutil-5.9.1-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:645bd4f7bb5b8633803e0b6746ff1628724668681a434482546887d22c7a9537"},
{file = "psutil-5.9.1-cp39-cp39-win32.whl", hash = "sha256:32c52611756096ae91f5d1499fe6c53b86f4a9ada147ee42db4991ba1520e574"},
{file = "psutil-5.9.1-cp39-cp39-win_amd64.whl", hash = "sha256:f65f9a46d984b8cd9b3750c2bdb419b2996895b005aefa6cbaba9a143b1ce2c5"},
{file = "psutil-5.9.1.tar.gz", hash = "sha256:57f1819b5d9e95cdfb0c881a8a5b7d542ed0b7c522d575706a80bedc848c8954"},
]
ptyprocess = [
{file = "ptyprocess-0.7.0-py2.py3-none-any.whl", hash = "sha256:4b41f3967fce3af57cc7e94b888626c18bf37a083e3651ca8feeb66d492fef35"},
{file = "ptyprocess-0.7.0.tar.gz", hash = "sha256:5c5d0a3b48ceee0b48485e0c26037c0acd7d29765ca3fbb5cb3831d347423220"},
]
pure-eval = [
{file = "pure_eval-0.2.2-py3-none-any.whl", hash = "sha256:01eaab343580944bc56080ebe0a674b39ec44a945e6d09ba7db3cb8cec289350"},
{file = "pure_eval-0.2.2.tar.gz", hash = "sha256:2b45320af6dfaa1750f543d714b6d1c520a1688dec6fd24d339063ce0aaa9ac3"},
]
py = [
{file = "py-1.11.0-py2.py3-none-any.whl", hash = "sha256:607c53218732647dff4acdfcd50cb62615cedf612e72d1724fb1a0cc6405b378"},
{file = "py-1.11.0.tar.gz", hash = "sha256:51c75c4126074b472f746a24399ad32f6053d1b34b68d2fa41e558e6f4a98719"},
]
pyasn1 = [
{file = "pyasn1-0.4.8-py2.4.egg", hash = "sha256:fec3e9d8e36808a28efb59b489e4528c10ad0f480e57dcc32b4de5c9d8c9fdf3"},
{file = "pyasn1-0.4.8-py2.5.egg", hash = "sha256:0458773cfe65b153891ac249bcf1b5f8f320b7c2ce462151f8fa74de8934becf"},
{file = "pyasn1-0.4.8-py2.6.egg", hash = "sha256:5c9414dcfede6e441f7e8f81b43b34e834731003427e5b09e4e00e3172a10f00"},
{file = "pyasn1-0.4.8-py2.7.egg", hash = "sha256:6e7545f1a61025a4e58bb336952c5061697da694db1cae97b116e9c46abcf7c8"},
{file = "pyasn1-0.4.8-py2.py3-none-any.whl", hash = "sha256:39c7e2ec30515947ff4e87fb6f456dfc6e84857d34be479c9d4a4ba4bf46aa5d"},
{file = "pyasn1-0.4.8-py3.1.egg", hash = "sha256:78fa6da68ed2727915c4767bb386ab32cdba863caa7dbe473eaae45f9959da86"},
{file = "pyasn1-0.4.8-py3.2.egg", hash = "sha256:08c3c53b75eaa48d71cf8c710312316392ed40899cb34710d092e96745a358b7"},
{file = "pyasn1-0.4.8-py3.3.egg", hash = "sha256:03840c999ba71680a131cfaee6fab142e1ed9bbd9c693e285cc6aca0d555e576"},
{file = "pyasn1-0.4.8-py3.4.egg", hash = "sha256:7ab8a544af125fb704feadb008c99a88805126fb525280b2270bb25cc1d78a12"},
{file = "pyasn1-0.4.8-py3.5.egg", hash = "sha256:e89bf84b5437b532b0803ba5c9a5e054d21fec423a89952a74f87fa2c9b7bce2"},
{file = "pyasn1-0.4.8-py3.6.egg", hash = "sha256:014c0e9976956a08139dc0712ae195324a75e142284d5f87f1a87ee1b068a359"},
{file = "pyasn1-0.4.8-py3.7.egg", hash = "sha256:99fcc3c8d804d1bc6d9a099921e39d827026409a58f2a720dcdb89374ea0c776"},
{file = "pyasn1-0.4.8.tar.gz", hash = "sha256:aef77c9fb94a3ac588e87841208bdec464471d9871bd5050a287cc9a475cd0ba"},
]
pyasn1-modules = [
{file = "pyasn1-modules-0.2.8.tar.gz", hash = "sha256:905f84c712230b2c592c19470d3ca8d552de726050d1d1716282a1f6146be65e"},
{file = "pyasn1_modules-0.2.8-py2.4.egg", hash = "sha256:0fe1b68d1e486a1ed5473f1302bd991c1611d319bba158e98b106ff86e1d7199"},
{file = "pyasn1_modules-0.2.8-py2.5.egg", hash = "sha256:fe0644d9ab041506b62782e92b06b8c68cca799e1a9636ec398675459e031405"},
{file = "pyasn1_modules-0.2.8-py2.6.egg", hash = "sha256:a99324196732f53093a84c4369c996713eb8c89d360a496b599fb1a9c47fc3eb"},
{file = "pyasn1_modules-0.2.8-py2.7.egg", hash = "sha256:0845a5582f6a02bb3e1bde9ecfc4bfcae6ec3210dd270522fee602365430c3f8"},
{file = "pyasn1_modules-0.2.8-py2.py3-none-any.whl", hash = "sha256:a50b808ffeb97cb3601dd25981f6b016cbb3d31fbf57a8b8a87428e6158d0c74"},
{file = "pyasn1_modules-0.2.8-py3.1.egg", hash = "sha256:f39edd8c4ecaa4556e989147ebf219227e2cd2e8a43c7e7fcb1f1c18c5fd6a3d"},
{file = "pyasn1_modules-0.2.8-py3.2.egg", hash = "sha256:b80486a6c77252ea3a3e9b1e360bc9cf28eaac41263d173c032581ad2f20fe45"},
{file = "pyasn1_modules-0.2.8-py3.3.egg", hash = "sha256:65cebbaffc913f4fe9e4808735c95ea22d7a7775646ab690518c056784bc21b4"},
{file = "pyasn1_modules-0.2.8-py3.4.egg", hash = "sha256:15b7c67fabc7fc240d87fb9aabf999cf82311a6d6fb2c70d00d3d0604878c811"},
{file = "pyasn1_modules-0.2.8-py3.5.egg", hash = "sha256:426edb7a5e8879f1ec54a1864f16b882c2837bfd06eee62f2c982315ee2473ed"},
{file = "pyasn1_modules-0.2.8-py3.6.egg", hash = "sha256:cbac4bc38d117f2a49aeedec4407d23e8866ea4ac27ff2cf7fb3e5b570df19e0"},
{file = "pyasn1_modules-0.2.8-py3.7.egg", hash = "sha256:c29a5e5cc7a3f05926aff34e097e84f8589cd790ce0ed41b67aed6857b26aafd"},
]
pycodestyle = [
{file = "pycodestyle-2.8.0-py2.py3-none-any.whl", hash = "sha256:720f8b39dde8b293825e7ff02c475f3077124006db4f440dcbc9a20b76548a20"},
{file = "pycodestyle-2.8.0.tar.gz", hash = "sha256:eddd5847ef438ea1c7870ca7eb78a9d47ce0cdb4851a5523949f2601d0cbbe7f"},
]
pycparser = [
{file = "pycparser-2.21-py2.py3-none-any.whl", hash = "sha256:8ee45429555515e1f6b185e78100aea234072576aa43ab53aefcae078162fca9"},
{file = "pycparser-2.21.tar.gz", hash = "sha256:e644fdec12f7872f86c58ff790da456218b10f863970249516d60a5eaca77206"},
]
pydata-sphinx-theme = [
{file = "pydata_sphinx_theme-0.9.0-py3-none-any.whl", hash = "sha256:b22b442a6d6437e5eaf0a1f057169ffcb31eaa9f10be7d5481a125e735c71c12"},
{file = "pydata_sphinx_theme-0.9.0.tar.gz", hash = "sha256:03598a86915b596f4bf80bef79a4d33276a83e670bf360def699dbb9f99dc57a"},
]
pydot = [
{file = "pydot-1.4.2-py2.py3-none-any.whl", hash = "sha256:66c98190c65b8d2e2382a441b4c0edfdb4f4c025ef9cb9874de478fb0793a451"},
{file = "pydot-1.4.2.tar.gz", hash = "sha256:248081a39bcb56784deb018977e428605c1c758f10897a339fce1dd728ff007d"},
]
pydotplus = [
{file = "pydotplus-2.0.2.tar.gz", hash = "sha256:91e85e9ee9b85d2391ead7d635e3d9c7f5f44fd60a60e59b13e2403fa66505c4"},
]
pyflakes = [
{file = "pyflakes-2.4.0-py2.py3-none-any.whl", hash = "sha256:3bb3a3f256f4b7968c9c788781e4ff07dce46bdf12339dcda61053375426ee2e"},
{file = "pyflakes-2.4.0.tar.gz", hash = "sha256:05a85c2872edf37a4ed30b0cce2f6093e1d0581f8c19d7393122da7e25b2b24c"},
]
pygam = [
{file = "pygam-0.8.0-py2.py3-none-any.whl", hash = "sha256:198bd478700520b7c399cc4bcbc011e46850969c32fb09ef0b7a4bbb14e842a5"},
{file = "pygam-0.8.0.tar.gz", hash = "sha256:5cae01aea8b2fede72a6da0aba1490213af54b3476745666af26bbe700479166"},
]
pygments = [
{file = "Pygments-2.12.0-py3-none-any.whl", hash = "sha256:dc9c10fb40944260f6ed4c688ece0cd2048414940f1cea51b8b226318411c519"},
{file = "Pygments-2.12.0.tar.gz", hash = "sha256:5eb116118f9612ff1ee89ac96437bb6b49e8f04d8a13b514ba26f620208e26eb"},
]
pygraphviz = [
{file = "pygraphviz-1.9.zip", hash = "sha256:fa18f7c6cea28341a4e466ed0cf05682b0a68288afe8dd7c9426782f7c1ae01c"},
]
pyparsing = [
{file = "pyparsing-3.0.9-py3-none-any.whl", hash = "sha256:5026bae9a10eeaefb61dab2f09052b9f4307d44aee4eda64b309723d8d206bbc"},
{file = "pyparsing-3.0.9.tar.gz", hash = "sha256:2b020ecf7d21b687f219b71ecad3631f644a47f01403fa1d1036b0c6416d70fb"},
]
pyro-api = [
{file = "pyro-api-0.1.2.tar.gz", hash = "sha256:a1b900d9580aa1c2fab3b123ab7ff33413744da7c5f440bd4aadc4d40d14d920"},
{file = "pyro_api-0.1.2-py3-none-any.whl", hash = "sha256:10e0e42e9e4401ce464dab79c870e50dfb4f413d326fa777f3582928ef9caf8f"},
]
pyro-ppl = [
{file = "pyro-ppl-1.8.1.tar.gz", hash = "sha256:d7c049eb2e7485a612b4dd99c24c309cc860c7cbc6b1973387034f5436d1c8d6"},
{file = "pyro_ppl-1.8.1-py3-none-any.whl", hash = "sha256:ca01ab4565eb9a1af4a60dbc481da5cb6f5fe5a72efa19e83638e03683efbca6"},
]
pyrsistent = [
{file = "pyrsistent-0.18.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:df46c854f490f81210870e509818b729db4488e1f30f2a1ce1698b2295a878d1"},
{file = "pyrsistent-0.18.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d45866ececf4a5fff8742c25722da6d4c9e180daa7b405dc0a2a2790d668c26"},
{file = "pyrsistent-0.18.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4ed6784ceac462a7d6fcb7e9b663e93b9a6fb373b7f43594f9ff68875788e01e"},
{file = "pyrsistent-0.18.1-cp310-cp310-win32.whl", hash = "sha256:e4f3149fd5eb9b285d6bfb54d2e5173f6a116fe19172686797c056672689daf6"},
{file = "pyrsistent-0.18.1-cp310-cp310-win_amd64.whl", hash = "sha256:636ce2dc235046ccd3d8c56a7ad54e99d5c1cd0ef07d9ae847306c91d11b5fec"},
{file = "pyrsistent-0.18.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:e92a52c166426efbe0d1ec1332ee9119b6d32fc1f0bbfd55d5c1088070e7fc1b"},
{file = "pyrsistent-0.18.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d7a096646eab884bf8bed965bad63ea327e0d0c38989fc83c5ea7b8a87037bfc"},
{file = "pyrsistent-0.18.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cdfd2c361b8a8e5d9499b9082b501c452ade8bbf42aef97ea04854f4a3f43b22"},
{file = "pyrsistent-0.18.1-cp37-cp37m-win32.whl", hash = "sha256:7ec335fc998faa4febe75cc5268a9eac0478b3f681602c1f27befaf2a1abe1d8"},
{file = "pyrsistent-0.18.1-cp37-cp37m-win_amd64.whl", hash = "sha256:6455fc599df93d1f60e1c5c4fe471499f08d190d57eca040c0ea182301321286"},
{file = "pyrsistent-0.18.1-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:fd8da6d0124efa2f67d86fa70c851022f87c98e205f0594e1fae044e7119a5a6"},
{file = "pyrsistent-0.18.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7bfe2388663fd18bd8ce7db2c91c7400bf3e1a9e8bd7d63bf7e77d39051b85ec"},
{file = "pyrsistent-0.18.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0e3e1fcc45199df76053026a51cc59ab2ea3fc7c094c6627e93b7b44cdae2c8c"},
{file = "pyrsistent-0.18.1-cp38-cp38-win32.whl", hash = "sha256:b568f35ad53a7b07ed9b1b2bae09eb15cdd671a5ba5d2c66caee40dbf91c68ca"},
{file = "pyrsistent-0.18.1-cp38-cp38-win_amd64.whl", hash = "sha256:d1b96547410f76078eaf66d282ddca2e4baae8964364abb4f4dcdde855cd123a"},
{file = "pyrsistent-0.18.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:f87cc2863ef33c709e237d4b5f4502a62a00fab450c9e020892e8e2ede5847f5"},
{file = "pyrsistent-0.18.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6bc66318fb7ee012071b2792024564973ecc80e9522842eb4e17743604b5e045"},
{file = "pyrsistent-0.18.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:914474c9f1d93080338ace89cb2acee74f4f666fb0424896fcfb8d86058bf17c"},
{file = "pyrsistent-0.18.1-cp39-cp39-win32.whl", hash = "sha256:1b34eedd6812bf4d33814fca1b66005805d3640ce53140ab8bbb1e2651b0d9bc"},
{file = "pyrsistent-0.18.1-cp39-cp39-win_amd64.whl", hash = "sha256:e24a828f57e0c337c8d8bb9f6b12f09dfdf0273da25fda9e314f0b684b415a07"},
{file = "pyrsistent-0.18.1.tar.gz", hash = "sha256:d4d61f8b993a7255ba714df3aca52700f8125289f84f704cf80916517c46eb96"},
]
pytest = [
{file = "pytest-7.1.2-py3-none-any.whl", hash = "sha256:13d0e3ccfc2b6e26be000cb6568c832ba67ba32e719443bfe725814d3c42433c"},
{file = "pytest-7.1.2.tar.gz", hash = "sha256:a06a0425453864a270bc45e71f783330a7428defb4230fb5e6a731fde06ecd45"},
]
python-dateutil = [
{file = "python-dateutil-2.8.2.tar.gz", hash = "sha256:0123cacc1627ae19ddf3c27a5de5bd67ee4586fbdd6440d9748f8abb483d3e86"},
{file = "python_dateutil-2.8.2-py2.py3-none-any.whl", hash = "sha256:961d03dc3453ebbc59dbdea9e4e11c5651520a876d0f4db161e8674aae935da9"},
]
python-utils = [
{file = "python-utils-3.3.3.tar.gz", hash = "sha256:3b1c8b706e40e91280eec5fa72ea730880a166cee99afa5555b863d55664478d"},
{file = "python_utils-3.3.3-py2.py3-none-any.whl", hash = "sha256:5cb9cf295018202fb4d6abdd694a33a7f08bc0ce1bf1eab8cce80ab2cde35c07"},
]
pytz = [
{file = "pytz-2022.2-py2.py3-none-any.whl", hash = "sha256:d9b245e63af49c4e51afdec5402f56b99c0cb483a84a12bb8b7db980386baade"},
{file = "pytz-2022.2.tar.gz", hash = "sha256:bc824559e43e8ab983426a49525079d186b25372ff63aa3430ccd527d95edc3a"},
]
pytz-deprecation-shim = [
{file = "pytz_deprecation_shim-0.1.0.post0-py2.py3-none-any.whl", hash = "sha256:8314c9692a636c8eb3bda879b9f119e350e93223ae83e70e80c31675a0fdc1a6"},
{file = "pytz_deprecation_shim-0.1.0.post0.tar.gz", hash = "sha256:af097bae1b616dde5c5744441e2ddc69e74dfdcb0c263129610d85b87445a59d"},
]
pywin32 = [
{file = "pywin32-304-cp310-cp310-win32.whl", hash = "sha256:3c7bacf5e24298c86314f03fa20e16558a4e4138fc34615d7de4070c23e65af3"},
{file = "pywin32-304-cp310-cp310-win_amd64.whl", hash = "sha256:4f32145913a2447736dad62495199a8e280a77a0ca662daa2332acf849f0be48"},
{file = "pywin32-304-cp310-cp310-win_arm64.whl", hash = "sha256:d3ee45adff48e0551d1aa60d2ec066fec006083b791f5c3527c40cd8aefac71f"},
{file = "pywin32-304-cp311-cp311-win32.whl", hash = "sha256:30c53d6ce44c12a316a06c153ea74152d3b1342610f1b99d40ba2795e5af0269"},
{file = "pywin32-304-cp311-cp311-win_amd64.whl", hash = "sha256:7ffa0c0fa4ae4077e8b8aa73800540ef8c24530057768c3ac57c609f99a14fd4"},
{file = "pywin32-304-cp311-cp311-win_arm64.whl", hash = "sha256:cbbe34dad39bdbaa2889a424d28752f1b4971939b14b1bb48cbf0182a3bcfc43"},
{file = "pywin32-304-cp36-cp36m-win32.whl", hash = "sha256:be253e7b14bc601718f014d2832e4c18a5b023cbe72db826da63df76b77507a1"},
{file = "pywin32-304-cp36-cp36m-win_amd64.whl", hash = "sha256:de9827c23321dcf43d2f288f09f3b6d772fee11e809015bdae9e69fe13213988"},
{file = "pywin32-304-cp37-cp37m-win32.whl", hash = "sha256:f64c0377cf01b61bd5e76c25e1480ca8ab3b73f0c4add50538d332afdf8f69c5"},
{file = "pywin32-304-cp37-cp37m-win_amd64.whl", hash = "sha256:bb2ea2aa81e96eee6a6b79d87e1d1648d3f8b87f9a64499e0b92b30d141e76df"},
{file = "pywin32-304-cp38-cp38-win32.whl", hash = "sha256:94037b5259701988954931333aafd39cf897e990852115656b014ce72e052e96"},
{file = "pywin32-304-cp38-cp38-win_amd64.whl", hash = "sha256:ead865a2e179b30fb717831f73cf4373401fc62fbc3455a0889a7ddac848f83e"},
{file = "pywin32-304-cp39-cp39-win32.whl", hash = "sha256:25746d841201fd9f96b648a248f731c1dec851c9a08b8e33da8b56148e4c65cc"},
{file = "pywin32-304-cp39-cp39-win_amd64.whl", hash = "sha256:d24a3382f013b21aa24a5cfbfad5a2cd9926610c0affde3e8ab5b3d7dbcf4ac9"},
]
pywin32-ctypes = [
{file = "pywin32-ctypes-0.2.0.tar.gz", hash = "sha256:24ffc3b341d457d48e8922352130cf2644024a4ff09762a2261fd34c36ee5942"},
{file = "pywin32_ctypes-0.2.0-py2.py3-none-any.whl", hash = "sha256:9dc2d991b3479cc2df15930958b674a48a227d5361d413827a4cfd0b5876fc98"},
]
pywinpty = [
{file = "pywinpty-2.0.7-cp310-none-win_amd64.whl", hash = "sha256:d56361ed2bd3395347882a7a4e6246359e745a233e89c91786ab3d9421323c17"},
{file = "pywinpty-2.0.7-cp37-none-win_amd64.whl", hash = "sha256:2d62ede3ed10feb0901b3b4667201766a741b6a2c69f27be623ba9fe9348447b"},
{file = "pywinpty-2.0.7-cp38-none-win_amd64.whl", hash = "sha256:c3b7e6a2f0e5f86e0dc5cb5e4fec7de19adacc6900232e4a48a2ecf04bae447f"},
{file = "pywinpty-2.0.7-cp39-none-win_amd64.whl", hash = "sha256:80a6713a586401c2a19efd2969ffd019eb85f18442611a3880e3d618887d2f84"},
{file = "pywinpty-2.0.7.tar.gz", hash = "sha256:f52b2e51c46dac40708ede1d42577f3ddb9d7cf8acaa36c8e27b3d3b975f4c95"},
]
pyzmq = [
{file = "pyzmq-23.2.1-cp310-cp310-macosx_10_15_universal2.whl", hash = "sha256:a3fd44b5046d247e7f0f1660bcafe7b5fb0db55d0934c05dd57dda9e1f823ce7"},
{file = "pyzmq-23.2.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:2141e6798d5981be04c08996d27962086a1aa3ea536fe9cf7e89817fd4523f86"},
{file = "pyzmq-23.2.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9a39ddb0431a68954bd318b923230fa5b649c9c62b0e8340388820c5f1b15bd2"},
{file = "pyzmq-23.2.1-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e06747014a5ad1b28cebf5bc1ddcdaccfb44e9b441d35e6feb1286c8a72e54be"},
{file = "pyzmq-23.2.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7e0113d70b095339e99bb522fe7294f5ae6a7f3b2b8f52f659469a74b5cc7661"},
{file = "pyzmq-23.2.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:71b32a1e827bdcbf73750e60370d3b07685816ff3d8695f450f0f8c3226503f8"},
{file = "pyzmq-23.2.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:55568a020ad2cae9ae36da6058e7ca332a56df968f601cbdb7cf6efb2a77579a"},
{file = "pyzmq-23.2.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:8c02a0cd39dc01659b3d6cb70bb3a41aebd9885fd78239acdd8d9c91351c4568"},
{file = "pyzmq-23.2.1-cp310-cp310-win32.whl", hash = "sha256:e1fe30bcd5aea5948c42685fad910cd285eacb2518ea4dc6c170d6b535bee95d"},
{file = "pyzmq-23.2.1-cp310-cp310-win_amd64.whl", hash = "sha256:650389bbfca73955b262b2230423d89992f38ec48033307ae80e700eaa2fbb63"},
{file = "pyzmq-23.2.1-cp311-cp311-macosx_10_15_universal2.whl", hash = "sha256:e753eee6d3b93c5354e8ba0a1d62956ee49355f0a36e00570823ef64e66183f5"},
{file = "pyzmq-23.2.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:f07016e3cf088dbfc6e7c5a7b3f540db5c23b0190d539e4fd3e2b5e6beffa4b5"},
{file = "pyzmq-23.2.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4805af9614b0b41b7e57d17673459facf85604dac502a5a9244f6e8c9a4de658"},
{file = "pyzmq-23.2.1-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:39dd252b683816935702825e5bf775df16090619ced9bb4ba68c2d0b6f0c9b18"},
{file = "pyzmq-23.2.1-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:84678153432241bcdca2210cf4ff83560b200556867aea913ffbb960f5d5f340"},
{file = "pyzmq-23.2.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:90d88f9d9a2ae6cfb1dc4ea2d1710cdf6456bc1b9a06dd1bb485c5d298f2517e"},
{file = "pyzmq-23.2.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:794871988c34727c7f79bdfe2546e6854ae1fa2e1feb382784f23a9c6c63ecb3"},
{file = "pyzmq-23.2.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:c56b1a62a1fb87565343c57b6743fd5da6e138b8c6562361d7d9b5ce4acf399a"},
{file = "pyzmq-23.2.1-cp311-cp311-win32.whl", hash = "sha256:c3ebf1668664d20c8f7d468955f18379b7d1f7bc8946b13243d050fa3888c7ff"},
{file = "pyzmq-23.2.1-cp311-cp311-win_amd64.whl", hash = "sha256:ec9803aca9491fd6f0d853d2a6147f19f8deaaa23b1b713d05c5d09e56ea7142"},
{file = "pyzmq-23.2.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:385609812eafd9970c3752c51f2f6c4f224807e3e441bcfd8c8273877d00c8a8"},
{file = "pyzmq-23.2.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b861db65f6b8906c8d6db51dde2448f266f0c66bf28db2c37aea50f58a849859"},
{file = "pyzmq-23.2.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:6b1e79bba24f6df1712e3188d5c32c480d8eda03e8ecff44dc8ecb0805fa62f3"},
{file = "pyzmq-23.2.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:8dc66f109a245653b19df0f44a5af7a3f14cb8ad6c780ead506158a057bd36ce"},
{file = "pyzmq-23.2.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:b815991c7d024bf461f358ad871f2be1135576274caed5749c4828859e40354e"},
{file = "pyzmq-23.2.1-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:29b74774a0bfd3c4d98ac853f0bdca55bd9ec89d5b0def5486407cca54472ef8"},
{file = "pyzmq-23.2.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:4bb798bef181648827019001f6be43e1c48b34b477763b37a8d27d8c06d197b8"},
{file = "pyzmq-23.2.1-cp36-cp36m-win32.whl", hash = "sha256:565bd5ab81f6964fc4067ccf2e00877ad0fa917308975694bbb54378389215f8"},
{file = "pyzmq-23.2.1-cp36-cp36m-win_amd64.whl", hash = "sha256:1f368a82b29f80071781b20663c0fc0c8f6b13273f9f5abe1526af939534f90f"},
{file = "pyzmq-23.2.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:c9cfaf530e6a7ff65f0afe275e99f983f68b54dfb23ea401f0bc297a632766b6"},
{file = "pyzmq-23.2.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5c558b50402fca1acc94329c5d8f12aa429738904a5cfb32b9ed3c61235221bb"},
{file = "pyzmq-23.2.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:20bafc4095eab00f41a510579363a3f5e1f5c69d7ee10f1d88895c4df0259183"},
{file = "pyzmq-23.2.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:f619fd38fc2641abfb53cca719c165182500600b82c695cc548a0f05f764be05"},
{file = "pyzmq-23.2.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:044447ae4b2016a6b8697571fd633f799f860b19b76c4a2fd9b1140d52ee6745"},
{file = "pyzmq-23.2.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:49d30ba7074f469e8167917abf9eb854c6503ae10153034a6d4df33618f1db5f"},
{file = "pyzmq-23.2.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:48400b96788cdaca647021bf19a9cd668384f46e4d9c55cf045bdd17f65299c8"},
{file = "pyzmq-23.2.1-cp37-cp37m-win32.whl", hash = "sha256:8a68f57b7a3f7b6b52ada79876be1efb97c8c0952423436e84d70cc139f16f0d"},
{file = "pyzmq-23.2.1-cp37-cp37m-win_amd64.whl", hash = "sha256:9e5bf6e7239fc9687239de7a283aa8b801ab85371116045b33ae20132a1325d6"},
{file = "pyzmq-23.2.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:ffc6b1623d0f9affb351db4ca61f432dca3628a5ee015f9bf2bfbe9c6836881c"},
{file = "pyzmq-23.2.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:4d6f110c56f7d5b4d64dde3a382ae61b6d48174e30742859d8e971b18b6c9e5c"},
{file = "pyzmq-23.2.1-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:9269fbfe3a4eb2009199120861c4571ef1655fdf6951c3e7f233567c94e8c602"},
{file = "pyzmq-23.2.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:12e62ff0d5223ec09b597ab6d73858b9f64a51221399f3cb08aa495e1dff7935"},
{file = "pyzmq-23.2.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:6fd5d0d50cbcf4bc376861529a907bed026a4cbe8c22a500ff8243231ef02433"},
{file = "pyzmq-23.2.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:9d0ab2936085c85a1fc6f9fd8f89d5235ae99b051e90ec5baa5e73ad44346e1f"},
{file = "pyzmq-23.2.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:022cf5ea7bcaa8a06a03c2706e0ae66904b6138b2155577cd34c64bc7cc637ab"},
{file = "pyzmq-23.2.1-cp38-cp38-win32.whl", hash = "sha256:28dbdb90b2f6b131f8f10e6081012e4e25234213433420e67e0c1162de537113"},
{file = "pyzmq-23.2.1-cp38-cp38-win_amd64.whl", hash = "sha256:10d1910ec381b851aeb024a042a13db178cb1edf125e76a4e9d2548ad103aadb"},
{file = "pyzmq-23.2.1-cp39-cp39-macosx_10_15_universal2.whl", hash = "sha256:99a5a77a10863493a1ee8dece02578c6b32025fb3afff91b40476bc489e81648"},
{file = "pyzmq-23.2.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:aecd6ceaccc4b594e0092d6513ef3f1c0fa678dd89f86bb8ff1a47014b8fca35"},
{file = "pyzmq-23.2.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:415ff62ac525d9add1e3550430a09b9928d2d24a20cc4ce809e67caac41219ab"},
{file = "pyzmq-23.2.1-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:67975a9e1237b9ccc78f457bef17691bbdd2055a9d26e81ee914ba376846d0ce"},
{file = "pyzmq-23.2.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:38e106b64bad744fe469dc3dd864f2764d66399178c1bf39d45294cc7980f14f"},
{file = "pyzmq-23.2.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:8c842109d31a9281d678f668629241c405928afbebd913c48a5a8e7aee61f63d"},
{file = "pyzmq-23.2.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:fefdf9b685fda4141b95ebec975946076a5e0723ff70b037032b2085c5317684"},
{file = "pyzmq-23.2.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:79a87831b47a9f6161ad23fa5e89d5469dc585abc49f90b9b07fea8905ae1234"},
{file = "pyzmq-23.2.1-cp39-cp39-win32.whl", hash = "sha256:342ca3077f47ec2ee41b9825142b614e03e026347167cbc72a59b618c4f6106c"},
{file = "pyzmq-23.2.1-cp39-cp39-win_amd64.whl", hash = "sha256:5e05492be125dce279721d6b54fd1b956546ecc4bcdfcf8e7b4c413bc0874c10"},
{file = "pyzmq-23.2.1-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:07ed8aaf7ffe150af873269690cc654ffeca7491f62aae0f3821baa181f8d5fe"},
{file = "pyzmq-23.2.1-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:ad28ddb40db8e450d7d4bf8a1d765d3f87b63b10e7e9a825a3c130c6371a8c03"},
{file = "pyzmq-23.2.1-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:2f67b63f53c6994d601404fd1a329e6d940ac3dd1d92946a93b2b9c70df67b9f"},
{file = "pyzmq-23.2.1-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c890309296f53f9aa32ffcfc51d805705e1982bffd27c9692a8f1e1b8de279f4"},
{file = "pyzmq-23.2.1-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:624fd38071a817644acdae075b92a23ea0bdd126a58148288e8284d23ec361ce"},
{file = "pyzmq-23.2.1-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:a114992a193577cb62233abf8cb2832970f9975805a64740e325d2f895e7f85a"},
{file = "pyzmq-23.2.1-pp38-pypy38_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:c780acddd2934c6831ff832ecbf78a45a7b62d4eb216480f863854a8b7d54fa7"},
{file = "pyzmq-23.2.1-pp38-pypy38_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:d904f6595acfaaf99a1a61881fea068500c40374d263e5e073aa4005e5f9c28a"},
{file = "pyzmq-23.2.1-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:929d548b74c0f82f7f95b54e4a43f9e4ce2523cfb8a54d3f7141e45652304b2a"},
{file = "pyzmq-23.2.1-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:f392cbea531b7142d1958c0d4a0c9c8d760dc451e5848d8dd3387804d3e3e62c"},
{file = "pyzmq-23.2.1-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:a0f09d85c45f58aa8e715b42f8b26beba68b3b63a8f7049113478aca26efbc30"},
{file = "pyzmq-23.2.1-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:23e708fbfdf4ee3107422b69ca65da1b9f056b431fc0888096a8c1d6cd908e8f"},
{file = "pyzmq-23.2.1-pp39-pypy39_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:35e635343ff367f697d00fa1484262bb68e36bc74c9b80737eac5a1e04c4e1b1"},
{file = "pyzmq-23.2.1-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:efb9e38b2a590282704269585de7eb33bf43dc294cad092e1b172e23d4c217e5"},
{file = "pyzmq-23.2.1-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:407f909c4e8fde62fbdad9ebd448319792258cc0550c2815567a4d9d8d9e6d18"},
{file = "pyzmq-23.2.1.tar.gz", hash = "sha256:2b381aa867ece7d0a82f30a0c7f3d4387b7cf2e0697e33efaa5bed6c5784abcd"},
]
qtconsole = [
{file = "qtconsole-5.3.1-py3-none-any.whl", hash = "sha256:d364592d7ede3257f1e17fcdbfd339c26e2cc638ca4fa4ee56a724e26ea13c81"},
{file = "qtconsole-5.3.1.tar.gz", hash = "sha256:b73723fac43938b684dcb237a88510dc7721c43a726cea8ade179a2927c0a2f3"},
]
qtpy = [
{file = "QtPy-2.2.0-py3-none-any.whl", hash = "sha256:d283cfba378b0dbe36a55b68aea8ee2f86cd6ccf06c023af25bbe705ffbb29e5"},
{file = "QtPy-2.2.0.tar.gz", hash = "sha256:d85f1b121f24a41ad26c55c446e66abdb7c528839f8c4f11f156ec4541903914"},
]
readme-renderer = [
{file = "readme_renderer-36.0-py3-none-any.whl", hash = "sha256:2c37e472ca96755caba6cc58bcbf673a5574bc033385a2ac91d85dfef2799876"},
{file = "readme_renderer-36.0.tar.gz", hash = "sha256:f71aeef9a588fcbed1f4cc001ba611370e94a0cd27c75b1140537618ec78f0a2"},
]
requests = [
{file = "requests-2.28.1-py3-none-any.whl", hash = "sha256:8fefa2a1a1365bf5520aac41836fbee479da67864514bdb821f31ce07ce65349"},
{file = "requests-2.28.1.tar.gz", hash = "sha256:7c5599b102feddaa661c826c56ab4fee28bfd17f5abca1ebbe3e7f19d7c97983"},
]
requests-oauthlib = [
{file = "requests-oauthlib-1.3.1.tar.gz", hash = "sha256:75beac4a47881eeb94d5ea5d6ad31ef88856affe2332b9aafb52c6452ccf0d7a"},
{file = "requests_oauthlib-1.3.1-py2.py3-none-any.whl", hash = "sha256:2577c501a2fb8d05a304c09d090d6e47c306fef15809d102b327cf8364bddab5"},
]
requests-toolbelt = [
{file = "requests-toolbelt-0.9.1.tar.gz", hash = "sha256:968089d4584ad4ad7c171454f0a5c6dac23971e9472521ea3b6d49d610aa6fc0"},
{file = "requests_toolbelt-0.9.1-py2.py3-none-any.whl", hash = "sha256:380606e1d10dc85c3bd47bf5a6095f815ec007be7a8b69c878507068df059e6f"},
]
rfc3986 = [
{file = "rfc3986-2.0.0-py2.py3-none-any.whl", hash = "sha256:50b1502b60e289cb37883f3dfd34532b8873c7de9f49bb546641ce9cbd256ebd"},
{file = "rfc3986-2.0.0.tar.gz", hash = "sha256:97aacf9dbd4bfd829baad6e6309fa6573aaf1be3f6fa735c8ab05e46cecb261c"},
]
rich = [
{file = "rich-12.5.1-py3-none-any.whl", hash = "sha256:2eb4e6894cde1e017976d2975ac210ef515d7548bc595ba20e195fb9628acdeb"},
{file = "rich-12.5.1.tar.gz", hash = "sha256:63a5c5ce3673d3d5fbbf23cd87e11ab84b6b451436f1b7f19ec54b6bc36ed7ca"},
]
rpy2 = [
{file = "rpy2-3.5.3-cp310-cp310-macosx_10_15_x86_64.whl", hash = "sha256:9b553e92db1b6ffa51ea13f7b1b509816fdec2fc833ebe0b402ee81c8a56e491"},
{file = "rpy2-3.5.3-cp37-cp37m-macosx_10_15_x86_64.whl", hash = "sha256:9fb8f0914458c212dd2dd231ba0b7e849ca2b473f53da48553bc7e90976a9717"},
{file = "rpy2-3.5.3-cp38-cp38-macosx_10_15_x86_64.whl", hash = "sha256:b9c8275d7720c312c387d61a38afe22ef3165053ec707a5cb3a06b7ace5bb10c"},
{file = "rpy2-3.5.3-cp39-cp39-macosx_10_15_x86_64.whl", hash = "sha256:e1fda1489dd297fb8f44d33e6042e56a4517109b5606b2ac59bed9a2b7e29850"},
{file = "rpy2-3.5.3.tar.gz", hash = "sha256:53a092d48b44f46428fb30cb3155664d6d2f7af08ebc4c45df98df4c45a42ccb"},
]
rsa = [
{file = "rsa-4.9-py3-none-any.whl", hash = "sha256:90260d9058e514786967344d0ef75fa8727eed8a7d2e43ce9f4bcf1b536174f7"},
{file = "rsa-4.9.tar.gz", hash = "sha256:e38464a49c6c85d7f1351b0126661487a7e0a14a50f1675ec50eb34d4f20ef21"},
]
scikit-learn = [
{file = "scikit-learn-1.0.2.tar.gz", hash = "sha256:b5870959a5484b614f26d31ca4c17524b1b0317522199dc985c3b4256e030767"},
{file = "scikit_learn-1.0.2-cp310-cp310-macosx_10_13_x86_64.whl", hash = "sha256:da3c84694ff693b5b3194d8752ccf935a665b8b5edc33a283122f4273ca3e687"},
{file = "scikit_learn-1.0.2-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:75307d9ea39236cad7eea87143155eea24d48f93f3a2f9389c817f7019f00705"},
{file = "scikit_learn-1.0.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f14517e174bd7332f1cca2c959e704696a5e0ba246eb8763e6c24876d8710049"},
{file = "scikit_learn-1.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d9aac97e57c196206179f674f09bc6bffcd0284e2ba95b7fe0b402ac3f986023"},
{file = "scikit_learn-1.0.2-cp310-cp310-win_amd64.whl", hash = "sha256:d93d4c28370aea8a7cbf6015e8a669cd5d69f856cc2aa44e7a590fb805bb5583"},
{file = "scikit_learn-1.0.2-cp37-cp37m-macosx_10_13_x86_64.whl", hash = "sha256:85260fb430b795d806251dd3bb05e6f48cdc777ac31f2bcf2bc8bbed3270a8f5"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:a053a6a527c87c5c4fa7bf1ab2556fa16d8345cf99b6c5a19030a4a7cd8fd2c0"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:245c9b5a67445f6f044411e16a93a554edc1efdcce94d3fc0bc6a4b9ac30b752"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:158faf30684c92a78e12da19c73feff9641a928a8024b4fa5ec11d583f3d8a87"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:08ef968f6b72033c16c479c966bf37ccd49b06ea91b765e1cc27afefe723920b"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:16455ace947d8d9e5391435c2977178d0ff03a261571e67f627c8fee0f9d431a"},
{file = "scikit_learn-1.0.2-cp37-cp37m-win32.whl", hash = "sha256:2f3b453e0b149898577e301d27e098dfe1a36943f7bb0ad704d1e548efc3b448"},
{file = "scikit_learn-1.0.2-cp37-cp37m-win_amd64.whl", hash = "sha256:46f431ec59dead665e1370314dbebc99ead05e1c0a9df42f22d6a0e00044820f"},
{file = "scikit_learn-1.0.2-cp38-cp38-macosx_10_13_x86_64.whl", hash = "sha256:ff3fa8ea0e09e38677762afc6e14cad77b5e125b0ea70c9bba1992f02c93b028"},
{file = "scikit_learn-1.0.2-cp38-cp38-macosx_12_0_arm64.whl", hash = "sha256:9369b030e155f8188743eb4893ac17a27f81d28a884af460870c7c072f114243"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:7d6b2475f1c23a698b48515217eb26b45a6598c7b1840ba23b3c5acece658dbb"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:285db0352e635b9e3392b0b426bc48c3b485512d3b4ac3c7a44ec2a2ba061e66"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5cb33fe1dc6f73dc19e67b264dbb5dde2a0539b986435fdd78ed978c14654830"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b1391d1a6e2268485a63c3073111fe3ba6ec5145fc957481cfd0652be571226d"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bc3744dabc56b50bec73624aeca02e0def06b03cb287de26836e730659c5d29c"},
{file = "scikit_learn-1.0.2-cp38-cp38-win32.whl", hash = "sha256:a999c9f02ff9570c783069f1074f06fe7386ec65b84c983db5aeb8144356a355"},
{file = "scikit_learn-1.0.2-cp38-cp38-win_amd64.whl", hash = "sha256:7626a34eabbf370a638f32d1a3ad50526844ba58d63e3ab81ba91e2a7c6d037e"},
{file = "scikit_learn-1.0.2-cp39-cp39-macosx_10_13_x86_64.whl", hash = "sha256:a90b60048f9ffdd962d2ad2fb16367a87ac34d76e02550968719eb7b5716fd10"},
{file = "scikit_learn-1.0.2-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:7a93c1292799620df90348800d5ac06f3794c1316ca247525fa31169f6d25855"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:eabceab574f471de0b0eb3f2ecf2eee9f10b3106570481d007ed1c84ebf6d6a1"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:55f2f3a8414e14fbee03782f9fe16cca0f141d639d2b1c1a36779fa069e1db57"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:80095a1e4b93bd33261ef03b9bc86d6db649f988ea4dbcf7110d0cded8d7213d"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fa38a1b9b38ae1fad2863eff5e0d69608567453fdfc850c992e6e47eb764e846"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ff746a69ff2ef25f62b36338c615dd15954ddc3ab8e73530237dd73235e76d62"},
{file = "scikit_learn-1.0.2-cp39-cp39-win32.whl", hash = "sha256:e174242caecb11e4abf169342641778f68e1bfaba80cd18acd6bc84286b9a534"},
{file = "scikit_learn-1.0.2-cp39-cp39-win_amd64.whl", hash = "sha256:b54a62c6e318ddbfa7d22c383466d38d2ee770ebdb5ddb668d56a099f6eaf75f"},
]
scipy = [
{file = "scipy-1.9.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:0424d1bbbfa51d5ddaa16d067fd593863c9f2fb7c6840c32f8a08a8832f8e7a4"},
{file = "scipy-1.9.0-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:8f2232c9d9119ec356240255a715a289b3a33be828c3e4abac11fd052ce15b1e"},
{file = "scipy-1.9.0-cp310-cp310-macosx_12_0_universal2.macosx_10_9_x86_64.whl", hash = "sha256:e2004d2a3c397b26ca78e67c9d320153a1a9b71ae713ad33f4a3a3ab3d79cc65"},
{file = "scipy-1.9.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:45f0d6c0d6e55582d3b8f5c58ad4ca4259a02affb190f89f06c8cc02e21bba81"},
{file = "scipy-1.9.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:79dd7876614fc2869bf5d311ef33962d2066ea888bc66c80fd4fa80f8772e5a9"},
{file = "scipy-1.9.0-cp310-cp310-win_amd64.whl", hash = "sha256:10417935486b320d98536d732a58362e3d37e84add98c251e070c59a6bfe0863"},
{file = "scipy-1.9.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:adb6c438c6ef550e2bb83968e772b9690cb421f2c6073f9c2cb6af15ee538bc9"},
{file = "scipy-1.9.0-cp38-cp38-macosx_12_0_arm64.whl", hash = "sha256:8d541db2d441ef87afb60c4a2addb00c3af281633602a4967e733ef4b7050504"},
{file = "scipy-1.9.0-cp38-cp38-macosx_12_0_universal2.macosx_10_9_x86_64.whl", hash = "sha256:97a1f1e51ea30782d7baa8d0c52f72c3f9f05cb609cf1b990664231c5102bccd"},
{file = "scipy-1.9.0-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:16207622570af10f9e6a2cdc7da7a9660678852477adbcd056b6d1057a036fef"},
{file = "scipy-1.9.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bb687d245b6963673c639f318eea7e875d1ba147a67925586abed3d6f39bb7d8"},
{file = "scipy-1.9.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:73b704c5eea9be811919cae4caacf3180dd9212d9aed08477c1d2ba14900a9de"},
{file = "scipy-1.9.0-cp38-cp38-win32.whl", hash = "sha256:12005d30894e4fe7b247f7233ba0801a341f887b62e2eb99034dd6f2a8a33ad6"},
{file = "scipy-1.9.0-cp38-cp38-win_amd64.whl", hash = "sha256:fc58c3fcb8a724b703ffbc126afdca5a8353d4d5945d5c92db85617e165299e7"},
{file = "scipy-1.9.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:01c2015e132774feefe059d5354055fec6b751d7a7d70ad2cf5ce314e7426e2a"},
{file = "scipy-1.9.0-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:f7c3c578ff556333f3890c2df6c056955d53537bb176698359088108af73a58f"},
{file = "scipy-1.9.0-cp39-cp39-macosx_12_0_universal2.macosx_10_9_x86_64.whl", hash = "sha256:e2ac088ea4aa61115b96b47f5f3d94b3fa29554340b6629cd2bfe6b0521ee33b"},
{file = "scipy-1.9.0-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:5d1b9cf3771fd921f7213b4b886ab2606010343bb36259b544a816044576d69e"},
{file = "scipy-1.9.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d3a326673ac5afa9ef5613a61626b9ec15c8f7222b4ecd1ce0fd8fcba7b83c59"},
{file = "scipy-1.9.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:693b3fe2e7736ce0dbc72b4d933798eb6ca8ce51b8b934e3f547cc06f48b2afb"},
{file = "scipy-1.9.0-cp39-cp39-win32.whl", hash = "sha256:7bad16b91918bf3288089a78a4157e04892ea6475fb7a1d9bcdf32c30c8a3dba"},
{file = "scipy-1.9.0-cp39-cp39-win_amd64.whl", hash = "sha256:bd490f77f35800d5620f4d9af669e372d9a88db1f76ef219e1609cc4ecdd1a24"},
{file = "scipy-1.9.0.tar.gz", hash = "sha256:c0dfd7d2429452e7e94904c6a3af63cbaa3cf51b348bd9d35b42db7e9ad42791"},
]
seaborn = [
{file = "seaborn-0.11.2-py3-none-any.whl", hash = "sha256:85a6baa9b55f81a0623abddc4a26b334653ff4c6b18c418361de19dbba0ef283"},
{file = "seaborn-0.11.2.tar.gz", hash = "sha256:cf45e9286d40826864be0e3c066f98536982baf701a7caa386511792d61ff4f6"},
]
secretstorage = [
{file = "SecretStorage-3.3.2-py3-none-any.whl", hash = "sha256:755dc845b6ad76dcbcbc07ea3da75ae54bb1ea529eb72d15f83d26499a5df319"},
{file = "SecretStorage-3.3.2.tar.gz", hash = "sha256:0a8eb9645b320881c222e827c26f4cfcf55363e8b374a021981ef886657a912f"},
]
send2trash = [
{file = "Send2Trash-1.8.0-py3-none-any.whl", hash = "sha256:f20eaadfdb517eaca5ce077640cb261c7d2698385a6a0f072a4a5447fd49fa08"},
{file = "Send2Trash-1.8.0.tar.gz", hash = "sha256:d2c24762fd3759860a0aff155e45871447ea58d2be6bdd39b5c8f966a0c99c2d"},
]
setuptools-scm = [
{file = "setuptools_scm-6.4.2-py3-none-any.whl", hash = "sha256:acea13255093849de7ccb11af9e1fb8bde7067783450cee9ef7a93139bddf6d4"},
{file = "setuptools_scm-6.4.2.tar.gz", hash = "sha256:6833ac65c6ed9711a4d5d2266f8024cfa07c533a0e55f4c12f6eff280a5a9e30"},
]
shap = [
{file = "shap-0.40.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:8bb8b4c01bd33592412dae5246286f62efbb24ad774b63e59b8b16969b915b6d"},
{file = "shap-0.40.0-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:d2844acab55e18bcb3d691237a720301223a38805e6e43752e6717f3a8b2cc28"},
{file = "shap-0.40.0-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:e7dd3040b0ec91bc9f477a354973d231d3a6beebe2fa7a5c6a565a79ba7746e8"},
{file = "shap-0.40.0-cp36-cp36m-win32.whl", hash = "sha256:86ea1466244c7e0d0c5dd91d26a90e0b645f5c9d7066810462a921263463529b"},
{file = "shap-0.40.0-cp36-cp36m-win_amd64.whl", hash = "sha256:bbf0cfa30cd8c51f8830d3f25c3881b9949e062124cd0d0b3d8efdc7e0cf5136"},
{file = "shap-0.40.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:3d3c5ace8bd5222b455fa5650f9043146e19d80d701f95b25c4c5fb81f628547"},
{file = "shap-0.40.0-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:18b4ca36a43409b784dc76810f76aaa504c467eac17fa89ef5ee330cb460b2b7"},
{file = "shap-0.40.0-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:dbb1ec9b2c05c3939425529437c5f3cfba7a3929fed0e820fb84a42e82358cdd"},
{file = "shap-0.40.0-cp37-cp37m-win32.whl", hash = "sha256:0d12f7d86481afd000d5f144c10cadb31d52fb1f77f68659472d6f6d89f7843b"},
{file = "shap-0.40.0-cp37-cp37m-win_amd64.whl", hash = "sha256:dbd07e48fc7f4d5916f6cdd9dbb8d29b7711a265cc9beac92e7d4a4d9e738bc7"},
{file = "shap-0.40.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:399325caecc7306eb7de17ac19aa797abbf2fcda47d2bb4588d9492adb2dce65"},
{file = "shap-0.40.0-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:4ec50bd0aa24efe1add177371b8b62080484efb87c6dbcf321895c5a08cf68d6"},
{file = "shap-0.40.0-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:e2b5f2d3cac82de0c49afde6529bebb6d5b20334325640267bf25dce572175a1"},
{file = "shap-0.40.0-cp38-cp38-win32.whl", hash = "sha256:ba06256568747aaab9ad0091306550bfe826c1f195bf2cf57b405ae1de16faed"},
{file = "shap-0.40.0-cp38-cp38-win_amd64.whl", hash = "sha256:fb1b325a55fdf58061d332ed3308d44162084d4cb5f53f2c7774ce943d60b0ad"},
{file = "shap-0.40.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f282fa12ca6fc594bcadca389309d733f73fe071e29ab49cb6e51beaa8b01a1a"},
{file = "shap-0.40.0-cp39-cp39-manylinux2010_i686.whl", hash = "sha256:2e72a47407f010f845b3ed6cb4f5160f0907ec8ab97df2bca164ebcb263b4205"},
{file = "shap-0.40.0-cp39-cp39-manylinux2010_x86_64.whl", hash = "sha256:649c905f9a4629839142e1769235989fb61730eb789a70d27ec7593eb02186a7"},
{file = "shap-0.40.0-cp39-cp39-win32.whl", hash = "sha256:5c220632ba57426d450dcc8ca43c55f657fe18e18f5d223d2a4e2aa02d905047"},
{file = "shap-0.40.0-cp39-cp39-win_amd64.whl", hash = "sha256:46e7084ce021eea450306bf7434adaead53921fd32504f04d1804569839e2979"},
{file = "shap-0.40.0.tar.gz", hash = "sha256:add0a27bb4eb57f0a363c2c4265b1a1328a8c15b01c14c7d432d9cc387dd8579"},
]
six = [
{file = "six-1.16.0-py2.py3-none-any.whl", hash = "sha256:8abb2f1d86890a2dfb989f9a77cfcfd3e47c2a354b01111771326f8aa26e0254"},
{file = "six-1.16.0.tar.gz", hash = "sha256:1e61c37477a1626458e36f7b1d82aa5c9b094fa4802892072e49de9c60c4c926"},
]
slicer = [
{file = "slicer-0.0.7-py3-none-any.whl", hash = "sha256:0b94faa5251c0f23782c03f7b7eedda91d80144059645f452c4bc80fab875976"},
{file = "slicer-0.0.7.tar.gz", hash = "sha256:f5d5f7b45f98d155b9c0ba6554fa9770c6b26d5793a3e77a1030fb56910ebeec"},
]
snowballstemmer = [
{file = "snowballstemmer-2.2.0-py2.py3-none-any.whl", hash = "sha256:c8e1716e83cc398ae16824e5572ae04e0d9fc2c6b985fb0f900f5f0c96ecba1a"},
{file = "snowballstemmer-2.2.0.tar.gz", hash = "sha256:09b16deb8547d3412ad7b590689584cd0fe25ec8db3be37788be3810cbf19cb1"},
]
soupsieve = [
{file = "soupsieve-2.3.2.post1-py3-none-any.whl", hash = "sha256:3b2503d3c7084a42b1ebd08116e5f81aadfaea95863628c80a3b774a11b7c759"},
{file = "soupsieve-2.3.2.post1.tar.gz", hash = "sha256:fc53893b3da2c33de295667a0e19f078c14bf86544af307354de5fcf12a3f30d"},
]
sparse = [
{file = "sparse-0.13.0-py2.py3-none-any.whl", hash = "sha256:95ed0b649a0663b1488756ad4cf242b0a9bb2c9a25bc752a7c6ca9fbe8258966"},
{file = "sparse-0.13.0.tar.gz", hash = "sha256:685dc994aa770ee1b23f2d5392819c8429f27958771f8dceb2c4fb80210d5915"},
]
sphinx = [
{file = "Sphinx-5.1.1-py3-none-any.whl", hash = "sha256:309a8da80cb6da9f4713438e5b55861877d5d7976b69d87e336733637ea12693"},
{file = "Sphinx-5.1.1.tar.gz", hash = "sha256:ba3224a4e206e1fbdecf98a4fae4992ef9b24b85ebf7b584bb340156eaf08d89"},
]
sphinx-multiversion = []
sphinx-rtd-theme = [
{file = "sphinx_rtd_theme-1.0.0-py2.py3-none-any.whl", hash = "sha256:4d35a56f4508cfee4c4fb604373ede6feae2a306731d533f409ef5c3496fdbd8"},
{file = "sphinx_rtd_theme-1.0.0.tar.gz", hash = "sha256:eec6d497e4c2195fa0e8b2016b337532b8a699a68bcb22a512870e16925c6a5c"},
]
sphinxcontrib-applehelp = [
{file = "sphinxcontrib-applehelp-1.0.2.tar.gz", hash = "sha256:a072735ec80e7675e3f432fcae8610ecf509c5f1869d17e2eecff44389cdbc58"},
{file = "sphinxcontrib_applehelp-1.0.2-py2.py3-none-any.whl", hash = "sha256:806111e5e962be97c29ec4c1e7fe277bfd19e9652fb1a4392105b43e01af885a"},
]
sphinxcontrib-devhelp = [
{file = "sphinxcontrib-devhelp-1.0.2.tar.gz", hash = "sha256:ff7f1afa7b9642e7060379360a67e9c41e8f3121f2ce9164266f61b9f4b338e4"},
{file = "sphinxcontrib_devhelp-1.0.2-py2.py3-none-any.whl", hash = "sha256:8165223f9a335cc1af7ffe1ed31d2871f325254c0423bc0c4c7cd1c1e4734a2e"},
]
sphinxcontrib-googleanalytics = []
sphinxcontrib-htmlhelp = [
{file = "sphinxcontrib-htmlhelp-2.0.0.tar.gz", hash = "sha256:f5f8bb2d0d629f398bf47d0d69c07bc13b65f75a81ad9e2f71a63d4b7a2f6db2"},
{file = "sphinxcontrib_htmlhelp-2.0.0-py2.py3-none-any.whl", hash = "sha256:d412243dfb797ae3ec2b59eca0e52dac12e75a241bf0e4eb861e450d06c6ed07"},
]
sphinxcontrib-jsmath = [
{file = "sphinxcontrib-jsmath-1.0.1.tar.gz", hash = "sha256:a9925e4a4587247ed2191a22df5f6970656cb8ca2bd6284309578f2153e0c4b8"},
{file = "sphinxcontrib_jsmath-1.0.1-py2.py3-none-any.whl", hash = "sha256:2ec2eaebfb78f3f2078e73666b1415417a116cc848b72e5172e596c871103178"},
]
sphinxcontrib-qthelp = [
{file = "sphinxcontrib-qthelp-1.0.3.tar.gz", hash = "sha256:4c33767ee058b70dba89a6fc5c1892c0d57a54be67ddd3e7875a18d14cba5a72"},
{file = "sphinxcontrib_qthelp-1.0.3-py2.py3-none-any.whl", hash = "sha256:bd9fc24bcb748a8d51fd4ecaade681350aa63009a347a8c14e637895444dfab6"},
]
sphinxcontrib-serializinghtml = [
{file = "sphinxcontrib-serializinghtml-1.1.5.tar.gz", hash = "sha256:aa5f6de5dfdf809ef505c4895e51ef5c9eac17d0f287933eb49ec495280b6952"},
{file = "sphinxcontrib_serializinghtml-1.1.5-py2.py3-none-any.whl", hash = "sha256:352a9a00ae864471d3a7ead8d7d79f5fc0b57e8b3f95e9867eb9eb28999b92fd"},
]
stack-data = [
{file = "stack_data-0.3.0-py3-none-any.whl", hash = "sha256:aa1d52d14d09c7a9a12bb740e6bdfffe0f5e8f4f9218d85e7c73a8c37f7ae38d"},
{file = "stack_data-0.3.0.tar.gz", hash = "sha256:77bec1402dcd0987e9022326473fdbcc767304892a533ed8c29888dacb7dddbc"},
]
statsmodels = [
{file = "statsmodels-0.13.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:3e7ca5b7e678c0bb7a24f5c735d58ac104a50eb61b17c484cce0e221a095560f"},
{file = "statsmodels-0.13.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:066a75d5585378b2df972f81a90b9a3da5e567b7d4833300c1597438c1a35e29"},
{file = "statsmodels-0.13.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f15f38dfc9c5c091662cb619e12322047368c67aef449c7554d9b324a15f7a94"},
{file = "statsmodels-0.13.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5c4ccc6b4744613367e8a233bd952c8a838db8f528f9fe033bda25aa13fc7d08"},
{file = "statsmodels-0.13.2-cp310-cp310-win_amd64.whl", hash = "sha256:855b1cc2a91ab140b9bcf304b1731705805ce73223bf500b988804968554c0ed"},
{file = "statsmodels-0.13.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:b69c9af7606325095f7c40c581957bad9f28775653d41537c1ec4cd1b185ff5b"},
{file = "statsmodels-0.13.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ab31bac0f72b83bca1f217a12ec6f309a56485a50c4a705fbdd63112213d4da4"},
{file = "statsmodels-0.13.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d680b910b57fc0aa87472662cdfe09aae0e21db4bdf19ccd6420fd4dffda892"},
{file = "statsmodels-0.13.2-cp37-cp37m-win32.whl", hash = "sha256:9e9a3f661d372431850d55157d049e079493c97fc06f550d23d8c8c70805cc48"},
{file = "statsmodels-0.13.2-cp37-cp37m-win_amd64.whl", hash = "sha256:c9f6326870c095ef688f072cd476b932aff0906d60193eaa08e93ec23b29ca83"},
{file = "statsmodels-0.13.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:5bc050f25f1ba1221efef9ea01b751c60935ad787fcd4259f4ece986f2da9141"},
{file = "statsmodels-0.13.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:426b1c8ea3918d3d27dbfa38f2bee36cabf41d32163e2cbb3adfb0178b24626a"},
{file = "statsmodels-0.13.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:45b80fac4a63308b1e93fa9dc27a8598930fd5dfd77c850ca077bb850254c6d7"},
{file = "statsmodels-0.13.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:78ee69ec0e0f79f627245c65f8a495b8581c2ea19084aac63941815feb15dcf3"},
{file = "statsmodels-0.13.2-cp38-cp38-win32.whl", hash = "sha256:20483cc30e11aa072b30d307bb80470f86a23ae8fffa51439ca54509d7aa9b05"},
{file = "statsmodels-0.13.2-cp38-cp38-win_amd64.whl", hash = "sha256:bf43051a92231ccb9de95e4b6d22d3b15e499ee5ee9bff0a20e6b6ad293e34cb"},
{file = "statsmodels-0.13.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6bf0dfed5f5edb59b5922b295392cd276463b10a5e730f7e57ee4ff2d8e9a87e"},
{file = "statsmodels-0.13.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:a403b559c5586dab7ac0fc9e754c737b017c96cce0ddd66ff9094764cdaf293d"},
{file = "statsmodels-0.13.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9f23554dd025ea354ce072ba32bfaa840d2b856372e5734290e181d27a1f9e0c"},
{file = "statsmodels-0.13.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:815f4df713e3eb6f40ae175c71f2a70d32f9219b5b4d23d4e0faab1171ba93ba"},
{file = "statsmodels-0.13.2-cp39-cp39-win32.whl", hash = "sha256:461c82ab2265fa8457b96afc23ef3ca19f42eb070436e0241b57e58a38863901"},
{file = "statsmodels-0.13.2-cp39-cp39-win_amd64.whl", hash = "sha256:39daab5a8a9332c8ea83d6464d065080c9ba65f236daf6a64aa18f64ef776fad"},
{file = "statsmodels-0.13.2.tar.gz", hash = "sha256:77dc292c9939c036a476f1770f9d08976b05437daa229928da73231147cde7d4"},
]
sympy = [
{file = "sympy-1.10.1-py3-none-any.whl", hash = "sha256:df75d738930f6fe9ebe7034e59d56698f29e85f443f743e51e47df0caccc2130"},
{file = "sympy-1.10.1.tar.gz", hash = "sha256:5939eeffdf9e152172601463626c022a2c27e75cf6278de8d401d50c9d58787b"},
]
tensorboard = [
{file = "tensorboard-2.9.1-py3-none-any.whl", hash = "sha256:baa727f791776f9e5841d347127720ceed4bbd59c36b40604b95fb2ae6029276"},
]
tensorboard-data-server = [
{file = "tensorboard_data_server-0.6.1-py3-none-any.whl", hash = "sha256:809fe9887682d35c1f7d1f54f0f40f98bb1f771b14265b453ca051e2ce58fca7"},
{file = "tensorboard_data_server-0.6.1-py3-none-macosx_10_9_x86_64.whl", hash = "sha256:fa8cef9be4fcae2f2363c88176638baf2da19c5ec90addb49b1cde05c95c88ee"},
{file = "tensorboard_data_server-0.6.1-py3-none-manylinux2010_x86_64.whl", hash = "sha256:d8237580755e58eff68d1f3abefb5b1e39ae5c8b127cc40920f9c4fb33f4b98a"},
]
tensorboard-plugin-wit = [
{file = "tensorboard_plugin_wit-1.8.1-py3-none-any.whl", hash = "sha256:ff26bdd583d155aa951ee3b152b3d0cffae8005dc697f72b44a8e8c2a77a8cbe"},
]
tensorflow = [
{file = "tensorflow-2.9.1-cp310-cp310-macosx_10_14_x86_64.whl", hash = "sha256:2c77edeb4a9d542032ddac93f732e3dd3d1c4d15c8795e4e43166ed46503582d"},
{file = "tensorflow-2.9.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:add2f4e55fd7209a5cb01d815364adc9827184d74a83c2f1616c72be2e85c8b7"},
{file = "tensorflow-2.9.1-cp310-cp310-win_amd64.whl", hash = "sha256:8be7d7f6091ac5af09c19eb178bbbd51346794684851a7bd57826d2d1cfa220e"},
{file = "tensorflow-2.9.1-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:b1f7674220b19ab5d39d9313f9c60278cef56052614600cba192700c658e502c"},
{file = "tensorflow-2.9.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:488a7a964298b92278cca593b7f687abd1c8f51982a08dc6ded5efd9457bf477"},
{file = "tensorflow-2.9.1-cp37-cp37m-win_amd64.whl", hash = "sha256:abbdcfa7c14974f9bafa8f51214c037639a8991eef3dbcafc506e608a673c54c"},
{file = "tensorflow-2.9.1-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:df048d216f90c9fcbda22bdd03420b0e0e6e7ee98b8b91f221afef46b7664635"},
{file = "tensorflow-2.9.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:adc1b80ec32337ba1720fadbf64242aae06a7ee1c7aea6a1a0cffe1968c89b25"},
{file = "tensorflow-2.9.1-cp38-cp38-win_amd64.whl", hash = "sha256:a27bffb91f0aef6ba2a30f6507932d9c6801af56cfee8442767f9f3f69bb37a6"},
{file = "tensorflow-2.9.1-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:81ad045d39cf4360d5e6183cdda0990661302cca502917a12e54177989b340b9"},
{file = "tensorflow-2.9.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:212a4d97fb6dedff257703d7a2ce50b0b8ec23d955938083363aa7bc526918cc"},
{file = "tensorflow-2.9.1-cp39-cp39-win_amd64.whl", hash = "sha256:1c2c6557f6bd7e0d3f1063e27346e4f1c4353db2b7751ac7451033b335fa4b7c"},
]
tensorflow-estimator = [
{file = "tensorflow_estimator-2.9.0-py2.py3-none-any.whl", hash = "sha256:e9762bb302f51bc1eb2f35d19f0190a6a2d809d754d5def788c4328fe3746744"},
]
tensorflow-io-gcs-filesystem = [
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp310-cp310-macosx_10_14_x86_64.whl", hash = "sha256:4222a9d0c0ddeca2fd2bfd70f5ed149346f5ba12ffe65d817d8e18393341d8e2"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:5457eeef1f0f5f294225808b2290a251a2e4639ec66db9d32aa4ae62e807d7e8"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4c71cebb26ce10e6e48dc46e6fc0acef5329b01f75a5e76c7defb77175bf97f7"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp310-cp310-win_amd64.whl", hash = "sha256:1c165595c7a67668b44c7ffb9746ffb351c630940d9cca7f2b31f8adf7a36b94"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:c20e1f95b904f43ac86fdb251f222be2c3e7026e9ddbde2a3b6a456f26a83944"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:1cccdc12ec304a7ab3e6f85919ba5a77c2bf751b3d0f9e62196ee7df11a8136a"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:94645cac4449dd2ccc40327c23d0256cf4e96597e5a55116a91076e9dc96023e"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp37-cp37m-win_amd64.whl", hash = "sha256:ce0d7eaaebfcb5fdcff161af0e8a4b94d5dc346299111c08373d66058011a16d"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:e9569dadd79b2d4b28dbe5be47c378a884414a85c89eaeae6115bcba4f3cbb96"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:84a463e228cde296fc63672902a2eceac9fec5f8ae7605e9f18824db591e7f5c"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:531214e48ef64a96f565550b283e75cf0119abff14048a11a25453b47ec5b61c"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp38-cp38-win_amd64.whl", hash = "sha256:44b28c9c6a9e25774a53ec2e85ed4d0b5c4db3a7d3a4011ade94fa9ee636393c"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:09f9df13737e2b4d92b73653509281d77732ef9a90a1ebef824511ce5431eb0a"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:c71438e6459f52462b95f98ab17b20cd1a269a1efe837e4df426a0b79359f3b7"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bd285595afe03740553710ccdbd1397d69a8e48d758c731c0de1f1c5a71a9fe5"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp39-cp39-win_amd64.whl", hash = "sha256:2940b4ab6848ef5ec34dc3c140b5ae9eba0da13453da839c30ebe3461a6eb51d"},
]
termcolor = [
{file = "termcolor-1.1.0.tar.gz", hash = "sha256:1d6d69ce66211143803fbc56652b41d73b4a400a2891d7bf7a1cdf4c02de613b"},
]
terminado = [
{file = "terminado-0.15.0-py3-none-any.whl", hash = "sha256:0d5f126fbfdb5887b25ae7d9d07b0d716b1cc0ccaacc71c1f3c14d228e065197"},
{file = "terminado-0.15.0.tar.gz", hash = "sha256:ab4eeedccfcc1e6134bfee86106af90852c69d602884ea3a1e8ca6d4486e9bfe"},
]
threadpoolctl = [
{file = "threadpoolctl-3.1.0-py3-none-any.whl", hash = "sha256:8b99adda265feb6773280df41eece7b2e6561b772d21ffd52e372f999024907b"},
{file = "threadpoolctl-3.1.0.tar.gz", hash = "sha256:a335baacfaa4400ae1f0d8e3a58d6674d2f8828e3716bb2802c44955ad391380"},
]
tinycss2 = [
{file = "tinycss2-1.1.1-py3-none-any.whl", hash = "sha256:fe794ceaadfe3cf3e686b22155d0da5780dd0e273471a51846d0a02bc204fec8"},
{file = "tinycss2-1.1.1.tar.gz", hash = "sha256:b2e44dd8883c360c35dd0d1b5aad0b610e5156c2cb3b33434634e539ead9d8bf"},
]
tomli = [
{file = "tomli-2.0.1-py3-none-any.whl", hash = "sha256:939de3e7a6161af0c887ef91b7d41a53e7c5a1ca976325f429cb46ea9bc30ecc"},
{file = "tomli-2.0.1.tar.gz", hash = "sha256:de526c12914f0c550d15924c62d72abc48d6fe7364aa87328337a31007fe8a4f"},
]
torch = [
{file = "torch-1.12.1-cp310-cp310-manylinux1_x86_64.whl", hash = "sha256:9c038662db894a23e49e385df13d47b2a777ffd56d9bcd5b832593fab0a7e286"},
{file = "torch-1.12.1-cp310-cp310-manylinux2014_aarch64.whl", hash = "sha256:4e1b9c14cf13fd2ab8d769529050629a0e68a6fc5cb8e84b4a3cc1dd8c4fe541"},
{file = "torch-1.12.1-cp310-cp310-win_amd64.whl", hash = "sha256:e9c8f4a311ac29fc7e8e955cfb7733deb5dbe1bdaabf5d4af2765695824b7e0d"},
{file = "torch-1.12.1-cp310-none-macosx_10_9_x86_64.whl", hash = "sha256:976c3f997cea38ee91a0dd3c3a42322785414748d1761ef926b789dfa97c6134"},
{file = "torch-1.12.1-cp310-none-macosx_11_0_arm64.whl", hash = "sha256:68104e4715a55c4bb29a85c6a8d57d820e0757da363be1ba680fa8cc5be17b52"},
{file = "torch-1.12.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:743784ccea0dc8f2a3fe6a536bec8c4763bd82c1352f314937cb4008d4805de1"},
{file = "torch-1.12.1-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:b5dbcca369800ce99ba7ae6dee3466607a66958afca3b740690d88168752abcf"},
{file = "torch-1.12.1-cp37-cp37m-win_amd64.whl", hash = "sha256:f3b52a634e62821e747e872084ab32fbcb01b7fa7dbb7471b6218279f02a178a"},
{file = "torch-1.12.1-cp37-none-macosx_10_9_x86_64.whl", hash = "sha256:8a34a2fbbaa07c921e1b203f59d3d6e00ed379f2b384445773bd14e328a5b6c8"},
{file = "torch-1.12.1-cp37-none-macosx_11_0_arm64.whl", hash = "sha256:42f639501928caabb9d1d55ddd17f07cd694de146686c24489ab8c615c2871f2"},
{file = "torch-1.12.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:0b44601ec56f7dd44ad8afc00846051162ef9c26a8579dda0a02194327f2d55e"},
{file = "torch-1.12.1-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:cd26d8c5640c3a28c526d41ccdca14cf1cbca0d0f2e14e8263a7ac17194ab1d2"},
{file = "torch-1.12.1-cp38-cp38-win_amd64.whl", hash = "sha256:42e115dab26f60c29e298559dbec88444175528b729ae994ec4c65d56fe267dd"},
{file = "torch-1.12.1-cp38-none-macosx_10_9_x86_64.whl", hash = "sha256:a8320ba9ad87e80ca5a6a016e46ada4d1ba0c54626e135d99b2129a4541c509d"},
{file = "torch-1.12.1-cp38-none-macosx_11_0_arm64.whl", hash = "sha256:03e31c37711db2cd201e02de5826de875529e45a55631d317aadce2f1ed45aa8"},
{file = "torch-1.12.1-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:9b356aea223772cd754edb4d9ecf2a025909b8615a7668ac7d5130f86e7ec421"},
{file = "torch-1.12.1-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:6cf6f54b43c0c30335428195589bd00e764a6d27f3b9ba637aaa8c11aaf93073"},
{file = "torch-1.12.1-cp39-cp39-win_amd64.whl", hash = "sha256:f00c721f489089dc6364a01fd84906348fe02243d0af737f944fddb36003400d"},
{file = "torch-1.12.1-cp39-none-macosx_10_9_x86_64.whl", hash = "sha256:bfec2843daa654f04fda23ba823af03e7b6f7650a873cdb726752d0e3718dada"},
{file = "torch-1.12.1-cp39-none-macosx_11_0_arm64.whl", hash = "sha256:69fe2cae7c39ccadd65a123793d30e0db881f1c1927945519c5c17323131437e"},
]
tornado = [
{file = "tornado-6.2-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:20f638fd8cc85f3cbae3c732326e96addff0a15e22d80f049e00121651e82e72"},
{file = "tornado-6.2-cp37-abi3-macosx_10_9_x86_64.whl", hash = "sha256:87dcafae3e884462f90c90ecc200defe5e580a7fbbb4365eda7c7c1eb809ebc9"},
{file = "tornado-6.2-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ba09ef14ca9893954244fd872798b4ccb2367c165946ce2dd7376aebdde8e3ac"},
{file = "tornado-6.2-cp37-abi3-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b8150f721c101abdef99073bf66d3903e292d851bee51910839831caba341a75"},
{file = "tornado-6.2-cp37-abi3-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d3a2f5999215a3a06a4fc218026cd84c61b8b2b40ac5296a6db1f1451ef04c1e"},
{file = "tornado-6.2-cp37-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:5f8c52d219d4995388119af7ccaa0bcec289535747620116a58d830e7c25d8a8"},
{file = "tornado-6.2-cp37-abi3-musllinux_1_1_i686.whl", hash = "sha256:6fdfabffd8dfcb6cf887428849d30cf19a3ea34c2c248461e1f7d718ad30b66b"},
{file = "tornado-6.2-cp37-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:1d54d13ab8414ed44de07efecb97d4ef7c39f7438cf5e976ccd356bebb1b5fca"},
{file = "tornado-6.2-cp37-abi3-win32.whl", hash = "sha256:5c87076709343557ef8032934ce5f637dbb552efa7b21d08e89ae7619ed0eb23"},
{file = "tornado-6.2-cp37-abi3-win_amd64.whl", hash = "sha256:e5f923aa6a47e133d1cf87d60700889d7eae68988704e20c75fb2d65677a8e4b"},
{file = "tornado-6.2.tar.gz", hash = "sha256:9b630419bde84ec666bfd7ea0a4cb2a8a651c2d5cccdbdd1972a0c859dfc3c13"},
]
tqdm = [
{file = "tqdm-4.64.0-py2.py3-none-any.whl", hash = "sha256:74a2cdefe14d11442cedf3ba4e21a3b84ff9a2dbdc6cfae2c34addb2a14a5ea6"},
{file = "tqdm-4.64.0.tar.gz", hash = "sha256:40be55d30e200777a307a7585aee69e4eabb46b4ec6a4b4a5f2d9f11e7d5408d"},
]
traitlets = [
{file = "traitlets-5.3.0-py3-none-any.whl", hash = "sha256:65fa18961659635933100db8ca120ef6220555286949774b9cfc106f941d1c7a"},
{file = "traitlets-5.3.0.tar.gz", hash = "sha256:0bb9f1f9f017aa8ec187d8b1b2a7a6626a2a1d877116baba52a129bfa124f8e2"},
]
twine = [
{file = "twine-4.0.1-py3-none-any.whl", hash = "sha256:42026c18e394eac3e06693ee52010baa5313e4811d5a11050e7d48436cf41b9e"},
{file = "twine-4.0.1.tar.gz", hash = "sha256:96b1cf12f7ae611a4a40b6ae8e9570215daff0611828f5fe1f37a16255ab24a0"},
]
typing-extensions = [
{file = "typing_extensions-4.3.0-py3-none-any.whl", hash = "sha256:25642c956049920a5aa49edcdd6ab1e06d7e5d467fc00e0506c44ac86fbfca02"},
{file = "typing_extensions-4.3.0.tar.gz", hash = "sha256:e6d2677a32f47fc7eb2795db1dd15c1f34eff616bcaf2cfb5e997f854fa1c4a6"},
]
tzdata = [
{file = "tzdata-2022.2-py2.py3-none-any.whl", hash = "sha256:c3119520447d68ef3eb8187a55a4f44fa455f30eb1b4238fa5691ba094f2b05b"},
{file = "tzdata-2022.2.tar.gz", hash = "sha256:21f4f0d7241572efa7f7a4fdabb052e61b55dc48274e6842697ccdf5253e5451"},
]
tzlocal = [
{file = "tzlocal-4.2-py3-none-any.whl", hash = "sha256:89885494684c929d9191c57aa27502afc87a579be5cdd3225c77c463ea043745"},
{file = "tzlocal-4.2.tar.gz", hash = "sha256:ee5842fa3a795f023514ac2d801c4a81d1743bbe642e3940143326b3a00addd7"},
]
urllib3 = [
{file = "urllib3-1.26.11-py2.py3-none-any.whl", hash = "sha256:c33ccba33c819596124764c23a97d25f32b28433ba0dedeb77d873a38722c9bc"},
{file = "urllib3-1.26.11.tar.gz", hash = "sha256:ea6e8fb210b19d950fab93b60c9009226c63a28808bc8386e05301e25883ac0a"},
]
wcwidth = [
{file = "wcwidth-0.2.5-py2.py3-none-any.whl", hash = "sha256:beb4802a9cebb9144e99086eff703a642a13d6a0052920003a230f3294bbe784"},
{file = "wcwidth-0.2.5.tar.gz", hash = "sha256:c4d647b99872929fdb7bdcaa4fbe7f01413ed3d98077df798530e5b04f116c83"},
]
webencodings = [
{file = "webencodings-0.5.1-py2.py3-none-any.whl", hash = "sha256:a0af1213f3c2226497a97e2b3aa01a7e4bee4f403f95be16fc9acd2947514a78"},
{file = "webencodings-0.5.1.tar.gz", hash = "sha256:b36a1c245f2d304965eb4e0a82848379241dc04b865afcc4aab16748587e1923"},
]
werkzeug = [
{file = "Werkzeug-2.2.2-py3-none-any.whl", hash = "sha256:f979ab81f58d7318e064e99c4506445d60135ac5cd2e177a2de0089bfd4c9bd5"},
{file = "Werkzeug-2.2.2.tar.gz", hash = "sha256:7ea2d48322cc7c0f8b3a215ed73eabd7b5d75d0b50e31ab006286ccff9e00b8f"},
]
widgetsnbextension = [
{file = "widgetsnbextension-3.6.1-py2.py3-none-any.whl", hash = "sha256:954e0faefdd414e4e013f17dbc7fd86f24cf1d243a3ac85d5f0fc2c2d2b50c66"},
{file = "widgetsnbextension-3.6.1.tar.gz", hash = "sha256:9c84ae64c2893c7cbe2eaafc7505221a795c27d68938454034ac487319a75b10"},
]
wrapt = [
{file = "wrapt-1.14.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:1b376b3f4896e7930f1f772ac4b064ac12598d1c38d04907e696cc4d794b43d3"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:903500616422a40a98a5a3c4ff4ed9d0066f3b4c951fa286018ecdf0750194ef"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:5a9a0d155deafd9448baff28c08e150d9b24ff010e899311ddd63c45c2445e28"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:ddaea91abf8b0d13443f6dac52e89051a5063c7d014710dcb4d4abb2ff811a59"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:36f582d0c6bc99d5f39cd3ac2a9062e57f3cf606ade29a0a0d6b323462f4dd87"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:7ef58fb89674095bfc57c4069e95d7a31cfdc0939e2a579882ac7d55aadfd2a1"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:e2f83e18fe2f4c9e7db597e988f72712c0c3676d337d8b101f6758107c42425b"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:ee2b1b1769f6707a8a445162ea16dddf74285c3964f605877a20e38545c3c462"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:833b58d5d0b7e5b9832869f039203389ac7cbf01765639c7309fd50ef619e0b1"},
{file = "wrapt-1.14.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:80bb5c256f1415f747011dc3604b59bc1f91c6e7150bd7db03b19170ee06b320"},
{file = "wrapt-1.14.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:07f7a7d0f388028b2df1d916e94bbb40624c59b48ecc6cbc232546706fac74c2"},
{file = "wrapt-1.14.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:02b41b633c6261feff8ddd8d11c711df6842aba629fdd3da10249a53211a72c4"},
{file = "wrapt-1.14.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2fe803deacd09a233e4762a1adcea5db5d31e6be577a43352936179d14d90069"},
{file = "wrapt-1.14.1-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:257fd78c513e0fb5cdbe058c27a0624c9884e735bbd131935fd49e9fe719d310"},
{file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:4fcc4649dc762cddacd193e6b55bc02edca674067f5f98166d7713b193932b7f"},
{file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:11871514607b15cfeb87c547a49bca19fde402f32e2b1c24a632506c0a756656"},
{file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:8ad85f7f4e20964db4daadcab70b47ab05c7c1cf2a7c1e51087bfaa83831854c"},
{file = "wrapt-1.14.1-cp310-cp310-win32.whl", hash = "sha256:a9a52172be0b5aae932bef82a79ec0a0ce87288c7d132946d645eba03f0ad8a8"},
{file = "wrapt-1.14.1-cp310-cp310-win_amd64.whl", hash = "sha256:6d323e1554b3d22cfc03cd3243b5bb815a51f5249fdcbb86fda4bf62bab9e164"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:43ca3bbbe97af00f49efb06e352eae40434ca9d915906f77def219b88e85d907"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:6b1a564e6cb69922c7fe3a678b9f9a3c54e72b469875aa8018f18b4d1dd1adf3"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux2010_i686.whl", hash = "sha256:00b6d4ea20a906c0ca56d84f93065b398ab74b927a7a3dbd470f6fc503f95dc3"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux2010_x86_64.whl", hash = "sha256:a85d2b46be66a71bedde836d9e41859879cc54a2a04fad1191eb50c2066f6e9d"},
{file = "wrapt-1.14.1-cp35-cp35m-win32.whl", hash = "sha256:dbcda74c67263139358f4d188ae5faae95c30929281bc6866d00573783c422b7"},
{file = "wrapt-1.14.1-cp35-cp35m-win_amd64.whl", hash = "sha256:b21bb4c09ffabfa0e85e3a6b623e19b80e7acd709b9f91452b8297ace2a8ab00"},
{file = "wrapt-1.14.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:9e0fd32e0148dd5dea6af5fee42beb949098564cc23211a88d799e434255a1f4"},
{file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9736af4641846491aedb3c3f56b9bc5568d92b0692303b5a305301a95dfd38b1"},
{file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5b02d65b9ccf0ef6c34cba6cf5bf2aab1bb2f49c6090bafeecc9cd81ad4ea1c1"},
{file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:21ac0156c4b089b330b7666db40feee30a5d52634cc4560e1905d6529a3897ff"},
{file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:9f3e6f9e05148ff90002b884fbc2a86bd303ae847e472f44ecc06c2cd2fcdb2d"},
{file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:6e743de5e9c3d1b7185870f480587b75b1cb604832e380d64f9504a0535912d1"},
{file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:d79d7d5dc8a32b7093e81e97dad755127ff77bcc899e845f41bf71747af0c569"},
{file = "wrapt-1.14.1-cp36-cp36m-win32.whl", hash = "sha256:81b19725065dcb43df02b37e03278c011a09e49757287dca60c5aecdd5a0b8ed"},
{file = "wrapt-1.14.1-cp36-cp36m-win_amd64.whl", hash = "sha256:b014c23646a467558be7da3d6b9fa409b2c567d2110599b7cf9a0c5992b3b471"},
{file = "wrapt-1.14.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:88bd7b6bd70a5b6803c1abf6bca012f7ed963e58c68d76ee20b9d751c74a3248"},
{file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b5901a312f4d14c59918c221323068fad0540e34324925c8475263841dbdfe68"},
{file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d77c85fedff92cf788face9bfa3ebaa364448ebb1d765302e9af11bf449ca36d"},
{file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8d649d616e5c6a678b26d15ece345354f7c2286acd6db868e65fcc5ff7c24a77"},
{file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:7d2872609603cb35ca513d7404a94d6d608fc13211563571117046c9d2bcc3d7"},
{file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:ee6acae74a2b91865910eef5e7de37dc6895ad96fa23603d1d27ea69df545015"},
{file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:2b39d38039a1fdad98c87279b48bc5dce2c0ca0d73483b12cb72aa9609278e8a"},
{file = "wrapt-1.14.1-cp37-cp37m-win32.whl", hash = "sha256:60db23fa423575eeb65ea430cee741acb7c26a1365d103f7b0f6ec412b893853"},
{file = "wrapt-1.14.1-cp37-cp37m-win_amd64.whl", hash = "sha256:709fe01086a55cf79d20f741f39325018f4df051ef39fe921b1ebe780a66184c"},
{file = "wrapt-1.14.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:8c0ce1e99116d5ab21355d8ebe53d9460366704ea38ae4d9f6933188f327b456"},
{file = "wrapt-1.14.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:e3fb1677c720409d5f671e39bac6c9e0e422584e5f518bfd50aa4cbbea02433f"},
{file = "wrapt-1.14.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:642c2e7a804fcf18c222e1060df25fc210b9c58db7c91416fb055897fc27e8cc"},
{file = "wrapt-1.14.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7b7c050ae976e286906dd3f26009e117eb000fb2cf3533398c5ad9ccc86867b1"},
{file = "wrapt-1.14.1-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ef3f72c9666bba2bab70d2a8b79f2c6d2c1a42a7f7e2b0ec83bb2f9e383950af"},
{file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:01c205616a89d09827986bc4e859bcabd64f5a0662a7fe95e0d359424e0e071b"},
{file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:5a0f54ce2c092aaf439813735584b9537cad479575a09892b8352fea5e988dc0"},
{file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:2cf71233a0ed05ccdabe209c606fe0bac7379fdcf687f39b944420d2a09fdb57"},
{file = "wrapt-1.14.1-cp38-cp38-win32.whl", hash = "sha256:aa31fdcc33fef9eb2552cbcbfee7773d5a6792c137b359e82879c101e98584c5"},
{file = "wrapt-1.14.1-cp38-cp38-win_amd64.whl", hash = "sha256:d1967f46ea8f2db647c786e78d8cc7e4313dbd1b0aca360592d8027b8508e24d"},
{file = "wrapt-1.14.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:3232822c7d98d23895ccc443bbdf57c7412c5a65996c30442ebe6ed3df335383"},
{file = "wrapt-1.14.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:988635d122aaf2bdcef9e795435662bcd65b02f4f4c1ae37fbee7401c440b3a7"},
{file = "wrapt-1.14.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9cca3c2cdadb362116235fdbd411735de4328c61425b0aa9f872fd76d02c4e86"},
{file = "wrapt-1.14.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d52a25136894c63de15a35bc0bdc5adb4b0e173b9c0d07a2be9d3ca64a332735"},
{file = "wrapt-1.14.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:40e7bc81c9e2b2734ea4bc1aceb8a8f0ceaac7c5299bc5d69e37c44d9081d43b"},
{file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:b9b7a708dd92306328117d8c4b62e2194d00c365f18eff11a9b53c6f923b01e3"},
{file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:6a9a25751acb379b466ff6be78a315e2b439d4c94c1e99cb7266d40a537995d3"},
{file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:34aa51c45f28ba7f12accd624225e2b1e5a3a45206aa191f6f9aac931d9d56fe"},
{file = "wrapt-1.14.1-cp39-cp39-win32.whl", hash = "sha256:dee0ce50c6a2dd9056c20db781e9c1cfd33e77d2d569f5d1d9321c641bb903d5"},
{file = "wrapt-1.14.1-cp39-cp39-win_amd64.whl", hash = "sha256:dee60e1de1898bde3b238f18340eec6148986da0455d8ba7848d50470a7a32fb"},
{file = "wrapt-1.14.1.tar.gz", hash = "sha256:380a85cf89e0e69b7cfbe2ea9f765f004ff419f34194018a6827ac0e3edfed4d"},
]
xgboost = [
{file = "xgboost-1.6.1-py3-none-macosx_10_15_x86_64.macosx_11_0_x86_64.macosx_12_0_x86_64.whl", hash = "sha256:2b3d4ee105f8434873b40edc511330b8276bf3a8d9d42fb0319973079df30b07"},
{file = "xgboost-1.6.1-py3-none-macosx_12_0_arm64.whl", hash = "sha256:bd3e59a5490e010004106d8ea1d07aa8e048be51a0974fca6b4f00988f087ab8"},
{file = "xgboost-1.6.1-py3-none-manylinux2014_aarch64.whl", hash = "sha256:bbf16af8bf72e8761fcf69fdb5798bd5add6ecb48049198551b13c1d7abeabb5"},
{file = "xgboost-1.6.1-py3-none-manylinux2014_x86_64.whl", hash = "sha256:6207c77f611b54d9f056edede819ead03f0235615675f88030ff9fe10d359551"},
{file = "xgboost-1.6.1-py3-none-win_amd64.whl", hash = "sha256:3adcb7e4ccf774d5e0128c01e5c381303c3799910ab0f2e996160fe3cd23b7fc"},
{file = "xgboost-1.6.1.tar.gz", hash = "sha256:24072028656f3428e7b8aabf77340ece057f273e41f7f85d67ccaefb7454bb18"},
]
zipp = [
{file = "zipp-3.8.1-py3-none-any.whl", hash = "sha256:47c40d7fe183a6f21403a199b3e4192cca5774656965b0a4988ad2f8feb5f009"},
{file = "zipp-3.8.1.tar.gz", hash = "sha256:05b45f1ee8f807d0cc928485ca40a07cb491cf092ff587c0df9cb1fd154848d2"},
]
[[package]]
name = "absl-py"
version = "1.2.0"
description = "Abseil Python Common Libraries, see https://github.com/abseil/abseil-py."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "alabaster"
version = "0.7.12"
description = "A configurable sidebar-enabled Sphinx theme"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "appnope"
version = "0.1.3"
description = "Disable App Nap on macOS >= 10.9"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "argon2-cffi"
version = "21.3.0"
description = "The secure Argon2 password hashing algorithm."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
argon2-cffi-bindings = "*"
[package.extras]
dev = ["pre-commit", "cogapp", "tomli", "coverage[toml] (>=5.0.2)", "hypothesis", "pytest", "sphinx", "sphinx-notfound-page", "furo"]
docs = ["sphinx", "sphinx-notfound-page", "furo"]
tests = ["coverage[toml] (>=5.0.2)", "hypothesis", "pytest"]
[[package]]
name = "argon2-cffi-bindings"
version = "21.2.0"
description = "Low-level CFFI bindings for Argon2"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cffi = ">=1.0.1"
[package.extras]
dev = ["pytest", "cogapp", "pre-commit", "wheel"]
tests = ["pytest"]
[[package]]
name = "asttokens"
version = "2.0.8"
description = "Annotate AST trees with source code positions"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
six = "*"
[package.extras]
test = ["pytest", "astroid (<=2.5.3)"]
[[package]]
name = "astunparse"
version = "1.6.3"
description = "An AST unparser for Python"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
six = ">=1.6.1,<2.0"
[[package]]
name = "atomicwrites"
version = "1.4.1"
description = "Atomic file writes."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "attrs"
version = "22.1.0"
description = "Classes Without Boilerplate"
category = "dev"
optional = false
python-versions = ">=3.5"
[package.extras]
dev = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "mypy (>=0.900,!=0.940)", "pytest-mypy-plugins", "zope.interface", "furo", "sphinx", "sphinx-notfound-page", "pre-commit", "cloudpickle"]
docs = ["furo", "sphinx", "zope.interface", "sphinx-notfound-page"]
tests = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "mypy (>=0.900,!=0.940)", "pytest-mypy-plugins", "zope.interface", "cloudpickle"]
tests_no_zope = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "mypy (>=0.900,!=0.940)", "pytest-mypy-plugins", "cloudpickle"]
[[package]]
name = "babel"
version = "2.10.3"
description = "Internationalization utilities"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
pytz = ">=2015.7"
[[package]]
name = "backcall"
version = "0.2.0"
description = "Specifications for callback functions passed in to an API"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "backports.zoneinfo"
version = "0.2.1"
description = "Backport of the standard library zoneinfo module"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
tzdata = ["tzdata"]
[[package]]
name = "beautifulsoup4"
version = "4.11.1"
description = "Screen-scraping library"
category = "dev"
optional = false
python-versions = ">=3.6.0"
[package.dependencies]
soupsieve = ">1.2"
[package.extras]
html5lib = ["html5lib"]
lxml = ["lxml"]
[[package]]
name = "black"
version = "22.6.0"
description = "The uncompromising code formatter."
category = "dev"
optional = false
python-versions = ">=3.6.2"
[package.dependencies]
click = ">=8.0.0"
mypy-extensions = ">=0.4.3"
pathspec = ">=0.9.0"
platformdirs = ">=2"
tomli = {version = ">=1.1.0", markers = "python_full_version < \"3.11.0a7\""}
typing-extensions = {version = ">=3.10.0.0", markers = "python_version < \"3.10\""}
[package.extras]
colorama = ["colorama (>=0.4.3)"]
d = ["aiohttp (>=3.7.4)"]
jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"]
uvloop = ["uvloop (>=0.15.2)"]
[[package]]
name = "bleach"
version = "5.0.1"
description = "An easy safelist-based HTML-sanitizing tool."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
six = ">=1.9.0"
webencodings = "*"
[package.extras]
css = ["tinycss2 (>=1.1.0,<1.2)"]
dev = ["build (==0.8.0)", "flake8 (==4.0.1)", "hashin (==0.17.0)", "pip-tools (==6.6.2)", "pytest (==7.1.2)", "Sphinx (==4.3.2)", "tox (==3.25.0)", "twine (==4.0.1)", "wheel (==0.37.1)", "black (==22.3.0)", "mypy (==0.961)"]
[[package]]
name = "cachetools"
version = "5.2.0"
description = "Extensible memoizing collections and decorators"
category = "dev"
optional = false
python-versions = "~=3.7"
[[package]]
name = "causalml"
version = "0.12.3"
description = "Python Package for Uplift Modeling and Causal Inference with Machine Learning Algorithms"
category = "main"
optional = true
python-versions = ">=3.6"
[package.dependencies]
Cython = ">=0.28.0"
dill = "*"
graphviz = "*"
lightgbm = "*"
matplotlib = "*"
numpy = ">=1.18.5"
packaging = "*"
pandas = ">=0.24.1"
pydotplus = "*"
pygam = "*"
pyro-ppl = "*"
scikit-learn = ">=0.22.0"
scipy = ">=1.4.1"
seaborn = "*"
shap = "*"
statsmodels = ">=0.9.0"
torch = "*"
tqdm = "*"
xgboost = "*"
[package.extras]
tf = ["tensorflow (>=2.4.0)"]
[[package]]
name = "certifi"
version = "2022.6.15"
description = "Python package for providing Mozilla's CA Bundle."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "cffi"
version = "1.15.1"
description = "Foreign Function Interface for Python calling C code."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
pycparser = "*"
[[package]]
name = "charset-normalizer"
version = "2.1.0"
description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet."
category = "dev"
optional = false
python-versions = ">=3.6.0"
[package.extras]
unicode_backport = ["unicodedata2"]
[[package]]
name = "click"
version = "8.1.3"
description = "Composable command line interface toolkit"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
colorama = {version = "*", markers = "platform_system == \"Windows\""}
[[package]]
name = "cloudpickle"
version = "2.1.0"
description = "Extended pickling support for Python objects"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "colorama"
version = "0.4.5"
description = "Cross-platform colored terminal text."
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "commonmark"
version = "0.9.1"
description = "Python parser for the CommonMark Markdown spec"
category = "dev"
optional = false
python-versions = "*"
[package.extras]
test = ["hypothesis (==3.55.3)", "flake8 (==3.7.8)"]
[[package]]
name = "cryptography"
version = "37.0.4"
description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cffi = ">=1.12"
[package.extras]
docs = ["sphinx (>=1.6.5,!=1.8.0,!=3.1.0,!=3.1.1)", "sphinx-rtd-theme"]
docstest = ["pyenchant (>=1.6.11)", "twine (>=1.12.0)", "sphinxcontrib-spelling (>=4.0.1)"]
pep8test = ["black", "flake8", "flake8-import-order", "pep8-naming"]
sdist = ["setuptools_rust (>=0.11.4)"]
ssh = ["bcrypt (>=3.1.5)"]
test = ["pytest (>=6.2.0)", "pytest-benchmark", "pytest-cov", "pytest-subtests", "pytest-xdist", "pretend", "iso8601", "pytz", "hypothesis (>=1.11.4,!=3.79.2)"]
[[package]]
name = "cycler"
version = "0.11.0"
description = "Composable style cycles"
category = "main"
optional = true
python-versions = ">=3.6"
[[package]]
name = "cython"
version = "0.29.32"
description = "The Cython compiler for writing C extensions for the Python language."
category = "main"
optional = true
python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "debugpy"
version = "1.6.3"
description = "An implementation of the Debug Adapter Protocol for Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "decorator"
version = "5.1.1"
description = "Decorators for Humans"
category = "dev"
optional = false
python-versions = ">=3.5"
[[package]]
name = "defusedxml"
version = "0.7.1"
description = "XML bomb protection for Python stdlib modules"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "dill"
version = "0.3.5.1"
description = "serialize all of python"
category = "main"
optional = true
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*, !=3.6.*"
[package.extras]
graph = ["objgraph (>=1.7.2)"]
[[package]]
name = "docutils"
version = "0.17.1"
description = "Docutils -- Python Documentation Utilities"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "econml"
version = "0.12.0"
description = "This package contains several methods for calculating Conditional Average Treatment Effects"
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
dowhy = "*"
joblib = ">=0.13.0"
lightgbm = "*"
numba = "!=0.42.1"
numpy = "*"
pandas = "*"
scikit-learn = ">0.22.0"
scipy = ">1.4.0"
shap = ">=0.38.1,<0.40.0"
sparse = "*"
statsmodels = ">=0.10"
[package.extras]
all = ["azure-cli", "keras (<2.4)", "tensorflow (>1.10,<2.3)", "matplotlib"]
automl = ["azure-cli"]
plt = ["graphviz", "matplotlib"]
tf = ["keras (<2.4)", "tensorflow (>1.10,<2.3)"]
[[package]]
name = "entrypoints"
version = "0.4"
description = "Discover and load entry points from installed packages."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "executing"
version = "0.10.0"
description = "Get the currently executing AST node of a frame, and other information"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "fastjsonschema"
version = "2.16.1"
description = "Fastest Python implementation of JSON schema"
category = "dev"
optional = false
python-versions = "*"
[package.extras]
devel = ["colorama", "jsonschema", "json-spec", "pylint", "pytest", "pytest-benchmark", "pytest-cache", "validictory"]
[[package]]
name = "flake8"
version = "4.0.1"
description = "the modular source code checker: pep8 pyflakes and co"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
mccabe = ">=0.6.0,<0.7.0"
pycodestyle = ">=2.8.0,<2.9.0"
pyflakes = ">=2.4.0,<2.5.0"
[[package]]
name = "flaky"
version = "3.7.0"
description = "Plugin for nose or pytest that automatically reruns flaky tests."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "flatbuffers"
version = "1.12"
description = "The FlatBuffers serialization format for Python"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "fonttools"
version = "4.36.0"
description = "Tools to manipulate font files"
category = "main"
optional = true
python-versions = ">=3.7"
[package.extras]
all = ["fs (>=2.2.0,<3)", "lxml (>=4.0,<5)", "zopfli (>=0.1.4)", "lz4 (>=1.7.4.2)", "matplotlib", "sympy", "skia-pathops (>=0.5.0)", "uharfbuzz (>=0.23.0)", "brotlicffi (>=0.8.0)", "scipy", "brotli (>=1.0.1)", "munkres", "unicodedata2 (>=14.0.0)", "xattr"]
graphite = ["lz4 (>=1.7.4.2)"]
interpolatable = ["scipy", "munkres"]
lxml = ["lxml (>=4.0,<5)"]
pathops = ["skia-pathops (>=0.5.0)"]
plot = ["matplotlib"]
repacker = ["uharfbuzz (>=0.23.0)"]
symfont = ["sympy"]
type1 = ["xattr"]
ufo = ["fs (>=2.2.0,<3)"]
unicode = ["unicodedata2 (>=14.0.0)"]
woff = ["zopfli (>=0.1.4)", "brotlicffi (>=0.8.0)", "brotli (>=1.0.1)"]
[[package]]
name = "future"
version = "0.18.2"
description = "Clean single-source support for Python 3 and 2"
category = "main"
optional = true
python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "gast"
version = "0.4.0"
description = "Python AST that abstracts the underlying Python version"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "google-auth"
version = "2.10.0"
description = "Google Authentication Library"
category = "dev"
optional = false
python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*"
[package.dependencies]
cachetools = ">=2.0.0,<6.0"
pyasn1-modules = ">=0.2.1"
rsa = {version = ">=3.1.4,<5", markers = "python_version >= \"3.6\""}
six = ">=1.9.0"
[package.extras]
reauth = ["pyu2f (>=0.1.5)"]
pyopenssl = ["pyopenssl (>=20.0.0)"]
enterprise_cert = ["pyopenssl (==22.0.0)", "cryptography (==36.0.2)"]
aiohttp = ["aiohttp (>=3.6.2,<4.0.0dev)", "requests (>=2.20.0,<3.0.0dev)"]
[[package]]
name = "google-auth-oauthlib"
version = "0.4.6"
description = "Google Authentication Library"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
google-auth = ">=1.0.0"
requests-oauthlib = ">=0.7.0"
[package.extras]
tool = ["click (>=6.0.0)"]
[[package]]
name = "google-pasta"
version = "0.2.0"
description = "pasta is an AST-based Python refactoring library"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
six = "*"
[[package]]
name = "graphviz"
version = "0.20.1"
description = "Simple Python interface for Graphviz"
category = "main"
optional = true
python-versions = ">=3.7"
[package.extras]
dev = ["tox (>=3)", "flake8", "pep8-naming", "wheel", "twine"]
docs = ["sphinx (>=5)", "sphinx-autodoc-typehints", "sphinx-rtd-theme"]
test = ["pytest (>=7)", "pytest-mock (>=3)", "mock (>=4)", "pytest-cov", "coverage"]
[[package]]
name = "grpcio"
version = "1.48.0"
description = "HTTP/2-based RPC framework"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
six = ">=1.5.2"
[package.extras]
protobuf = ["grpcio-tools (>=1.48.0)"]
[[package]]
name = "h5py"
version = "3.7.0"
description = "Read and write HDF5 files from Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
numpy = ">=1.14.5"
[[package]]
name = "idna"
version = "3.3"
description = "Internationalized Domain Names in Applications (IDNA)"
category = "dev"
optional = false
python-versions = ">=3.5"
[[package]]
name = "imagesize"
version = "1.4.1"
description = "Getting image size from png/jpeg/jpeg2000/gif file"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "importlib-metadata"
version = "4.12.0"
description = "Read metadata from Python packages"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
zipp = ">=0.5"
[package.extras]
docs = ["sphinx", "jaraco.packaging (>=9)", "rst.linker (>=1.9)"]
perf = ["ipython"]
testing = ["pytest (>=6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest-cov", "pytest-enabler (>=1.3)", "packaging", "pyfakefs", "flufl.flake8", "pytest-perf (>=0.9.2)", "pytest-black (>=0.3.7)", "pytest-mypy (>=0.9.1)", "importlib-resources (>=1.3)"]
[[package]]
name = "importlib-resources"
version = "5.9.0"
description = "Read resources from Python packages"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
zipp = {version = ">=3.1.0", markers = "python_version < \"3.10\""}
[package.extras]
docs = ["sphinx", "jaraco.packaging (>=9)", "rst.linker (>=1.9)", "jaraco.tidelift (>=1.4)"]
testing = ["pytest (>=6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-black (>=0.3.7)", "pytest-mypy (>=0.9.1)"]
[[package]]
name = "iniconfig"
version = "1.1.1"
description = "iniconfig: brain-dead simple config-ini parsing"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "ipykernel"
version = "6.15.1"
description = "IPython Kernel for Jupyter"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
appnope = {version = "*", markers = "platform_system == \"Darwin\""}
debugpy = ">=1.0"
ipython = ">=7.23.1"
jupyter-client = ">=6.1.12"
matplotlib-inline = ">=0.1"
nest-asyncio = "*"
packaging = "*"
psutil = "*"
pyzmq = ">=17"
tornado = ">=6.1"
traitlets = ">=5.1.0"
[package.extras]
test = ["flaky", "ipyparallel", "pre-commit", "pytest-cov", "pytest-timeout", "pytest (>=6.0)"]
[[package]]
name = "ipython"
version = "8.4.0"
description = "IPython: Productive Interactive Computing"
category = "dev"
optional = false
python-versions = ">=3.8"
[package.dependencies]
appnope = {version = "*", markers = "sys_platform == \"darwin\""}
backcall = "*"
colorama = {version = "*", markers = "sys_platform == \"win32\""}
decorator = "*"
jedi = ">=0.16"
matplotlib-inline = "*"
pexpect = {version = ">4.3", markers = "sys_platform != \"win32\""}
pickleshare = "*"
prompt-toolkit = ">=2.0.0,<3.0.0 || >3.0.0,<3.0.1 || >3.0.1,<3.1.0"
pygments = ">=2.4.0"
stack-data = "*"
traitlets = ">=5"
[package.extras]
all = ["black", "Sphinx (>=1.3)", "ipykernel", "nbconvert", "nbformat", "ipywidgets", "notebook", "ipyparallel", "qtconsole", "pytest (<7.1)", "pytest-asyncio", "testpath", "curio", "matplotlib (!=3.2.0)", "numpy (>=1.19)", "pandas", "trio"]
black = ["black"]
doc = ["Sphinx (>=1.3)"]
kernel = ["ipykernel"]
nbconvert = ["nbconvert"]
nbformat = ["nbformat"]
notebook = ["ipywidgets", "notebook"]
parallel = ["ipyparallel"]
qtconsole = ["qtconsole"]
test = ["pytest (<7.1)", "pytest-asyncio", "testpath"]
test_extra = ["pytest (<7.1)", "pytest-asyncio", "testpath", "curio", "matplotlib (!=3.2.0)", "nbformat", "numpy (>=1.19)", "pandas", "trio"]
[[package]]
name = "ipython-genutils"
version = "0.2.0"
description = "Vestigial utilities from IPython"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "ipywidgets"
version = "8.0.1"
description = "Jupyter interactive widgets"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
ipykernel = ">=4.5.1"
ipython = ">=6.1.0"
jupyterlab-widgets = ">=3.0,<4.0"
traitlets = ">=4.3.1"
widgetsnbextension = ">=4.0,<5.0"
[package.extras]
test = ["jsonschema", "pytest (>=3.6.0)", "pytest-cov", "pytz"]
[[package]]
name = "isort"
version = "5.10.1"
description = "A Python utility / library to sort Python imports."
category = "dev"
optional = false
python-versions = ">=3.6.1,<4.0"
[package.extras]
pipfile_deprecated_finder = ["pipreqs", "requirementslib"]
requirements_deprecated_finder = ["pipreqs", "pip-api"]
colors = ["colorama (>=0.4.3,<0.5.0)"]
plugins = ["setuptools"]
[[package]]
name = "jedi"
version = "0.18.1"
description = "An autocompletion tool for Python that can be used for text editors."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
parso = ">=0.8.0,<0.9.0"
[package.extras]
qa = ["flake8 (==3.8.3)", "mypy (==0.782)"]
testing = ["Django (<3.1)", "colorama", "docopt", "pytest (<7.0.0)"]
[[package]]
name = "jeepney"
version = "0.8.0"
description = "Low-level, pure Python DBus protocol wrapper."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
trio = ["async-generator", "trio"]
test = ["async-timeout", "trio", "testpath", "pytest-asyncio (>=0.17)", "pytest-trio", "pytest"]
[[package]]
name = "jinja2"
version = "3.1.2"
description = "A very fast and expressive template engine."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
MarkupSafe = ">=2.0"
[package.extras]
i18n = ["Babel (>=2.7)"]
[[package]]
name = "joblib"
version = "1.1.0"
description = "Lightweight pipelining with Python functions"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "jsonschema"
version = "4.13.0"
description = "An implementation of JSON Schema validation for Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
attrs = ">=17.4.0"
importlib-resources = {version = ">=1.4.0", markers = "python_version < \"3.9\""}
pkgutil-resolve-name = {version = ">=1.3.10", markers = "python_version < \"3.9\""}
pyrsistent = ">=0.14.0,<0.17.0 || >0.17.0,<0.17.1 || >0.17.1,<0.17.2 || >0.17.2"
[package.extras]
format-nongpl = ["webcolors (>=1.11)", "uri-template", "rfc3986-validator (>0.1.0)", "rfc3339-validator", "jsonpointer (>1.13)", "isoduration", "idna", "fqdn"]
format = ["webcolors (>=1.11)", "uri-template", "rfc3987", "rfc3339-validator", "jsonpointer (>1.13)", "isoduration", "idna", "fqdn"]
[[package]]
name = "jupyter"
version = "1.0.0"
description = "Jupyter metapackage. Install all the Jupyter components in one go."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
ipykernel = "*"
ipywidgets = "*"
jupyter-console = "*"
nbconvert = "*"
notebook = "*"
qtconsole = "*"
[[package]]
name = "jupyter-client"
version = "7.3.4"
description = "Jupyter protocol implementation and client libraries"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
entrypoints = "*"
jupyter-core = ">=4.9.2"
nest-asyncio = ">=1.5.4"
python-dateutil = ">=2.8.2"
pyzmq = ">=23.0"
tornado = ">=6.0"
traitlets = "*"
[package.extras]
doc = ["ipykernel", "myst-parser", "sphinx-rtd-theme", "sphinx (>=1.3.6)", "sphinxcontrib-github-alt"]
test = ["codecov", "coverage", "ipykernel (>=6.5)", "ipython", "mypy", "pre-commit", "pytest", "pytest-asyncio (>=0.18)", "pytest-cov", "pytest-timeout"]
[[package]]
name = "jupyter-console"
version = "6.4.4"
description = "Jupyter terminal console"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
ipykernel = "*"
ipython = "*"
jupyter-client = ">=7.0.0"
prompt-toolkit = ">=2.0.0,<3.0.0 || >3.0.0,<3.0.1 || >3.0.1,<3.1.0"
pygments = "*"
[package.extras]
test = ["pexpect"]
[[package]]
name = "jupyter-core"
version = "4.11.1"
description = "Jupyter core package. A base package on which Jupyter projects rely."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
pywin32 = {version = ">=1.0", markers = "sys_platform == \"win32\" and platform_python_implementation != \"PyPy\""}
traitlets = "*"
[package.extras]
test = ["ipykernel", "pre-commit", "pytest", "pytest-cov", "pytest-timeout"]
[[package]]
name = "jupyterlab-pygments"
version = "0.2.2"
description = "Pygments theme using JupyterLab CSS variables"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "jupyterlab-widgets"
version = "3.0.2"
description = "Jupyter interactive widgets for JupyterLab"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "keras"
version = "2.9.0"
description = "Deep learning for humans."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "keras-preprocessing"
version = "1.1.2"
description = "Easy data preprocessing and data augmentation for deep learning models"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
numpy = ">=1.9.1"
six = ">=1.9.0"
[package.extras]
tests = ["pytest-cov", "pytest-xdist", "pytest", "keras", "tensorflow", "pillow", "pandas"]
pep8 = ["flake8"]
image = ["Pillow (>=5.2.0)", "scipy (>=0.14)"]
[[package]]
name = "keyring"
version = "23.8.2"
description = "Store and access your passwords safely."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
importlib-metadata = {version = ">=3.6", markers = "python_version < \"3.10\""}
jeepney = {version = ">=0.4.2", markers = "sys_platform == \"linux\""}
pywin32-ctypes = {version = "<0.1.0 || >0.1.0,<0.1.1 || >0.1.1", markers = "sys_platform == \"win32\""}
SecretStorage = {version = ">=3.2", markers = "sys_platform == \"linux\""}
[package.extras]
testing = ["pytest-mypy (>=0.9.1)", "pytest-black (>=0.3.7)", "pytest-enabler (>=1.3)", "pytest-cov", "flake8 (<5)", "pytest-flake8", "pytest-checkdocs (>=2.4)", "pytest (>=6)"]
docs = ["jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "jaraco.packaging (>=9)", "sphinx"]
[[package]]
name = "kiwisolver"
version = "1.4.4"
description = "A fast implementation of the Cassowary constraint solver"
category = "main"
optional = true
python-versions = ">=3.7"
[[package]]
name = "libclang"
version = "14.0.6"
description = "Clang Python Bindings, mirrored from the official LLVM repo: https://github.com/llvm/llvm-project/tree/main/clang/bindings/python, to make the installation process easier."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "lightgbm"
version = "3.3.2"
description = "LightGBM Python Package"
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
numpy = "*"
scikit-learn = "!=0.22.0"
scipy = "*"
[package.extras]
dask = ["dask[array] (>=2.0.0)", "dask[dataframe] (>=2.0.0)", "dask[distributed] (>=2.0.0)", "pandas"]
[[package]]
name = "llvmlite"
version = "0.36.0"
description = "lightweight wrapper around basic LLVM functionality"
category = "main"
optional = false
python-versions = ">=3.6,<3.10"
[[package]]
name = "markdown"
version = "3.4.1"
description = "Python implementation of Markdown."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
importlib-metadata = {version = ">=4.4", markers = "python_version < \"3.10\""}
[package.extras]
testing = ["coverage", "pyyaml"]
[[package]]
name = "markupsafe"
version = "2.1.1"
description = "Safely add untrusted strings to HTML/XML markup."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "matplotlib"
version = "3.5.3"
description = "Python plotting package"
category = "main"
optional = true
python-versions = ">=3.7"
[package.dependencies]
cycler = ">=0.10"
fonttools = ">=4.22.0"
kiwisolver = ">=1.0.1"
numpy = ">=1.17"
packaging = ">=20.0"
pillow = ">=6.2.0"
pyparsing = ">=2.2.1"
python-dateutil = ">=2.7"
setuptools_scm = ">=4,<7"
[[package]]
name = "matplotlib-inline"
version = "0.1.6"
description = "Inline Matplotlib backend for Jupyter"
category = "dev"
optional = false
python-versions = ">=3.5"
[package.dependencies]
traitlets = "*"
[[package]]
name = "mccabe"
version = "0.6.1"
description = "McCabe checker, plugin for flake8"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "mistune"
version = "2.0.4"
description = "A sane Markdown parser with useful plugins and renderers"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "mpmath"
version = "1.2.1"
description = "Python library for arbitrary-precision floating-point arithmetic"
category = "main"
optional = false
python-versions = "*"
[package.extras]
tests = ["pytest (>=4.6)"]
develop = ["wheel", "codecov", "pytest-cov", "pycodestyle", "pytest (>=4.6)"]
[[package]]
name = "mypy"
version = "0.971"
description = "Optional static typing for Python"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
mypy-extensions = ">=0.4.3"
tomli = {version = ">=1.1.0", markers = "python_version < \"3.11\""}
typing-extensions = ">=3.10"
[package.extras]
dmypy = ["psutil (>=4.0)"]
python2 = ["typed-ast (>=1.4.0,<2)"]
reports = ["lxml"]
[[package]]
name = "mypy-extensions"
version = "0.4.3"
description = "Experimental type system extensions for programs checked with the mypy typechecker."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "nbclient"
version = "0.6.6"
description = "A client library for executing notebooks. Formerly nbconvert's ExecutePreprocessor."
category = "dev"
optional = false
python-versions = ">=3.7.0"
[package.dependencies]
jupyter-client = ">=6.1.5"
nbformat = ">=5.0"
nest-asyncio = "*"
traitlets = ">=5.2.2"
[package.extras]
sphinx = ["autodoc-traits", "mock", "moto", "myst-parser", "Sphinx (>=1.7)", "sphinx-book-theme"]
test = ["black", "check-manifest", "flake8", "ipykernel", "ipython (<8.0.0)", "ipywidgets (<8.0.0)", "mypy", "pip (>=18.1)", "pre-commit", "pytest (>=4.1)", "pytest-asyncio", "pytest-cov (>=2.6.1)", "setuptools (>=60.0)", "testpath", "twine (>=1.11.0)", "xmltodict"]
[[package]]
name = "nbconvert"
version = "7.0.0rc3"
description = "Converting Jupyter Notebooks"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
beautifulsoup4 = "*"
bleach = "*"
defusedxml = "*"
importlib-metadata = {version = ">=3.6", markers = "python_version < \"3.10\""}
jinja2 = ">=3.0"
jupyter-core = ">=4.7"
jupyterlab-pygments = "*"
markupsafe = ">=2.0"
mistune = ">=2.0.2,<3"
nbclient = ">=0.5.0"
nbformat = ">=5.1"
packaging = "*"
pandocfilters = ">=1.4.1"
pygments = ">=2.4.1"
tinycss2 = "*"
traitlets = ">=5.0"
[package.extras]
all = ["ipykernel", "ipython", "ipywidgets (>=7)", "nbsphinx (>=0.2.12)", "pre-commit", "pyppeteer (>=1,<1.1)", "pytest", "pytest-cov", "pytest-dependency", "sphinx-rtd-theme", "sphinx (>=1.5.1)", "tornado (>=6.1)"]
docs = ["ipython", "nbsphinx (>=0.2.12)", "sphinx-rtd-theme", "sphinx (>=1.5.1)"]
serve = ["tornado (>=6.1)"]
test = ["ipykernel", "ipywidgets (>=7)", "pre-commit", "pyppeteer (>=1,<1.1)", "pytest", "pytest-cov", "pytest-dependency"]
webpdf = ["pyppeteer (>=1,<1.1)"]
[[package]]
name = "nbformat"
version = "5.4.0"
description = "The Jupyter Notebook format"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
fastjsonschema = "*"
jsonschema = ">=2.6"
jupyter-core = "*"
traitlets = ">=5.1"
[package.extras]
test = ["check-manifest", "testpath", "pytest", "pre-commit"]
[[package]]
name = "nbsphinx"
version = "0.8.9"
description = "Jupyter Notebook Tools for Sphinx"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
docutils = "*"
jinja2 = "*"
nbconvert = "!=5.4"
nbformat = "*"
sphinx = ">=1.8"
traitlets = ">=5"
[[package]]
name = "nest-asyncio"
version = "1.5.5"
description = "Patch asyncio to allow nested event loops"
category = "dev"
optional = false
python-versions = ">=3.5"
[[package]]
name = "networkx"
version = "2.8.5"
description = "Python package for creating and manipulating graphs and networks"
category = "main"
optional = false
python-versions = ">=3.8"
[package.extras]
default = ["numpy (>=1.19)", "scipy (>=1.8)", "matplotlib (>=3.4)", "pandas (>=1.3)"]
developer = ["pre-commit (>=2.19)", "mypy (>=0.960)"]
doc = ["sphinx (>=5)", "pydata-sphinx-theme (>=0.9)", "sphinx-gallery (>=0.10)", "numpydoc (>=1.4)", "pillow (>=9.1)", "nb2plots (>=0.6)", "texext (>=0.6.6)"]
extra = ["lxml (>=4.6)", "pygraphviz (>=1.9)", "pydot (>=1.4.2)", "sympy (>=1.10)"]
test = ["pytest (>=7.1)", "pytest-cov (>=3.0)", "codecov (>=2.1)"]
[[package]]
name = "notebook"
version = "6.4.12"
description = "A web-based notebook environment for interactive computing"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
argon2-cffi = "*"
ipykernel = "*"
ipython-genutils = "*"
jinja2 = "*"
jupyter-client = ">=5.3.4"
jupyter-core = ">=4.6.1"
nbconvert = ">=5"
nbformat = "*"
nest-asyncio = ">=1.5"
prometheus-client = "*"
pyzmq = ">=17"
Send2Trash = ">=1.8.0"
terminado = ">=0.8.3"
tornado = ">=6.1"
traitlets = ">=4.2.1"
[package.extras]
docs = ["sphinx", "nbsphinx", "sphinxcontrib-github-alt", "sphinx-rtd-theme", "myst-parser"]
json-logging = ["json-logging"]
test = ["pytest", "coverage", "requests", "testpath", "nbval", "selenium", "pytest-cov", "requests-unixsocket"]
[[package]]
name = "numba"
version = "0.53.1"
description = "compiling Python code using LLVM"
category = "main"
optional = false
python-versions = ">=3.6,<3.10"
[package.dependencies]
llvmlite = ">=0.36.0rc1,<0.37"
numpy = ">=1.15"
[[package]]
name = "numpy"
version = "1.23.2"
description = "NumPy is the fundamental package for array computing with Python."
category = "main"
optional = false
python-versions = ">=3.8"
[[package]]
name = "oauthlib"
version = "3.2.0"
description = "A generic, spec-compliant, thorough implementation of the OAuth request-signing logic"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
rsa = ["cryptography (>=3.0.0)"]
signals = ["blinker (>=1.4.0)"]
signedtoken = ["cryptography (>=3.0.0)", "pyjwt (>=2.0.0,<3)"]
[[package]]
name = "opt-einsum"
version = "3.3.0"
description = "Optimizing numpys einsum function"
category = "main"
optional = false
python-versions = ">=3.5"
[package.dependencies]
numpy = ">=1.7"
[package.extras]
docs = ["sphinx (==1.2.3)", "sphinxcontrib-napoleon", "sphinx-rtd-theme", "numpydoc"]
tests = ["pytest", "pytest-cov", "pytest-pep8"]
[[package]]
name = "packaging"
version = "21.3"
description = "Core utilities for Python packages"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
pyparsing = ">=2.0.2,<3.0.5 || >3.0.5"
[[package]]
name = "pandas"
version = "1.4.3"
description = "Powerful data structures for data analysis, time series, and statistics"
category = "main"
optional = false
python-versions = ">=3.8"
[package.dependencies]
numpy = [
{version = ">=1.18.5", markers = "platform_machine != \"aarch64\" and platform_machine != \"arm64\" and python_version < \"3.10\""},
{version = ">=1.19.2", markers = "platform_machine == \"aarch64\" and python_version < \"3.10\""},
{version = ">=1.20.0", markers = "platform_machine == \"arm64\" and python_version < \"3.10\""},
]
python-dateutil = ">=2.8.1"
pytz = ">=2020.1"
[package.extras]
test = ["hypothesis (>=5.5.3)", "pytest (>=6.0)", "pytest-xdist (>=1.31)"]
[[package]]
name = "pandoc"
version = "2.2"
description = "Pandoc Documents for Python"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
plumbum = "*"
ply = "*"
[[package]]
name = "pandocfilters"
version = "1.5.0"
description = "Utilities for writing pandoc filters in python"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "parso"
version = "0.8.3"
description = "A Python Parser"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
qa = ["flake8 (==3.8.3)", "mypy (==0.782)"]
testing = ["docopt", "pytest (<6.0.0)"]
[[package]]
name = "pastel"
version = "0.2.1"
description = "Bring colors to your terminal."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pathspec"
version = "0.9.0"
description = "Utility library for gitignore style pattern matching of file paths."
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7"
[[package]]
name = "patsy"
version = "0.5.2"
description = "A Python package for describing statistical models and for building design matrices."
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
numpy = ">=1.4"
six = "*"
[package.extras]
test = ["scipy", "pytest-cov", "pytest"]
[[package]]
name = "pexpect"
version = "4.8.0"
description = "Pexpect allows easy control of interactive console applications."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
ptyprocess = ">=0.5"
[[package]]
name = "pickleshare"
version = "0.7.5"
description = "Tiny 'shelve'-like database with concurrency support"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pillow"
version = "9.2.0"
description = "Python Imaging Library (Fork)"
category = "main"
optional = true
python-versions = ">=3.7"
[package.extras]
docs = ["furo", "olefile", "sphinx (>=2.4)", "sphinx-copybutton", "sphinx-issues (>=3.0.1)", "sphinx-removed-in", "sphinxext-opengraph"]
tests = ["check-manifest", "coverage", "defusedxml", "markdown2", "olefile", "packaging", "pyroma", "pytest", "pytest-cov", "pytest-timeout"]
[[package]]
name = "pkginfo"
version = "1.8.3"
description = "Query metadatdata from sdists / bdists / installed packages."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*"
[package.extras]
testing = ["nose", "coverage"]
[[package]]
name = "pkgutil-resolve-name"
version = "1.3.10"
description = "Resolve a name to an object."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "platformdirs"
version = "2.5.2"
description = "A small Python module for determining appropriate platform-specific dirs, e.g. a \"user data dir\"."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
docs = ["furo (>=2021.7.5b38)", "proselint (>=0.10.2)", "sphinx-autodoc-typehints (>=1.12)", "sphinx (>=4)"]
test = ["appdirs (==1.4.4)", "pytest-cov (>=2.7)", "pytest-mock (>=3.6)", "pytest (>=6)"]
[[package]]
name = "pluggy"
version = "1.0.0"
description = "plugin and hook calling mechanisms for python"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
testing = ["pytest-benchmark", "pytest"]
dev = ["tox", "pre-commit"]
[[package]]
name = "plumbum"
version = "1.7.2"
description = "Plumbum: shell combinators library"
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7"
[package.dependencies]
pywin32 = {version = "*", markers = "platform_system == \"Windows\" and platform_python_implementation != \"PyPy\""}
[package.extras]
dev = ["paramiko", "psutil", "pytest", "pytest-cov", "pytest-mock", "pytest-timeout"]
docs = ["Sphinx (>=3.0.0)", "sphinx-rtd-theme (>=0.5.0)"]
ssh = ["paramiko"]
[[package]]
name = "ply"
version = "3.11"
description = "Python Lex & Yacc"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "poethepoet"
version = "0.16.0"
description = "A task runner that works well with poetry."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
pastel = ">=0.2.1,<0.3.0"
tomli = ">=1.2.2"
[package.extras]
poetry_plugin = ["poetry (>=1.0,<2.0)"]
[[package]]
name = "progressbar2"
version = "4.0.0"
description = "A Python Progressbar library to provide visual (yet text based) progress to long running operations."
category = "main"
optional = true
python-versions = ">=3.7.0"
[package.dependencies]
python-utils = ">=3.0.0"
[package.extras]
docs = ["sphinx (>=1.8.5)"]
tests = ["flake8 (>=3.7.7)", "pytest (>=4.6.9)", "pytest-cov (>=2.6.1)", "pytest-mypy", "freezegun (>=0.3.11)", "sphinx (>=1.8.5)"]
[[package]]
name = "prometheus-client"
version = "0.14.1"
description = "Python client for the Prometheus monitoring system."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
twisted = ["twisted"]
[[package]]
name = "prompt-toolkit"
version = "3.0.30"
description = "Library for building powerful interactive command lines in Python"
category = "dev"
optional = false
python-versions = ">=3.6.2"
[package.dependencies]
wcwidth = "*"
[[package]]
name = "protobuf"
version = "3.19.4"
description = "Protocol Buffers"
category = "dev"
optional = false
python-versions = ">=3.5"
[[package]]
name = "psutil"
version = "5.9.1"
description = "Cross-platform lib for process and system monitoring in Python."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.extras]
test = ["ipaddress", "mock", "enum34", "pywin32", "wmi"]
[[package]]
name = "ptyprocess"
version = "0.7.0"
description = "Run a subprocess in a pseudo terminal"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pure-eval"
version = "0.2.2"
description = "Safely evaluate AST nodes without side effects"
category = "dev"
optional = false
python-versions = "*"
[package.extras]
tests = ["pytest"]
[[package]]
name = "py"
version = "1.11.0"
description = "library with cross-python path, ini-parsing, io, code, log facilities"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "pyasn1"
version = "0.4.8"
description = "ASN.1 types and codecs"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pyasn1-modules"
version = "0.2.8"
description = "A collection of ASN.1-based protocols modules."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
pyasn1 = ">=0.4.6,<0.5.0"
[[package]]
name = "pycodestyle"
version = "2.8.0"
description = "Python style guide checker"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "pycparser"
version = "2.21"
description = "C parser in Python"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pydata-sphinx-theme"
version = "0.9.0"
description = "Bootstrap-based Sphinx theme from the PyData community"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
beautifulsoup4 = "*"
docutils = "!=0.17.0"
packaging = "*"
sphinx = ">=4.0.2"
[package.extras]
dev = ["pydata-sphinx-theme", "nox", "pre-commit", "pyyaml"]
coverage = ["pydata-sphinx-theme", "codecov", "pytest-cov"]
test = ["pydata-sphinx-theme", "pytest"]
doc = ["sphinx-design", "xarray", "numpy", "plotly", "jupyter-sphinx", "sphinx-sitemap", "sphinxext-rediraffe", "pytest-regressions", "pytest", "pandas", "myst-parser", "numpydoc"]
[[package]]
name = "pydot"
version = "1.4.2"
description = "Python interface to Graphviz's Dot"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.dependencies]
pyparsing = ">=2.1.4"
[[package]]
name = "pydotplus"
version = "2.0.2"
description = "Python interface to Graphviz's Dot language"
category = "main"
optional = true
python-versions = "*"
[package.dependencies]
pyparsing = ">=2.0.1"
[[package]]
name = "pyflakes"
version = "2.4.0"
description = "passive checker of Python programs"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pygam"
version = "0.8.0"
description = "GAM toolkit"
category = "main"
optional = true
python-versions = "*"
[package.dependencies]
future = "*"
numpy = "*"
progressbar2 = "*"
scipy = "*"
[[package]]
name = "pygments"
version = "2.13.0"
description = "Pygments is a syntax highlighting package written in Python."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
plugins = ["importlib-metadata"]
[[package]]
name = "pygraphviz"
version = "1.9"
description = "Python interface to Graphviz"
category = "main"
optional = false
python-versions = ">=3.8"
[[package]]
name = "pyparsing"
version = "3.0.9"
description = "pyparsing module - Classes and methods to define and execute parsing grammars"
category = "main"
optional = false
python-versions = ">=3.6.8"
[package.extras]
diagrams = ["railroad-diagrams", "jinja2"]
[[package]]
name = "pyro-api"
version = "0.1.2"
description = "Generic API for dispatch to Pyro backends."
category = "main"
optional = true
python-versions = "*"
[package.extras]
test = ["pytest (>=5.0)", "flake8"]
dev = ["ipython", "sphinx-rtd-theme", "sphinx (>=2.0)"]
[[package]]
name = "pyro-ppl"
version = "1.8.1"
description = "A Python library for probabilistic modeling and inference"
category = "main"
optional = true
python-versions = ">=3.7"
[package.dependencies]
numpy = ">=1.7"
opt-einsum = ">=2.3.2"
pyro-api = ">=0.1.1"
torch = ">=1.11.0"
tqdm = ">=4.36"
[package.extras]
dev = ["jupyter (>=1.0.0)", "graphviz (>=0.8)", "matplotlib (>=1.3)", "torchvision (>=0.12.0)", "visdom (>=0.1.4)", "pandas", "pillow (==8.2.0)", "scikit-learn", "seaborn", "wget", "lap", "black (>=21.4b0)", "flake8", "isort (>=5.0)", "mypy (>=0.812)", "nbformat", "nbsphinx (>=0.3.2)", "nbstripout", "nbval", "ninja", "pypandoc", "pytest (>=5.0)", "pytest-xdist", "scipy (>=1.1)", "sphinx", "sphinx-rtd-theme", "yapf"]
extras = ["jupyter (>=1.0.0)", "graphviz (>=0.8)", "matplotlib (>=1.3)", "torchvision (>=0.12.0)", "visdom (>=0.1.4)", "pandas", "pillow (==8.2.0)", "scikit-learn", "seaborn", "wget", "lap"]
funsor = ["funsor[torch] (==0.4.3)"]
horovod = ["horovod[pytorch] (>=0.19)"]
profile = ["prettytable", "pytest-benchmark", "snakeviz"]
test = ["jupyter (>=1.0.0)", "graphviz (>=0.8)", "matplotlib (>=1.3)", "torchvision (>=0.12.0)", "visdom (>=0.1.4)", "pandas", "pillow (==8.2.0)", "scikit-learn", "seaborn", "wget", "lap", "black (>=21.4b0)", "flake8", "nbval", "pytest (>=5.0)", "pytest-cov", "scipy (>=1.1)"]
[[package]]
name = "pyrsistent"
version = "0.18.1"
description = "Persistent/Functional/Immutable data structures"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "pytest"
version = "7.1.2"
description = "pytest: simple powerful testing with Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
atomicwrites = {version = ">=1.0", markers = "sys_platform == \"win32\""}
attrs = ">=19.2.0"
colorama = {version = "*", markers = "sys_platform == \"win32\""}
iniconfig = "*"
packaging = "*"
pluggy = ">=0.12,<2.0"
py = ">=1.8.2"
tomli = ">=1.0.0"
[package.extras]
testing = ["argcomplete", "hypothesis (>=3.56)", "mock", "nose", "pygments (>=2.7.2)", "requests", "xmlschema"]
[[package]]
name = "python-dateutil"
version = "2.8.2"
description = "Extensions to the standard Python datetime module"
category = "main"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7"
[package.dependencies]
six = ">=1.5"
[[package]]
name = "python-utils"
version = "3.3.3"
description = "Python Utils is a module with some convenient utilities not included with the standard Python install"
category = "main"
optional = true
python-versions = ">3.6.0"
[package.extras]
docs = ["mock", "sphinx", "python-utils"]
loguru = ["loguru"]
tests = ["flake8", "pytest", "pytest-cov", "pytest-mypy", "pytest-asyncio", "sphinx", "types-setuptools", "loguru"]
[[package]]
name = "pytz"
version = "2022.2.1"
description = "World timezone definitions, modern and historical"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "pytz-deprecation-shim"
version = "0.1.0.post0"
description = "Shims to make deprecation of pytz easier"
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.7"
[package.dependencies]
"backports.zoneinfo" = {version = "*", markers = "python_version >= \"3.6\" and python_version < \"3.9\""}
tzdata = {version = "*", markers = "python_version >= \"3.6\""}
[[package]]
name = "pywin32"
version = "304"
description = "Python for Window Extensions"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pywin32-ctypes"
version = "0.2.0"
description = ""
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pywinpty"
version = "2.0.7"
description = "Pseudo terminal support for Windows from Python."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "pyzmq"
version = "23.2.1"
description = "Python bindings for 0MQ"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cffi = {version = "*", markers = "implementation_name == \"pypy\""}
py = {version = "*", markers = "implementation_name == \"pypy\""}
[[package]]
name = "qtconsole"
version = "5.3.1"
description = "Jupyter Qt console"
category = "dev"
optional = false
python-versions = ">= 3.7"
[package.dependencies]
ipykernel = ">=4.1"
ipython-genutils = "*"
jupyter-client = ">=4.1"
jupyter-core = "*"
pygments = "*"
pyzmq = ">=17.1"
qtpy = ">=2.0.1"
traitlets = "<5.2.1 || >5.2.1,<5.2.2 || >5.2.2"
[package.extras]
doc = ["Sphinx (>=1.3)"]
test = ["flaky", "pytest", "pytest-qt"]
[[package]]
name = "qtpy"
version = "2.2.0"
description = "Provides an abstraction layer on top of the various Qt bindings (PyQt5/6 and PySide2/6)."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
packaging = "*"
[package.extras]
test = ["pytest-qt", "pytest-cov (>=3.0.0)", "pytest (>=6,!=7.0.0,!=7.0.1)"]
[[package]]
name = "readme-renderer"
version = "36.0"
description = "readme_renderer is a library for rendering \"readme\" descriptions for Warehouse"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
bleach = ">=2.1.0"
docutils = ">=0.13.1"
Pygments = ">=2.5.1"
[package.extras]
md = ["cmarkgfm (>=0.8.0)"]
[[package]]
name = "requests"
version = "2.28.1"
description = "Python HTTP for Humans."
category = "dev"
optional = false
python-versions = ">=3.7, <4"
[package.dependencies]
certifi = ">=2017.4.17"
charset-normalizer = ">=2,<3"
idna = ">=2.5,<4"
urllib3 = ">=1.21.1,<1.27"
[package.extras]
socks = ["PySocks (>=1.5.6,!=1.5.7)"]
use_chardet_on_py3 = ["chardet (>=3.0.2,<6)"]
[[package]]
name = "requests-oauthlib"
version = "1.3.1"
description = "OAuthlib authentication support for Requests."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.dependencies]
oauthlib = ">=3.0.0"
requests = ">=2.0.0"
[package.extras]
rsa = ["oauthlib[signedtoken] (>=3.0.0)"]
[[package]]
name = "requests-toolbelt"
version = "0.9.1"
description = "A utility belt for advanced users of python-requests"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
requests = ">=2.0.1,<3.0.0"
[[package]]
name = "rfc3986"
version = "2.0.0"
description = "Validating URI References per RFC 3986"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
idna2008 = ["idna"]
[[package]]
name = "rich"
version = "12.5.1"
description = "Render rich text, tables, progress bars, syntax highlighting, markdown and more to the terminal"
category = "dev"
optional = false
python-versions = ">=3.6.3,<4.0.0"
[package.dependencies]
commonmark = ">=0.9.0,<0.10.0"
pygments = ">=2.6.0,<3.0.0"
typing-extensions = {version = ">=4.0.0,<5.0", markers = "python_version < \"3.9\""}
[package.extras]
jupyter = ["ipywidgets (>=7.5.1,<8.0.0)"]
[[package]]
name = "rpy2"
version = "3.5.3"
description = "Python interface to the R language (embedded R)"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
cffi = ">=1.10.0"
jinja2 = "*"
packaging = {version = "*", markers = "platform_system == \"Windows\""}
pytz = "*"
tzlocal = "*"
[package.extras]
all = ["numpy", "pytest", "setuptools", "pandas"]
numpy = ["pandas"]
pandas = ["numpy", "pandas"]
setup = ["setuptools"]
test = ["pytest"]
[[package]]
name = "rsa"
version = "4.9"
description = "Pure-Python RSA implementation"
category = "dev"
optional = false
python-versions = ">=3.6,<4"
[package.dependencies]
pyasn1 = ">=0.1.3"
[[package]]
name = "scikit-learn"
version = "1.0.2"
description = "A set of python modules for machine learning and data mining"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
joblib = ">=0.11"
numpy = ">=1.14.6"
scipy = ">=1.1.0"
threadpoolctl = ">=2.0.0"
[package.extras]
tests = ["pyamg (>=4.0.0)", "mypy (>=0.770)", "black (>=21.6b0)", "flake8 (>=3.8.2)", "pytest-cov (>=2.9.0)", "pytest (>=5.0.1)", "pandas (>=0.25.0)", "scikit-image (>=0.14.5)", "matplotlib (>=2.2.3)"]
examples = ["seaborn (>=0.9.0)", "pandas (>=0.25.0)", "scikit-image (>=0.14.5)", "matplotlib (>=2.2.3)"]
docs = ["sphinxext-opengraph (>=0.4.2)", "sphinx-prompt (>=1.3.0)", "Pillow (>=7.1.2)", "numpydoc (>=1.0.0)", "sphinx-gallery (>=0.7.0)", "sphinx (>=4.0.1)", "memory-profiler (>=0.57.0)", "seaborn (>=0.9.0)", "pandas (>=0.25.0)", "scikit-image (>=0.14.5)", "matplotlib (>=2.2.3)"]
benchmark = ["memory-profiler (>=0.57.0)", "pandas (>=0.25.0)", "matplotlib (>=2.2.3)"]
[[package]]
name = "scipy"
version = "1.9.0"
description = "SciPy: Scientific Library for Python"
category = "main"
optional = false
python-versions = ">=3.8,<3.12"
[package.dependencies]
numpy = ">=1.18.5,<1.25.0"
[[package]]
name = "seaborn"
version = "0.11.2"
description = "seaborn: statistical data visualization"
category = "main"
optional = true
python-versions = ">=3.6"
[package.dependencies]
matplotlib = ">=2.2"
numpy = ">=1.15"
pandas = ">=0.23"
scipy = ">=1.0"
[[package]]
name = "secretstorage"
version = "3.3.3"
description = "Python bindings to FreeDesktop.org Secret Service API"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cryptography = ">=2.0"
jeepney = ">=0.6"
[[package]]
name = "send2trash"
version = "1.8.0"
description = "Send file to trash natively under Mac OS X, Windows and Linux."
category = "dev"
optional = false
python-versions = "*"
[package.extras]
win32 = ["pywin32"]
objc = ["pyobjc-framework-cocoa"]
nativelib = ["pywin32", "pyobjc-framework-cocoa"]
[[package]]
name = "setuptools-scm"
version = "6.4.2"
description = "the blessed package to manage your versions by scm tags"
category = "main"
optional = true
python-versions = ">=3.6"
[package.dependencies]
packaging = ">=20.0"
tomli = ">=1.0.0"
[package.extras]
toml = ["setuptools (>=42)"]
test = ["virtualenv (>20)", "pytest (>=6.2)"]
[[package]]
name = "shap"
version = "0.39.0"
description = "A unified approach to explain the output of any machine learning model."
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
cloudpickle = "*"
numba = "*"
numpy = "*"
pandas = "*"
scikit-learn = "*"
scipy = "*"
slicer = "0.0.7"
tqdm = ">4.25.0"
[package.extras]
all = ["pyod", "pytest", "sphinx-rtd-theme", "sentencepiece", "transformers", "lightgbm", "torch", "numpydoc", "nbsphinx", "opencv-python", "ipython", "pytest-mpl", "matplotlib", "xgboost", "catboost", "sphinx", "pytest-cov", "pyspark", "lime"]
docs = ["matplotlib", "ipython", "numpydoc", "sphinx-rtd-theme", "sphinx", "nbsphinx"]
others = ["lime"]
plots = ["matplotlib", "ipython"]
test = ["pytest", "pytest-mpl", "pytest-cov", "xgboost", "lightgbm", "catboost", "pyspark", "pyod", "transformers", "torch", "sentencepiece", "opencv-python"]
[[package]]
name = "six"
version = "1.16.0"
description = "Python 2 and 3 compatibility utilities"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "slicer"
version = "0.0.7"
description = "A small package for big slicing."
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "snowballstemmer"
version = "2.2.0"
description = "This package provides 29 stemmers for 28 languages generated from Snowball algorithms."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "soupsieve"
version = "2.3.2.post1"
description = "A modern CSS selector implementation for Beautiful Soup."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "sparse"
version = "0.13.0"
description = "Sparse n-dimensional arrays"
category = "main"
optional = false
python-versions = ">=3.6, <4"
[package.dependencies]
numba = ">=0.49"
numpy = ">=1.17"
scipy = ">=0.19"
[package.extras]
all = ["dask", "pytest (>=3.5)", "pytest-black", "pytest-cov", "tox", "sphinx", "sphinx-rtd-theme"]
docs = ["sphinx", "sphinx-rtd-theme"]
tests = ["dask", "pytest (>=3.5)", "pytest-black", "pytest-cov"]
tox = ["dask", "pytest (>=3.5)", "pytest-black", "pytest-cov", "tox"]
[[package]]
name = "sphinx"
version = "5.1.1"
description = "Python documentation generator"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
alabaster = ">=0.7,<0.8"
babel = ">=1.3"
colorama = {version = ">=0.3.5", markers = "sys_platform == \"win32\""}
docutils = ">=0.14,<0.20"
imagesize = "*"
importlib-metadata = {version = ">=4.4", markers = "python_version < \"3.10\""}
Jinja2 = ">=2.3"
packaging = "*"
Pygments = ">=2.0"
requests = ">=2.5.0"
snowballstemmer = ">=1.1"
sphinxcontrib-applehelp = "*"
sphinxcontrib-devhelp = "*"
sphinxcontrib-htmlhelp = ">=2.0.0"
sphinxcontrib-jsmath = "*"
sphinxcontrib-qthelp = "*"
sphinxcontrib-serializinghtml = ">=1.1.5"
[package.extras]
docs = ["sphinxcontrib-websupport"]
lint = ["flake8 (>=3.5.0)", "flake8-comprehensions", "flake8-bugbear", "isort", "mypy (>=0.971)", "sphinx-lint", "docutils-stubs", "types-typed-ast", "types-requests"]
test = ["pytest (>=4.6)", "html5lib", "cython", "typed-ast"]
[[package]]
name = "sphinx-multiversion"
version = "0.2.4"
description = ""
category = "dev"
optional = false
python-versions = "*"
develop = false
[package.dependencies]
sphinx = ">=2.1"
[package.source]
type = "git"
url = "https://github.com/petergtz/sphinx-multiversion.git"
reference = "override-version-with-refname"
resolved_reference = "cd3a67e7998c5205e2456e7cb1026bd3b9f36253"
[[package]]
name = "sphinx-rtd-theme"
version = "1.0.0"
description = "Read the Docs theme for Sphinx"
category = "dev"
optional = false
python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*"
[package.dependencies]
docutils = "<0.18"
sphinx = ">=1.6"
[package.extras]
dev = ["bump2version", "sphinxcontrib-httpdomain", "transifex-client"]
[[package]]
name = "sphinxcontrib-applehelp"
version = "1.0.2"
description = "sphinxcontrib-applehelp is a sphinx extension which outputs Apple help books"
category = "dev"
optional = false
python-versions = ">=3.5"
[package.extras]
test = ["pytest"]
lint = ["docutils-stubs", "mypy", "flake8"]
[[package]]
name = "sphinxcontrib-devhelp"
version = "1.0.2"
description = "sphinxcontrib-devhelp is a sphinx extension which outputs Devhelp document."
category = "dev"
optional = false
python-versions = ">=3.5"
[package.extras]
test = ["pytest"]
lint = ["docutils-stubs", "mypy", "flake8"]
[[package]]
name = "sphinxcontrib-googleanalytics"
version = "0.2"
description = ""
category = "dev"
optional = false
python-versions = "*"
develop = false
[package.dependencies]
Sphinx = ">=0.6"
[package.source]
type = "git"
url = "https://github.com/petergtz/googleanalytics.git"
reference = "master"
resolved_reference = "f32bcb8c0fb74bf284652b9a5c154337e20058b9"
[[package]]
name = "sphinxcontrib-htmlhelp"
version = "2.0.0"
description = "sphinxcontrib-htmlhelp is a sphinx extension which renders HTML help files"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
test = ["html5lib", "pytest"]
lint = ["docutils-stubs", "mypy", "flake8"]
[[package]]
name = "sphinxcontrib-jsmath"
version = "1.0.1"
description = "A sphinx extension which renders display math in HTML via JavaScript"
category = "dev"
optional = false
python-versions = ">=3.5"
[package.extras]
test = ["mypy", "flake8", "pytest"]
[[package]]
name = "sphinxcontrib-qthelp"
version = "1.0.3"
description = "sphinxcontrib-qthelp is a sphinx extension which outputs QtHelp document."
category = "dev"
optional = false
python-versions = ">=3.5"
[package.extras]
test = ["pytest"]
lint = ["docutils-stubs", "mypy", "flake8"]
[[package]]
name = "sphinxcontrib-serializinghtml"
version = "1.1.5"
description = "sphinxcontrib-serializinghtml is a sphinx extension which outputs \"serialized\" HTML files (json and pickle)."
category = "dev"
optional = false
python-versions = ">=3.5"
[package.extras]
test = ["pytest"]
lint = ["docutils-stubs", "mypy", "flake8"]
[[package]]
name = "stack-data"
version = "0.4.0"
description = "Extract data from python stack frames and tracebacks for informative displays"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
asttokens = "*"
executing = "*"
pure-eval = "*"
[package.extras]
tests = ["cython", "littleutils", "pygments", "typeguard", "pytest"]
[[package]]
name = "statsmodels"
version = "0.13.2"
description = "Statistical computations and models for Python"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
numpy = ">=1.17"
packaging = ">=21.3"
pandas = ">=0.25"
patsy = ">=0.5.2"
scipy = ">=1.3"
[package.extras]
build = ["cython (>=0.29.26)"]
develop = ["cython (>=0.29.26)"]
docs = ["sphinx", "nbconvert", "jupyter-client", "ipykernel", "matplotlib", "nbformat", "numpydoc", "pandas-datareader"]
[[package]]
name = "sympy"
version = "1.10.1"
description = "Computer algebra system (CAS) in Python"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
mpmath = ">=0.19"
[[package]]
name = "tensorboard"
version = "2.9.1"
description = "TensorBoard lets you watch Tensors Flow"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
absl-py = ">=0.4"
google-auth = ">=1.6.3,<3"
google-auth-oauthlib = ">=0.4.1,<0.5"
grpcio = ">=1.24.3"
markdown = ">=2.6.8"
numpy = ">=1.12.0"
protobuf = ">=3.9.2,<3.20"
requests = ">=2.21.0,<3"
tensorboard-data-server = ">=0.6.0,<0.7.0"
tensorboard-plugin-wit = ">=1.6.0"
werkzeug = ">=1.0.1"
[[package]]
name = "tensorboard-data-server"
version = "0.6.1"
description = "Fast data loading for TensorBoard"
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "tensorboard-plugin-wit"
version = "1.8.1"
description = "What-If Tool TensorBoard plugin."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "tensorflow"
version = "2.9.1"
description = "TensorFlow is an open source machine learning framework for everyone."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
absl-py = ">=1.0.0"
astunparse = ">=1.6.0"
flatbuffers = ">=1.12,<2"
gast = ">=0.2.1,<=0.4.0"
google-pasta = ">=0.1.1"
grpcio = ">=1.24.3,<2.0"
h5py = ">=2.9.0"
keras = ">=2.9.0rc0,<2.10.0"
keras-preprocessing = ">=1.1.1"
libclang = ">=13.0.0"
numpy = ">=1.20"
opt-einsum = ">=2.3.2"
packaging = "*"
protobuf = ">=3.9.2,<3.20"
six = ">=1.12.0"
tensorboard = ">=2.9,<2.10"
tensorflow-estimator = ">=2.9.0rc0,<2.10.0"
tensorflow-io-gcs-filesystem = ">=0.23.1"
termcolor = ">=1.1.0"
typing-extensions = ">=3.6.6"
wrapt = ">=1.11.0"
[[package]]
name = "tensorflow-estimator"
version = "2.9.0"
description = "TensorFlow Estimator."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "tensorflow-io-gcs-filesystem"
version = "0.26.0"
description = "TensorFlow IO"
category = "dev"
optional = false
python-versions = ">=3.7, <3.11"
[package.extras]
tensorflow = ["tensorflow (>=2.9.0,<2.10.0)"]
tensorflow-aarch64 = ["tensorflow-aarch64 (>=2.9.0,<2.10.0)"]
tensorflow-cpu = ["tensorflow-cpu (>=2.9.0,<2.10.0)"]
tensorflow-gpu = ["tensorflow-gpu (>=2.9.0,<2.10.0)"]
tensorflow-rocm = ["tensorflow-rocm (>=2.9.0,<2.10.0)"]
[[package]]
name = "termcolor"
version = "1.1.0"
description = "ANSII Color formatting for output in terminal."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "terminado"
version = "0.15.0"
description = "Tornado websocket backend for the Xterm.js Javascript terminal emulator library."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
ptyprocess = {version = "*", markers = "os_name != \"nt\""}
pywinpty = {version = ">=1.1.0", markers = "os_name == \"nt\""}
tornado = ">=6.1.0"
[package.extras]
test = ["pre-commit", "pytest-timeout", "pytest (>=6.0)"]
[[package]]
name = "threadpoolctl"
version = "3.1.0"
description = "threadpoolctl"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "tinycss2"
version = "1.1.1"
description = "A tiny CSS parser"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
webencodings = ">=0.4"
[package.extras]
test = ["coverage", "pytest-isort", "pytest-flake8", "pytest-cov", "pytest"]
doc = ["sphinx-rtd-theme", "sphinx"]
[[package]]
name = "tomli"
version = "2.0.1"
description = "A lil' TOML parser"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "torch"
version = "1.12.1"
description = "Tensors and Dynamic neural networks in Python with strong GPU acceleration"
category = "main"
optional = true
python-versions = ">=3.7.0"
[package.dependencies]
typing-extensions = "*"
[[package]]
name = "tornado"
version = "6.2"
description = "Tornado is a Python web framework and asynchronous networking library, originally developed at FriendFeed."
category = "dev"
optional = false
python-versions = ">= 3.7"
[[package]]
name = "tqdm"
version = "4.64.0"
description = "Fast, Extensible Progress Meter"
category = "main"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,>=2.7"
[package.dependencies]
colorama = {version = "*", markers = "platform_system == \"Windows\""}
[package.extras]
dev = ["py-make (>=0.1.0)", "twine", "wheel"]
notebook = ["ipywidgets (>=6)"]
slack = ["slack-sdk"]
telegram = ["requests"]
[[package]]
name = "traitlets"
version = "5.3.0"
description = ""
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
test = ["pre-commit", "pytest"]
[[package]]
name = "twine"
version = "4.0.1"
description = "Collection of utilities for publishing packages on PyPI"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
importlib-metadata = ">=3.6"
keyring = ">=15.1"
pkginfo = ">=1.8.1"
readme-renderer = ">=35.0"
requests = ">=2.20"
requests-toolbelt = ">=0.8.0,<0.9.0 || >0.9.0"
rfc3986 = ">=1.4.0"
rich = ">=12.0.0"
urllib3 = ">=1.26.0"
[[package]]
name = "typing-extensions"
version = "4.3.0"
description = "Backported and Experimental Type Hints for Python 3.7+"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "tzdata"
version = "2022.2"
description = "Provider of IANA time zone data"
category = "dev"
optional = false
python-versions = ">=2"
[[package]]
name = "tzlocal"
version = "4.2"
description = "tzinfo object for the local timezone"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
"backports.zoneinfo" = {version = "*", markers = "python_version < \"3.9\""}
pytz-deprecation-shim = "*"
tzdata = {version = "*", markers = "platform_system == \"Windows\""}
[package.extras]
devenv = ["black", "pyroma", "pytest-cov", "zest.releaser"]
test = ["pytest-mock (>=3.3)", "pytest (>=4.3)"]
[[package]]
name = "urllib3"
version = "1.26.11"
description = "HTTP library with thread-safe connection pooling, file post, and more."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*, <4"
[package.extras]
brotli = ["brotlicffi (>=0.8.0)", "brotli (>=1.0.9)", "brotlipy (>=0.6.0)"]
secure = ["pyOpenSSL (>=0.14)", "cryptography (>=1.3.4)", "idna (>=2.0.0)", "certifi", "ipaddress"]
socks = ["PySocks (>=1.5.6,!=1.5.7,<2.0)"]
[[package]]
name = "wcwidth"
version = "0.2.5"
description = "Measures the displayed width of unicode strings in a terminal"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "webencodings"
version = "0.5.1"
description = "Character encoding aliases for legacy web content"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "werkzeug"
version = "2.2.2"
description = "The comprehensive WSGI web application library."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
MarkupSafe = ">=2.1.1"
[package.extras]
watchdog = ["watchdog"]
[[package]]
name = "widgetsnbextension"
version = "4.0.2"
description = "Jupyter interactive widgets for Jupyter Notebook"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "wrapt"
version = "1.14.1"
description = "Module for decorators, wrappers and monkey patching."
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7"
[[package]]
name = "xgboost"
version = "1.6.1"
description = "XGBoost Python Package"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
numpy = "*"
scipy = "*"
[package.extras]
dask = ["dask", "pandas", "distributed"]
datatable = ["datatable"]
pandas = ["pandas"]
plotting = ["graphviz", "matplotlib"]
scikit-learn = ["scikit-learn"]
[[package]]
name = "zipp"
version = "3.8.1"
description = "Backport of pathlib-compatible object wrapper for zip files"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
docs = ["sphinx", "jaraco.packaging (>=9)", "rst.linker (>=1.9)", "jaraco.tidelift (>=1.4)"]
testing = ["pytest (>=6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest-cov", "pytest-enabler (>=1.3)", "jaraco.itertools", "func-timeout", "pytest-black (>=0.3.7)", "pytest-mypy (>=0.9.1)"]
[extras]
causalml = ["causalml", "llvmlite"]
plotting = ["matplotlib"]
[metadata]
lock-version = "1.1"
python-versions = ">=3.8,<3.10"
content-hash = "6ad71879b47478d1d68f2b9c421c2ef8ffb6c027a78df9eebca593b26f5e1046"
[metadata.files]
absl-py = [
{file = "absl-py-1.2.0.tar.gz", hash = "sha256:f568809938c49abbda89826223c992b630afd23c638160ad7840cfe347710d97"},
{file = "absl_py-1.2.0-py3-none-any.whl", hash = "sha256:5d15f85b8cc859c6245bc9886ba664460ed96a6fee895416caa37d669ee74a9a"},
]
alabaster = [
{file = "alabaster-0.7.12-py2.py3-none-any.whl", hash = "sha256:446438bdcca0e05bd45ea2de1668c1d9b032e1a9154c2c259092d77031ddd359"},
{file = "alabaster-0.7.12.tar.gz", hash = "sha256:a661d72d58e6ea8a57f7a86e37d86716863ee5e92788398526d58b26a4e4dc02"},
]
appnope = [
{file = "appnope-0.1.3-py2.py3-none-any.whl", hash = "sha256:265a455292d0bd8a72453494fa24df5a11eb18373a60c7c0430889f22548605e"},
{file = "appnope-0.1.3.tar.gz", hash = "sha256:02bd91c4de869fbb1e1c50aafc4098827a7a54ab2f39d9dcba6c9547ed920e24"},
]
argon2-cffi = [
{file = "argon2-cffi-21.3.0.tar.gz", hash = "sha256:d384164d944190a7dd7ef22c6aa3ff197da12962bd04b17f64d4e93d934dba5b"},
{file = "argon2_cffi-21.3.0-py3-none-any.whl", hash = "sha256:8c976986f2c5c0e5000919e6de187906cfd81fb1c72bf9d88c01177e77da7f80"},
]
argon2-cffi-bindings = [
{file = "argon2-cffi-bindings-21.2.0.tar.gz", hash = "sha256:bb89ceffa6c791807d1305ceb77dbfacc5aa499891d2c55661c6459651fc39e3"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-macosx_10_9_x86_64.whl", hash = "sha256:ccb949252cb2ab3a08c02024acb77cfb179492d5701c7cbdbfd776124d4d2367"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9524464572e12979364b7d600abf96181d3541da11e23ddf565a32e70bd4dc0d"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b746dba803a79238e925d9046a63aa26bf86ab2a2fe74ce6b009a1c3f5c8f2ae"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:58ed19212051f49a523abb1dbe954337dc82d947fb6e5a0da60f7c8471a8476c"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:bd46088725ef7f58b5a1ef7ca06647ebaf0eb4baff7d1d0d177c6cc8744abd86"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_i686.whl", hash = "sha256:8cd69c07dd875537a824deec19f978e0f2078fdda07fd5c42ac29668dda5f40f"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:f1152ac548bd5b8bcecfb0b0371f082037e47128653df2e8ba6e914d384f3c3e"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-win32.whl", hash = "sha256:603ca0aba86b1349b147cab91ae970c63118a0f30444d4bc80355937c950c082"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-win_amd64.whl", hash = "sha256:b2ef1c30440dbbcba7a5dc3e319408b59676e2e039e2ae11a8775ecf482b192f"},
{file = "argon2_cffi_bindings-21.2.0-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:e415e3f62c8d124ee16018e491a009937f8cf7ebf5eb430ffc5de21b900dad93"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:3e385d1c39c520c08b53d63300c3ecc28622f076f4c2b0e6d7e796e9f6502194"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2c3e3cc67fdb7d82c4718f19b4e7a87123caf8a93fde7e23cf66ac0337d3cb3f"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6a22ad9800121b71099d0fb0a65323810a15f2e292f2ba450810a7316e128ee5"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f9f8b450ed0547e3d473fdc8612083fd08dd2120d6ac8f73828df9b7d45bb351"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:93f9bf70084f97245ba10ee36575f0c3f1e7d7724d67d8e5b08e61787c320ed7"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:3b9ef65804859d335dc6b31582cad2c5166f0c3e7975f324d9ffaa34ee7e6583"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d4966ef5848d820776f5f562a7d45fdd70c2f330c961d0d745b784034bd9f48d"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:20ef543a89dee4db46a1a6e206cd015360e5a75822f76df533845c3cbaf72670"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ed2937d286e2ad0cc79a7087d3c272832865f779430e0cc2b4f3718d3159b0cb"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:5e00316dabdaea0b2dd82d141cc66889ced0cdcbfa599e8b471cf22c620c329a"},
]
asttokens = [
{file = "asttokens-2.0.8-py2.py3-none-any.whl", hash = "sha256:e3305297c744ae53ffa032c45dc347286165e4ffce6875dc662b205db0623d86"},
{file = "asttokens-2.0.8.tar.gz", hash = "sha256:c61e16246ecfb2cde2958406b4c8ebc043c9e6d73aaa83c941673b35e5d3a76b"},
]
astunparse = [
{file = "astunparse-1.6.3-py2.py3-none-any.whl", hash = "sha256:c2652417f2c8b5bb325c885ae329bdf3f86424075c4fd1a128674bc6fba4b8e8"},
{file = "astunparse-1.6.3.tar.gz", hash = "sha256:5ad93a8456f0d084c3456d059fd9a92cce667963232cbf763eac3bc5b7940872"},
]
atomicwrites = [
{file = "atomicwrites-1.4.1.tar.gz", hash = "sha256:81b2c9071a49367a7f770170e5eec8cb66567cfbbc8c73d20ce5ca4a8d71cf11"},
]
attrs = [
{file = "attrs-22.1.0-py2.py3-none-any.whl", hash = "sha256:86efa402f67bf2df34f51a335487cf46b1ec130d02b8d39fd248abfd30da551c"},
{file = "attrs-22.1.0.tar.gz", hash = "sha256:29adc2665447e5191d0e7c568fde78b21f9672d344281d0c6e1ab085429b22b6"},
]
babel = [
{file = "Babel-2.10.3-py3-none-any.whl", hash = "sha256:ff56f4892c1c4bf0d814575ea23471c230d544203c7748e8c68f0089478d48eb"},
{file = "Babel-2.10.3.tar.gz", hash = "sha256:7614553711ee97490f732126dc077f8d0ae084ebc6a96e23db1482afabdb2c51"},
]
backcall = [
{file = "backcall-0.2.0-py2.py3-none-any.whl", hash = "sha256:fbbce6a29f263178a1f7915c1940bde0ec2b2a967566fe1c65c1dfb7422bd255"},
{file = "backcall-0.2.0.tar.gz", hash = "sha256:5cbdbf27be5e7cfadb448baf0aa95508f91f2bbc6c6437cd9cd06e2a4c215e1e"},
]
"backports.zoneinfo" = [
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-macosx_10_14_x86_64.whl", hash = "sha256:da6013fd84a690242c310d77ddb8441a559e9cb3d3d59ebac9aca1a57b2e18bc"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:89a48c0d158a3cc3f654da4c2de1ceba85263fafb861b98b59040a5086259722"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:1c5742112073a563c81f786e77514969acb58649bcdf6cdf0b4ed31a348d4546"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-win32.whl", hash = "sha256:e8236383a20872c0cdf5a62b554b27538db7fa1bbec52429d8d106effbaeca08"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-win_amd64.whl", hash = "sha256:8439c030a11780786a2002261569bdf362264f605dfa4d65090b64b05c9f79a7"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:f04e857b59d9d1ccc39ce2da1021d196e47234873820cbeaad210724b1ee28ac"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:17746bd546106fa389c51dbea67c8b7c8f0d14b5526a579ca6ccf5ed72c526cf"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:5c144945a7752ca544b4b78c8c41544cdfaf9786f25fe5ffb10e838e19a27570"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-win32.whl", hash = "sha256:e55b384612d93be96506932a786bbcde5a2db7a9e6a4bb4bffe8b733f5b9036b"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-win_amd64.whl", hash = "sha256:a76b38c52400b762e48131494ba26be363491ac4f9a04c1b7e92483d169f6582"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:8961c0f32cd0336fb8e8ead11a1f8cd99ec07145ec2931122faaac1c8f7fd987"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-manylinux1_i686.whl", hash = "sha256:e81b76cace8eda1fca50e345242ba977f9be6ae3945af8d46326d776b4cf78d1"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:7b0a64cda4145548fed9efc10322770f929b944ce5cee6c0dfe0c87bf4c0c8c9"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-win32.whl", hash = "sha256:1b13e654a55cd45672cb54ed12148cd33628f672548f373963b0bff67b217328"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-win_amd64.whl", hash = "sha256:4a0f800587060bf8880f954dbef70de6c11bbe59c673c3d818921f042f9954a6"},
{file = "backports.zoneinfo-0.2.1.tar.gz", hash = "sha256:fadbfe37f74051d024037f223b8e001611eac868b5c5b06144ef4d8b799862f2"},
]
beautifulsoup4 = [
{file = "beautifulsoup4-4.11.1-py3-none-any.whl", hash = "sha256:58d5c3d29f5a36ffeb94f02f0d786cd53014cf9b3b3951d42e0080d8a9498d30"},
{file = "beautifulsoup4-4.11.1.tar.gz", hash = "sha256:ad9aa55b65ef2808eb405f46cf74df7fcb7044d5cbc26487f96eb2ef2e436693"},
]
black = [
{file = "black-22.6.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:f586c26118bc6e714ec58c09df0157fe2d9ee195c764f630eb0d8e7ccce72e69"},
{file = "black-22.6.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:b270a168d69edb8b7ed32c193ef10fd27844e5c60852039599f9184460ce0807"},
{file = "black-22.6.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:6797f58943fceb1c461fb572edbe828d811e719c24e03375fd25170ada53825e"},
{file = "black-22.6.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c85928b9d5f83b23cee7d0efcb310172412fbf7cb9d9ce963bd67fd141781def"},
{file = "black-22.6.0-cp310-cp310-win_amd64.whl", hash = "sha256:f6fe02afde060bbeef044af7996f335fbe90b039ccf3f5eb8f16df8b20f77666"},
{file = "black-22.6.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:cfaf3895a9634e882bf9d2363fed5af8888802d670f58b279b0bece00e9a872d"},
{file = "black-22.6.0-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:94783f636bca89f11eb5d50437e8e17fbc6a929a628d82304c80fa9cd945f256"},
{file = "black-22.6.0-cp36-cp36m-win_amd64.whl", hash = "sha256:2ea29072e954a4d55a2ff58971b83365eba5d3d357352a07a7a4df0d95f51c78"},
{file = "black-22.6.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:e439798f819d49ba1c0bd9664427a05aab79bfba777a6db94fd4e56fae0cb849"},
{file = "black-22.6.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:187d96c5e713f441a5829e77120c269b6514418f4513a390b0499b0987f2ff1c"},
{file = "black-22.6.0-cp37-cp37m-win_amd64.whl", hash = "sha256:074458dc2f6e0d3dab7928d4417bb6957bb834434516f21514138437accdbe90"},
{file = "black-22.6.0-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:a218d7e5856f91d20f04e931b6f16d15356db1c846ee55f01bac297a705ca24f"},
{file = "black-22.6.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:568ac3c465b1c8b34b61cd7a4e349e93f91abf0f9371eda1cf87194663ab684e"},
{file = "black-22.6.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:6c1734ab264b8f7929cef8ae5f900b85d579e6cbfde09d7387da8f04771b51c6"},
{file = "black-22.6.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c9a3ac16efe9ec7d7381ddebcc022119794872abce99475345c5a61aa18c45ad"},
{file = "black-22.6.0-cp38-cp38-win_amd64.whl", hash = "sha256:b9fd45787ba8aa3f5e0a0a98920c1012c884622c6c920dbe98dbd05bc7c70fbf"},
{file = "black-22.6.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:7ba9be198ecca5031cd78745780d65a3f75a34b2ff9be5837045dce55db83d1c"},
{file = "black-22.6.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:a3db5b6409b96d9bd543323b23ef32a1a2b06416d525d27e0f67e74f1446c8f2"},
{file = "black-22.6.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:560558527e52ce8afba936fcce93a7411ab40c7d5fe8c2463e279e843c0328ee"},
{file = "black-22.6.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b154e6bbde1e79ea3260c4b40c0b7b3109ffcdf7bc4ebf8859169a6af72cd70b"},
{file = "black-22.6.0-cp39-cp39-win_amd64.whl", hash = "sha256:4af5bc0e1f96be5ae9bd7aaec219c901a94d6caa2484c21983d043371c733fc4"},
{file = "black-22.6.0-py3-none-any.whl", hash = "sha256:ac609cf8ef5e7115ddd07d85d988d074ed00e10fbc3445aee393e70164a2219c"},
{file = "black-22.6.0.tar.gz", hash = "sha256:6c6d39e28aed379aec40da1c65434c77d75e65bb59a1e1c283de545fb4e7c6c9"},
]
bleach = [
{file = "bleach-5.0.1-py3-none-any.whl", hash = "sha256:085f7f33c15bd408dd9b17a4ad77c577db66d76203e5984b1bd59baeee948b2a"},
{file = "bleach-5.0.1.tar.gz", hash = "sha256:0d03255c47eb9bd2f26aa9bb7f2107732e7e8fe195ca2f64709fcf3b0a4a085c"},
]
cachetools = [
{file = "cachetools-5.2.0-py3-none-any.whl", hash = "sha256:f9f17d2aec496a9aa6b76f53e3b614c965223c061982d434d160f930c698a9db"},
{file = "cachetools-5.2.0.tar.gz", hash = "sha256:6a94c6402995a99c3970cc7e4884bb60b4a8639938157eeed436098bf9831757"},
]
causalml = [
{file = "causalml-0.12.3.tar.gz", hash = "sha256:4d85afa0590e4c60818311c2be9a5be64b8c6964b134c77654b738df551454fa"},
]
certifi = [
{file = "certifi-2022.6.15-py3-none-any.whl", hash = "sha256:fe86415d55e84719d75f8b69414f6438ac3547d2078ab91b67e779ef69378412"},
{file = "certifi-2022.6.15.tar.gz", hash = "sha256:84c85a9078b11105f04f3036a9482ae10e4621616db313fe045dd24743a0820d"},
]
cffi = [
{file = "cffi-1.15.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:a66d3508133af6e8548451b25058d5812812ec3798c886bf38ed24a98216fab2"},
{file = "cffi-1.15.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:470c103ae716238bbe698d67ad020e1db9d9dba34fa5a899b5e21577e6d52ed2"},
{file = "cffi-1.15.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:9ad5db27f9cabae298d151c85cf2bad1d359a1b9c686a275df03385758e2f914"},
{file = "cffi-1.15.1-cp27-cp27m-win32.whl", hash = "sha256:b3bbeb01c2b273cca1e1e0c5df57f12dce9a4dd331b4fa1635b8bec26350bde3"},
{file = "cffi-1.15.1-cp27-cp27m-win_amd64.whl", hash = "sha256:e00b098126fd45523dd056d2efba6c5a63b71ffe9f2bbe1a4fe1716e1d0c331e"},
{file = "cffi-1.15.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:d61f4695e6c866a23a21acab0509af1cdfd2c013cf256bbf5b6b5e2695827162"},
{file = "cffi-1.15.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:ed9cb427ba5504c1dc15ede7d516b84757c3e3d7868ccc85121d9310d27eed0b"},
{file = "cffi-1.15.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:39d39875251ca8f612b6f33e6b1195af86d1b3e60086068be9cc053aa4376e21"},
{file = "cffi-1.15.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:285d29981935eb726a4399badae8f0ffdff4f5050eaa6d0cfc3f64b857b77185"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3eb6971dcff08619f8d91607cfc726518b6fa2a9eba42856be181c6d0d9515fd"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:21157295583fe8943475029ed5abdcf71eb3911894724e360acff1d61c1d54bc"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5635bd9cb9731e6d4a1132a498dd34f764034a8ce60cef4f5319c0541159392f"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2012c72d854c2d03e45d06ae57f40d78e5770d252f195b93f581acf3ba44496e"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dd86c085fae2efd48ac91dd7ccffcfc0571387fe1193d33b6394db7ef31fe2a4"},
{file = "cffi-1.15.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:fa6693661a4c91757f4412306191b6dc88c1703f780c8234035eac011922bc01"},
{file = "cffi-1.15.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:59c0b02d0a6c384d453fece7566d1c7e6b7bae4fc5874ef2ef46d56776d61c9e"},
{file = "cffi-1.15.1-cp310-cp310-win32.whl", hash = "sha256:cba9d6b9a7d64d4bd46167096fc9d2f835e25d7e4c121fb2ddfc6528fb0413b2"},
{file = "cffi-1.15.1-cp310-cp310-win_amd64.whl", hash = "sha256:ce4bcc037df4fc5e3d184794f27bdaab018943698f4ca31630bc7f84a7b69c6d"},
{file = "cffi-1.15.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3d08afd128ddaa624a48cf2b859afef385b720bb4b43df214f85616922e6a5ac"},
{file = "cffi-1.15.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:3799aecf2e17cf585d977b780ce79ff0dc9b78d799fc694221ce814c2c19db83"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a591fe9e525846e4d154205572a029f653ada1a78b93697f3b5a8f1f2bc055b9"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3548db281cd7d2561c9ad9984681c95f7b0e38881201e157833a2342c30d5e8c"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:91fc98adde3d7881af9b59ed0294046f3806221863722ba7d8d120c575314325"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:94411f22c3985acaec6f83c6df553f2dbe17b698cc7f8ae751ff2237d96b9e3c"},
{file = "cffi-1.15.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:03425bdae262c76aad70202debd780501fabeaca237cdfddc008987c0e0f59ef"},
{file = "cffi-1.15.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:cc4d65aeeaa04136a12677d3dd0b1c0c94dc43abac5860ab33cceb42b801c1e8"},
{file = "cffi-1.15.1-cp311-cp311-win32.whl", hash = "sha256:a0f100c8912c114ff53e1202d0078b425bee3649ae34d7b070e9697f93c5d52d"},
{file = "cffi-1.15.1-cp311-cp311-win_amd64.whl", hash = "sha256:04ed324bda3cda42b9b695d51bb7d54b680b9719cfab04227cdd1e04e5de3104"},
{file = "cffi-1.15.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:50a74364d85fd319352182ef59c5c790484a336f6db772c1a9231f1c3ed0cbd7"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e263d77ee3dd201c3a142934a086a4450861778baaeeb45db4591ef65550b0a6"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:cec7d9412a9102bdc577382c3929b337320c4c4c4849f2c5cdd14d7368c5562d"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4289fc34b2f5316fbb762d75362931e351941fa95fa18789191b33fc4cf9504a"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:173379135477dc8cac4bc58f45db08ab45d228b3363adb7af79436135d028405"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:6975a3fac6bc83c4a65c9f9fcab9e47019a11d3d2cf7f3c0d03431bf145a941e"},
{file = "cffi-1.15.1-cp36-cp36m-win32.whl", hash = "sha256:2470043b93ff09bf8fb1d46d1cb756ce6132c54826661a32d4e4d132e1977adf"},
{file = "cffi-1.15.1-cp36-cp36m-win_amd64.whl", hash = "sha256:30d78fbc8ebf9c92c9b7823ee18eb92f2e6ef79b45ac84db507f52fbe3ec4497"},
{file = "cffi-1.15.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:198caafb44239b60e252492445da556afafc7d1e3ab7a1fb3f0584ef6d742375"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5ef34d190326c3b1f822a5b7a45f6c4535e2f47ed06fec77d3d799c450b2651e"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8102eaf27e1e448db915d08afa8b41d6c7ca7a04b7d73af6514df10a3e74bd82"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5df2768244d19ab7f60546d0c7c63ce1581f7af8b5de3eb3004b9b6fc8a9f84b"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a8c4917bd7ad33e8eb21e9a5bbba979b49d9a97acb3a803092cbc1133e20343c"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0e2642fe3142e4cc4af0799748233ad6da94c62a8bec3a6648bf8ee68b1c7426"},
{file = "cffi-1.15.1-cp37-cp37m-win32.whl", hash = "sha256:e229a521186c75c8ad9490854fd8bbdd9a0c9aa3a524326b55be83b54d4e0ad9"},
{file = "cffi-1.15.1-cp37-cp37m-win_amd64.whl", hash = "sha256:a0b71b1b8fbf2b96e41c4d990244165e2c9be83d54962a9a1d118fd8657d2045"},
{file = "cffi-1.15.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:320dab6e7cb2eacdf0e658569d2575c4dad258c0fcc794f46215e1e39f90f2c3"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1e74c6b51a9ed6589199c787bf5f9875612ca4a8a0785fb2d4a84429badaf22a"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a5c84c68147988265e60416b57fc83425a78058853509c1b0629c180094904a5"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3b926aa83d1edb5aa5b427b4053dc420ec295a08e40911296b9eb1b6170f6cca"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:87c450779d0914f2861b8526e035c5e6da0a3199d8f1add1a665e1cbc6fc6d02"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4f2c9f67e9821cad2e5f480bc8d83b8742896f1242dba247911072d4fa94c192"},
{file = "cffi-1.15.1-cp38-cp38-win32.whl", hash = "sha256:8b7ee99e510d7b66cdb6c593f21c043c248537a32e0bedf02e01e9553a172314"},
{file = "cffi-1.15.1-cp38-cp38-win_amd64.whl", hash = "sha256:00a9ed42e88df81ffae7a8ab6d9356b371399b91dbdf0c3cb1e84c03a13aceb5"},
{file = "cffi-1.15.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:54a2db7b78338edd780e7ef7f9f6c442500fb0d41a5a4ea24fff1c929d5af585"},
{file = "cffi-1.15.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:fcd131dd944808b5bdb38e6f5b53013c5aa4f334c5cad0c72742f6eba4b73db0"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7473e861101c9e72452f9bf8acb984947aa1661a7704553a9f6e4baa5ba64415"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6c9a799e985904922a4d207a94eae35c78ebae90e128f0c4e521ce339396be9d"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3bcde07039e586f91b45c88f8583ea7cf7a0770df3a1649627bf598332cb6984"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:33ab79603146aace82c2427da5ca6e58f2b3f2fb5da893ceac0c42218a40be35"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d598b938678ebf3c67377cdd45e09d431369c3b1a5b331058c338e201f12b27"},
{file = "cffi-1.15.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:db0fbb9c62743ce59a9ff687eb5f4afbe77e5e8403d6697f7446e5f609976f76"},
{file = "cffi-1.15.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:98d85c6a2bef81588d9227dde12db8a7f47f639f4a17c9ae08e773aa9c697bf3"},
{file = "cffi-1.15.1-cp39-cp39-win32.whl", hash = "sha256:40f4774f5a9d4f5e344f31a32b5096977b5d48560c5592e2f3d2c4374bd543ee"},
{file = "cffi-1.15.1-cp39-cp39-win_amd64.whl", hash = "sha256:70df4e3b545a17496c9b3f41f5115e69a4f2e77e94e1d2a8e1070bc0c38c8a3c"},
{file = "cffi-1.15.1.tar.gz", hash = "sha256:d400bfb9a37b1351253cb402671cea7e89bdecc294e8016a707f6d1d8ac934f9"},
]
charset-normalizer = [
{file = "charset-normalizer-2.1.0.tar.gz", hash = "sha256:575e708016ff3a5e3681541cb9d79312c416835686d054a23accb873b254f413"},
{file = "charset_normalizer-2.1.0-py3-none-any.whl", hash = "sha256:5189b6f22b01957427f35b6a08d9a0bc45b46d3788ef5a92e978433c7a35f8a5"},
]
click = [
{file = "click-8.1.3-py3-none-any.whl", hash = "sha256:bb4d8133cb15a609f44e8213d9b391b0809795062913b383c62be0ee95b1db48"},
{file = "click-8.1.3.tar.gz", hash = "sha256:7682dc8afb30297001674575ea00d1814d808d6a36af415a82bd481d37ba7b8e"},
]
cloudpickle = [
{file = "cloudpickle-2.1.0-py3-none-any.whl", hash = "sha256:b5c434f75c34624eedad3a14f2be5ac3b5384774d5b0e3caf905c21479e6c4b1"},
{file = "cloudpickle-2.1.0.tar.gz", hash = "sha256:bb233e876a58491d9590a676f93c7a5473a08f747d5ab9df7f9ce564b3e7938e"},
]
colorama = [
{file = "colorama-0.4.5-py2.py3-none-any.whl", hash = "sha256:854bf444933e37f5824ae7bfc1e98d5bce2ebe4160d46b5edf346a89358e99da"},
{file = "colorama-0.4.5.tar.gz", hash = "sha256:e6c6b4334fc50988a639d9b98aa429a0b57da6e17b9a44f0451f930b6967b7a4"},
]
commonmark = [
{file = "commonmark-0.9.1-py2.py3-none-any.whl", hash = "sha256:da2f38c92590f83de410ba1a3cbceafbc74fee9def35f9251ba9a971d6d66fd9"},
{file = "commonmark-0.9.1.tar.gz", hash = "sha256:452f9dc859be7f06631ddcb328b6919c67984aca654e5fefb3914d54691aed60"},
]
cryptography = [
{file = "cryptography-37.0.4-cp36-abi3-macosx_10_10_universal2.whl", hash = "sha256:549153378611c0cca1042f20fd9c5030d37a72f634c9326e225c9f666d472884"},
{file = "cryptography-37.0.4-cp36-abi3-macosx_10_10_x86_64.whl", hash = "sha256:a958c52505c8adf0d3822703078580d2c0456dd1d27fabfb6f76fe63d2971cd6"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:f721d1885ecae9078c3f6bbe8a88bc0786b6e749bf32ccec1ef2b18929a05046"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:3d41b965b3380f10e4611dbae366f6dc3cefc7c9ac4e8842a806b9672ae9add5"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:80f49023dd13ba35f7c34072fa17f604d2f19bf0989f292cedf7ab5770b87a0b"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f2dcb0b3b63afb6df7fd94ec6fbddac81b5492513f7b0436210d390c14d46ee8"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_24_x86_64.whl", hash = "sha256:b7f8dd0d4c1f21759695c05a5ec8536c12f31611541f8904083f3dc582604280"},
{file = "cryptography-37.0.4-cp36-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:30788e070800fec9bbcf9faa71ea6d8068f5136f60029759fd8c3efec3c9dcb3"},
{file = "cryptography-37.0.4-cp36-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:190f82f3e87033821828f60787cfa42bff98404483577b591429ed99bed39d59"},
{file = "cryptography-37.0.4-cp36-abi3-win32.whl", hash = "sha256:b62439d7cd1222f3da897e9a9fe53bbf5c104fff4d60893ad1355d4c14a24157"},
{file = "cryptography-37.0.4-cp36-abi3-win_amd64.whl", hash = "sha256:f7a6de3e98771e183645181b3627e2563dcde3ce94a9e42a3f427d2255190327"},
{file = "cryptography-37.0.4-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6bc95ed67b6741b2607298f9ea4932ff157e570ef456ef7ff0ef4884a134cc4b"},
{file = "cryptography-37.0.4-pp37-pypy37_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:f8c0a6e9e1dd3eb0414ba320f85da6b0dcbd543126e30fcc546e7372a7fbf3b9"},
{file = "cryptography-37.0.4-pp38-pypy38_pp73-macosx_10_10_x86_64.whl", hash = "sha256:e007f052ed10cc316df59bc90fbb7ff7950d7e2919c9757fd42a2b8ecf8a5f67"},
{file = "cryptography-37.0.4-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7bc997818309f56c0038a33b8da5c0bfbb3f1f067f315f9abd6fc07ad359398d"},
{file = "cryptography-37.0.4-pp38-pypy38_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:d204833f3c8a33bbe11eda63a54b1aad7aa7456ed769a982f21ec599ba5fa282"},
{file = "cryptography-37.0.4-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:75976c217f10d48a8b5a8de3d70c454c249e4b91851f6838a4e48b8f41eb71aa"},
{file = "cryptography-37.0.4-pp39-pypy39_pp73-macosx_10_10_x86_64.whl", hash = "sha256:7099a8d55cd49b737ffc99c17de504f2257e3787e02abe6d1a6d136574873441"},
{file = "cryptography-37.0.4-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2be53f9f5505673eeda5f2736bea736c40f051a739bfae2f92d18aed1eb54596"},
{file = "cryptography-37.0.4-pp39-pypy39_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:91ce48d35f4e3d3f1d83e29ef4a9267246e6a3be51864a5b7d2247d5086fa99a"},
{file = "cryptography-37.0.4-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:4c590ec31550a724ef893c50f9a97a0c14e9c851c85621c5650d699a7b88f7ab"},
{file = "cryptography-37.0.4.tar.gz", hash = "sha256:63f9c17c0e2474ccbebc9302ce2f07b55b3b3fcb211ded18a42d5764f5c10a82"},
]
cycler = [
{file = "cycler-0.11.0-py3-none-any.whl", hash = "sha256:3a27e95f763a428a739d2add979fa7494c912a32c17c4c38c4d5f082cad165a3"},
{file = "cycler-0.11.0.tar.gz", hash = "sha256:9c87405839a19696e837b3b818fed3f5f69f16f1eec1a1ad77e043dcea9c772f"},
]
cython = [
{file = "Cython-0.29.32-cp27-cp27m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:39afb4679b8c6bf7ccb15b24025568f4f9b4d7f9bf3cbd981021f542acecd75b"},
{file = "Cython-0.29.32-cp27-cp27m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:dbee03b8d42dca924e6aa057b836a064c769ddfd2a4c2919e65da2c8a362d528"},
{file = "Cython-0.29.32-cp27-cp27mu-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:5ba622326f2862f9c1f99ca8d47ade49871241920a352c917e16861e25b0e5c3"},
{file = "Cython-0.29.32-cp27-cp27mu-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:e6ffa08aa1c111a1ebcbd1cf4afaaec120bc0bbdec3f2545f8bb7d3e8e77a1cd"},
{file = "Cython-0.29.32-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:97335b2cd4acebf30d14e2855d882de83ad838491a09be2011745579ac975833"},
{file = "Cython-0.29.32-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:06be83490c906b6429b4389e13487a26254ccaad2eef6f3d4ee21d8d3a4aaa2b"},
{file = "Cython-0.29.32-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:eefd2b9a5f38ded8d859fe96cc28d7d06e098dc3f677e7adbafda4dcdd4a461c"},
{file = "Cython-0.29.32-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:5514f3b4122cb22317122a48e175a7194e18e1803ca555c4c959d7dfe68eaf98"},
{file = "Cython-0.29.32-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:656dc5ff1d269de4d11ee8542f2ffd15ab466c447c1f10e5b8aba6f561967276"},
{file = "Cython-0.29.32-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:cdf10af3e2e3279dc09fdc5f95deaa624850a53913f30350ceee824dc14fc1a6"},
{file = "Cython-0.29.32-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:3875c2b2ea752816a4d7ae59d45bb546e7c4c79093c83e3ba7f4d9051dd02928"},
{file = "Cython-0.29.32-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:79e3bab19cf1b021b613567c22eb18b76c0c547b9bc3903881a07bfd9e7e64cf"},
{file = "Cython-0.29.32-cp35-cp35m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b0595aee62809ba353cebc5c7978e0e443760c3e882e2c7672c73ffe46383673"},
{file = "Cython-0.29.32-cp35-cp35m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:0ea8267fc373a2c5064ad77d8ff7bf0ea8b88f7407098ff51829381f8ec1d5d9"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:c8e8025f496b5acb6ba95da2fb3e9dacffc97d9a92711aacfdd42f9c5927e094"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:afbce249133a830f121b917f8c9404a44f2950e0e4f5d1e68f043da4c2e9f457"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:513e9707407608ac0d306c8b09d55a28be23ea4152cbd356ceaec0f32ef08d65"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e83228e0994497900af954adcac27f64c9a57cd70a9ec768ab0cb2c01fd15cf1"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:ea1dcc07bfb37367b639415333cfbfe4a93c3be340edf1db10964bc27d42ed64"},
{file = "Cython-0.29.32-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:8669cadeb26d9a58a5e6b8ce34d2c8986cc3b5c0bfa77eda6ceb471596cb2ec3"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:ed087eeb88a8cf96c60fb76c5c3b5fb87188adee5e179f89ec9ad9a43c0c54b3"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:3f85eb2343d20d91a4ea9cf14e5748092b376a64b7e07fc224e85b2753e9070b"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:63b79d9e1f7c4d1f498ab1322156a0d7dc1b6004bf981a8abda3f66800e140cd"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e1958e0227a4a6a2c06fd6e35b7469de50adf174102454db397cec6e1403cce3"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:856d2fec682b3f31583719cb6925c6cdbb9aa30f03122bcc45c65c8b6f515754"},
{file = "Cython-0.29.32-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:479690d2892ca56d34812fe6ab8f58e4b2e0129140f3d94518f15993c40553da"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:67fdd2f652f8d4840042e2d2d91e15636ba2bcdcd92e7e5ffbc68e6ef633a754"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:4a4b03ab483271f69221c3210f7cde0dcc456749ecf8243b95bc7a701e5677e0"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:40eff7aa26e91cf108fd740ffd4daf49f39b2fdffadabc7292b4b7dc5df879f0"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:0bbc27abdf6aebfa1bce34cd92bd403070356f28b0ecb3198ff8a182791d58b9"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:cddc47ec746a08603037731f5d10aebf770ced08666100bd2cdcaf06a85d4d1b"},
{file = "Cython-0.29.32-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:eca3065a1279456e81c615211d025ea11bfe4e19f0c5650b859868ca04b3fcbd"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:d968ffc403d92addf20b68924d95428d523436adfd25cf505d427ed7ba3bee8b"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:f3fd44cc362eee8ae569025f070d56208908916794b6ab21e139cea56470a2b3"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:b6da3063c5c476f5311fd76854abae6c315f1513ef7d7904deed2e774623bbb9"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:061e25151c38f2361bc790d3bcf7f9d9828a0b6a4d5afa56fbed3bd33fb2373a"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:f9944013588a3543fca795fffb0a070a31a243aa4f2d212f118aa95e69485831"},
{file = "Cython-0.29.32-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:07d173d3289415bb496e72cb0ddd609961be08fe2968c39094d5712ffb78672b"},
{file = "Cython-0.29.32-py2.py3-none-any.whl", hash = "sha256:eeb475eb6f0ccf6c039035eb4f0f928eb53ead88777e0a760eccb140ad90930b"},
{file = "Cython-0.29.32.tar.gz", hash = "sha256:8733cf4758b79304f2a4e39ebfac5e92341bce47bcceb26c1254398b2f8c1af7"},
]
debugpy = [
{file = "debugpy-1.6.3-cp310-cp310-macosx_10_15_x86_64.whl", hash = "sha256:c4b2bd5c245eeb49824bf7e539f95fb17f9a756186e51c3e513e32999d8846f3"},
{file = "debugpy-1.6.3-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:b8deaeb779699350deeed835322730a3efec170b88927debc9ba07a1a38e2585"},
{file = "debugpy-1.6.3-cp310-cp310-win32.whl", hash = "sha256:fc233a0160f3b117b20216f1169e7211b83235e3cd6749bcdd8dbb72177030c7"},
{file = "debugpy-1.6.3-cp310-cp310-win_amd64.whl", hash = "sha256:dda8652520eae3945833e061cbe2993ad94a0b545aebd62e4e6b80ee616c76b2"},
{file = "debugpy-1.6.3-cp37-cp37m-macosx_10_15_x86_64.whl", hash = "sha256:d5c814596a170a0a58fa6fad74947e30bfd7e192a5d2d7bd6a12156c2899e13a"},
{file = "debugpy-1.6.3-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:c4cd6f37e3c168080d61d698390dfe2cd9e74ebf80b448069822a15dadcda57d"},
{file = "debugpy-1.6.3-cp37-cp37m-win32.whl", hash = "sha256:3c9f985944a30cfc9ae4306ac6a27b9c31dba72ca943214dad4a0ab3840f6161"},
{file = "debugpy-1.6.3-cp37-cp37m-win_amd64.whl", hash = "sha256:5ad571a36cec137ae6ed951d0ff75b5e092e9af6683da084753231150cbc5b25"},
{file = "debugpy-1.6.3-cp38-cp38-macosx_10_15_x86_64.whl", hash = "sha256:adcfea5ea06d55d505375995e150c06445e2b20cd12885bcae566148c076636b"},
{file = "debugpy-1.6.3-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:daadab4403427abd090eccb38d8901afd8b393e01fd243048fab3f1d7132abb4"},
{file = "debugpy-1.6.3-cp38-cp38-win32.whl", hash = "sha256:6efc30325b68e451118b795eff6fe8488253ca3958251d5158106d9c87581bc6"},
{file = "debugpy-1.6.3-cp38-cp38-win_amd64.whl", hash = "sha256:86d784b72c5411c833af1cd45b83d80c252b77c3bfdb43db17c441d772f4c734"},
{file = "debugpy-1.6.3-cp39-cp39-macosx_10_15_x86_64.whl", hash = "sha256:4e255982552b0edfe3a6264438dbd62d404baa6556a81a88f9420d3ed79b06ae"},
{file = "debugpy-1.6.3-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:cca23cb6161ac89698d629d892520327dd1be9321c0960e610bbcb807232b45d"},
{file = "debugpy-1.6.3-cp39-cp39-win32.whl", hash = "sha256:7c302095a81be0d5c19f6529b600bac971440db3e226dce85347cc27e6a61908"},
{file = "debugpy-1.6.3-cp39-cp39-win_amd64.whl", hash = "sha256:34d2cdd3a7c87302ba5322b86e79c32c2115be396f3f09ca13306d8a04fe0f16"},
{file = "debugpy-1.6.3-py2.py3-none-any.whl", hash = "sha256:84c39940a0cac410bf6aa4db00ba174f973eef521fbe9dd058e26bcabad89c4f"},
{file = "debugpy-1.6.3.zip", hash = "sha256:e8922090514a890eec99cfb991bab872dd2e353ebb793164d5f01c362b9a40bf"},
]
decorator = [
{file = "decorator-5.1.1-py3-none-any.whl", hash = "sha256:b8c3f85900b9dc423225913c5aace94729fe1fa9763b38939a95226f02d37186"},
{file = "decorator-5.1.1.tar.gz", hash = "sha256:637996211036b6385ef91435e4fae22989472f9d571faba8927ba8253acbc330"},
]
defusedxml = [
{file = "defusedxml-0.7.1-py2.py3-none-any.whl", hash = "sha256:a352e7e428770286cc899e2542b6cdaedb2b4953ff269a210103ec58f6198a61"},
{file = "defusedxml-0.7.1.tar.gz", hash = "sha256:1bb3032db185915b62d7c6209c5a8792be6a32ab2fedacc84e01b52c51aa3e69"},
]
dill = [
{file = "dill-0.3.5.1-py2.py3-none-any.whl", hash = "sha256:33501d03270bbe410c72639b350e941882a8b0fd55357580fbc873fba0c59302"},
{file = "dill-0.3.5.1.tar.gz", hash = "sha256:d75e41f3eff1eee599d738e76ba8f4ad98ea229db8b085318aa2b3333a208c86"},
]
docutils = [
{file = "docutils-0.17.1-py2.py3-none-any.whl", hash = "sha256:cf316c8370a737a022b72b56874f6602acf974a37a9fba42ec2876387549fc61"},
{file = "docutils-0.17.1.tar.gz", hash = "sha256:686577d2e4c32380bb50cbb22f575ed742d58168cee37e99117a854bcd88f125"},
]
econml = [
{file = "econml-0.12.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:c8c6d2c34f98eb506d4ac48f705f6d959ccdfdeada9a0cc555afcba3233a4d0a"},
{file = "econml-0.12.0-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:c9493076c801110ebd9e15a7d162e768d33445367272a8b4f88c70898efa7a8d"},
{file = "econml-0.12.0-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:5c5c278771bf405ddf0790fc72149af9d05aec095e00c5ad47e08ab9e8da47d9"},
{file = "econml-0.12.0-cp36-cp36m-win32.whl", hash = "sha256:0477610ae12923a98bc23bdff4f9f5be12c692704d5ec9ebea8f5f4c3010b9ff"},
{file = "econml-0.12.0-cp36-cp36m-win_amd64.whl", hash = "sha256:2cea261af4315d3ec4cb6a91ab6b4aa664b3d3c6f8d235c8778351fb259d69c2"},
{file = "econml-0.12.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:3962e0f497c58724269ebf74a295dd8e2f2494e6cff00f3fc582b13b73d9dda8"},
{file = "econml-0.12.0-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:70ce227be0b91fd463d7a48ab4b58aa562793efa779f74254a38a03c42bf6d9e"},
{file = "econml-0.12.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:80f78f25b259991872e66665ca64cf108fc405b6d6cf20ebbff30fae392b3258"},
{file = "econml-0.12.0-cp37-cp37m-win32.whl", hash = "sha256:1e780e2e4dbabcff65c2ed488b8751203a6f7bfe0c9c8fd12a0a930b273dbafe"},
{file = "econml-0.12.0-cp37-cp37m-win_amd64.whl", hash = "sha256:7248c62daa5b8662e1368fe3926a9a7e5ab88c1ee29ae492b7362a9124151071"},
{file = "econml-0.12.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:aacdb2c96ea2377f7c4f14fbf3fb6ded5edfccf17decc706b5147564129938b4"},
{file = "econml-0.12.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:f87a21636872e32e627093711c65d1333be590f0aef0fa2477717442ada57dfb"},
{file = "econml-0.12.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:1763951ca85b3f34072d0f44e31cebb18e91a281fc94212db9b08124f0efe206"},
{file = "econml-0.12.0-cp38-cp38-win32.whl", hash = "sha256:2e4339e0dcbbc280ac95c4ac446026853f77cd22de9a4c7421b933e717ce8626"},
{file = "econml-0.12.0-cp38-cp38-win_amd64.whl", hash = "sha256:473377685874bf9725a4c6fdd571a858d942abd8a2f4664050da426322d322c7"},
{file = "econml-0.12.0.tar.gz", hash = "sha256:4a8196d6c785e9e0e42b2fa35dd058eb37166c75dcdb90ada953feea6fbb854b"},
]
entrypoints = [
{file = "entrypoints-0.4-py3-none-any.whl", hash = "sha256:f174b5ff827504fd3cd97cc3f8649f3693f51538c7e4bdf3ef002c8429d42f9f"},
{file = "entrypoints-0.4.tar.gz", hash = "sha256:b706eddaa9218a19ebcd67b56818f05bb27589b1ca9e8d797b74affad4ccacd4"},
]
executing = [
{file = "executing-0.10.0-py2.py3-none-any.whl", hash = "sha256:9c745f80cda11eb22b62cbecf21156491a794eb56ab06f9d286a44e62822b24e"},
{file = "executing-0.10.0.tar.gz", hash = "sha256:d1cd87c2e371e9966261410c5b3769d6df2f9e4a79a83eebd2662dd3388f9833"},
]
fastjsonschema = [
{file = "fastjsonschema-2.16.1-py3-none-any.whl", hash = "sha256:2f7158c4de792555753d6c2277d6a2af2d406dfd97aeca21d17173561ede4fe6"},
{file = "fastjsonschema-2.16.1.tar.gz", hash = "sha256:d6fa3ffbe719768d70e298b9fb847484e2bdfdb7241ed052b8d57a9294a8c334"},
]
flake8 = [
{file = "flake8-4.0.1-py2.py3-none-any.whl", hash = "sha256:479b1304f72536a55948cb40a32dce8bb0ffe3501e26eaf292c7e60eb5e0428d"},
{file = "flake8-4.0.1.tar.gz", hash = "sha256:806e034dda44114815e23c16ef92f95c91e4c71100ff52813adf7132a6ad870d"},
]
flaky = [
{file = "flaky-3.7.0-py2.py3-none-any.whl", hash = "sha256:d6eda73cab5ae7364504b7c44670f70abed9e75f77dd116352f662817592ec9c"},
{file = "flaky-3.7.0.tar.gz", hash = "sha256:3ad100780721a1911f57a165809b7ea265a7863305acb66708220820caf8aa0d"},
]
flatbuffers = [
{file = "flatbuffers-1.12-py2.py3-none-any.whl", hash = "sha256:9e9ef47fa92625c4721036e7c4124182668dc6021d9e7c73704edd395648deb9"},
{file = "flatbuffers-1.12.tar.gz", hash = "sha256:63bb9a722d5e373701913e226135b28a6f6ac200d5cc7b4d919fa38d73b44610"},
]
fonttools = [
{file = "fonttools-4.36.0-py3-none-any.whl", hash = "sha256:cb91ef8d5a435d90aeb3ab814b2548c6b515df5bc13b4c5adaa23778f2f79823"},
{file = "fonttools-4.36.0.zip", hash = "sha256:e637d2fe06bddabbfc488e02ef32d04d561e3c71e9ba11abc7782ea753ceb218"},
]
future = [
{file = "future-0.18.2.tar.gz", hash = "sha256:b1bead90b70cf6ec3f0710ae53a525360fa360d306a86583adc6bf83a4db537d"},
]
gast = [
{file = "gast-0.4.0-py3-none-any.whl", hash = "sha256:b7adcdd5adbebf1adf17378da5ba3f543684dbec47b1cda1f3997e573cd542c4"},
{file = "gast-0.4.0.tar.gz", hash = "sha256:40feb7b8b8434785585ab224d1568b857edb18297e5a3047f1ba012bc83b42c1"},
]
google-auth = [
{file = "google-auth-2.10.0.tar.gz", hash = "sha256:7904dbd44b745c7323fef29565adee2fe7ff48473e2d94443aced40b0404a395"},
{file = "google_auth-2.10.0-py2.py3-none-any.whl", hash = "sha256:1deba4a54f95ef67b4139eaf5c20eaa7047215eec9f6a2344599b8596db8863b"},
]
google-auth-oauthlib = [
{file = "google-auth-oauthlib-0.4.6.tar.gz", hash = "sha256:a90a072f6993f2c327067bf65270046384cda5a8ecb20b94ea9a687f1f233a7a"},
{file = "google_auth_oauthlib-0.4.6-py2.py3-none-any.whl", hash = "sha256:3f2a6e802eebbb6fb736a370fbf3b055edcb6b52878bf2f26330b5e041316c73"},
]
google-pasta = [
{file = "google-pasta-0.2.0.tar.gz", hash = "sha256:c9f2c8dfc8f96d0d5808299920721be30c9eec37f2389f28904f454565c8a16e"},
{file = "google_pasta-0.2.0-py2-none-any.whl", hash = "sha256:4612951da876b1a10fe3960d7226f0c7682cf901e16ac06e473b267a5afa8954"},
{file = "google_pasta-0.2.0-py3-none-any.whl", hash = "sha256:b32482794a366b5366a32c92a9a9201b107821889935a02b3e51f6b432ea84ed"},
]
graphviz = [
{file = "graphviz-0.20.1-py3-none-any.whl", hash = "sha256:587c58a223b51611c0cf461132da386edd896a029524ca61a1462b880bf97977"},
{file = "graphviz-0.20.1.zip", hash = "sha256:8c58f14adaa3b947daf26c19bc1e98c4e0702cdc31cf99153e6f06904d492bf8"},
]
grpcio = [
{file = "grpcio-1.48.0-cp310-cp310-linux_armv7l.whl", hash = "sha256:4a049a032144641ed5d073535c0dc69eb6029187cc729a66946c86dcc8eec3a1"},
{file = "grpcio-1.48.0-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:f8bc76f5cd95f5476e5285fe5d3704a9332586a569fbbccef551b0b6f7a270f9"},
{file = "grpcio-1.48.0-cp310-cp310-manylinux_2_17_aarch64.whl", hash = "sha256:448d397fe88e9fef8170f019b86abdc4d554ae311aaf4dbff1532fde227d3308"},
{file = "grpcio-1.48.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8f9b6b6f7c83869d2316c5d13f953381881a16741275a34ec5ed5762f11b206e"},
{file = "grpcio-1.48.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5bd8541c4b6b43c9024496d30b4a12346325d3a17a1f3c80ad8924caed1e35c3"},
{file = "grpcio-1.48.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:877d33aeba05ae0b9e81761a694914ed33613f655c35f6bbcf4ebbcb984e0167"},
{file = "grpcio-1.48.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:cd01a8201fd8ab2ce496f7e65975da1f1e629eac8eea84ead0fd77e32e4350cd"},
{file = "grpcio-1.48.0-cp310-cp310-win32.whl", hash = "sha256:0388da923dff58ba7f711233e41c2b749b5817b8e0f137a107672d9c15a1009c"},
{file = "grpcio-1.48.0-cp310-cp310-win_amd64.whl", hash = "sha256:8dcffdb8921fd88857ae350fd579277a5f9315351e89ed9094ef28927a46d40d"},
{file = "grpcio-1.48.0-cp36-cp36m-linux_armv7l.whl", hash = "sha256:2138c50331232f56178c2b36dcfa6ad67aad705fe410955f3b2a53d722191b89"},
{file = "grpcio-1.48.0-cp36-cp36m-macosx_10_10_x86_64.whl", hash = "sha256:af2d80f142da2a6af45204a5ca2374e2747af07a99de54a1164111e169a761ff"},
{file = "grpcio-1.48.0-cp36-cp36m-manylinux_2_17_aarch64.whl", hash = "sha256:59284bd4cdf47c147c26d91aca693765318d524328f6ece2a1a0b85a12a362af"},
{file = "grpcio-1.48.0-cp36-cp36m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cc3ebfe356c0c6750379cd194bf2b7e5d1d2f29db1832358f05a73e9290db98c"},
{file = "grpcio-1.48.0-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dc2619a31339e1c53731f54761f1a2cb865d3421f690e00ef3e92f90d2a0c5ae"},
{file = "grpcio-1.48.0-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:7df637405de328a54c1c8c08a3206f974c7a577730f90644af4c3400b7bfde2d"},
{file = "grpcio-1.48.0-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:9e73b95969a579798bfbeb85d376695cce5172357fb52e450467ceb8e7365152"},
{file = "grpcio-1.48.0-cp36-cp36m-win32.whl", hash = "sha256:059e9d58b5aba7fb9eabe3a4d2ac49e1dcbc2b54b0f166f6475e40b7f4435343"},
{file = "grpcio-1.48.0-cp36-cp36m-win_amd64.whl", hash = "sha256:7cebcf645170f0c82ef71769544f9ac4515993a4d367f5900aba2eb4ecd2a32f"},
{file = "grpcio-1.48.0-cp37-cp37m-linux_armv7l.whl", hash = "sha256:8af3a8845df35b838104d6fb1ae7f4969d248cf037fa2794916d31e917346f72"},
{file = "grpcio-1.48.0-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:a1ef40975ec9ced6c17ce7fbec9825823da782fa606f0b92392646ff3886f198"},
{file = "grpcio-1.48.0-cp37-cp37m-manylinux_2_17_aarch64.whl", hash = "sha256:7cccbf6db31f2a78e1909047ff69620f94a4e6e53251858e9502fbbff5714b48"},
{file = "grpcio-1.48.0-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1f3f142579f58def64c0850f0bb0eb1b425ae885f5669dda5b73ade64ad2b753"},
{file = "grpcio-1.48.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:656c6f6f7b815bca3054780b8cdfa1e4e37cd36c887a48558d00c2cf85f31697"},
{file = "grpcio-1.48.0-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:cba4538e8a2ef123ea570e7b1d62162e158963c2471e35d79eb9690c971a10c0"},
{file = "grpcio-1.48.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:9daa67820fafceec6194ed1686c1783816e62d6756ff301ba93e682948836846"},
{file = "grpcio-1.48.0-cp37-cp37m-win32.whl", hash = "sha256:7ec264a7fb413e0c804a7a48a6f7d7212742955a60724c44d793da35a8f30873"},
{file = "grpcio-1.48.0-cp37-cp37m-win_amd64.whl", hash = "sha256:a2b1b33b92359388b8164807313dcbb3317101b038a5d54342982560329d958f"},
{file = "grpcio-1.48.0-cp38-cp38-linux_armv7l.whl", hash = "sha256:7b820696a5ce7b98f459f234698cb323f89b355373789188efa126d7f47a2a92"},
{file = "grpcio-1.48.0-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:e4dfae66ebc165c46c5b7048eb554472ee72fbaab2c2c2da7f9b1621c81e077c"},
{file = "grpcio-1.48.0-cp38-cp38-manylinux_2_17_aarch64.whl", hash = "sha256:f7115038edce33b494e0138b0bd31a2eb6595d45e2eed23be46bc32886feb741"},
{file = "grpcio-1.48.0-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b4e996282238943ca114628255be61980e38b25f73a08ae2ffd02b63eaf70d3a"},
{file = "grpcio-1.48.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:13dad31f5155fa555d393511cc8108c41b1b5b54dc4c24c27d4694ddd7a78fad"},
{file = "grpcio-1.48.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:c84b9d90b2641963de98b35bb7a2a51f78119fe5bd00ef27246ba9f4f0835e36"},
{file = "grpcio-1.48.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:41b65166779d7dafac4c98380ac19f690f1c5fe18083a71d370df87b24dd30ff"},
{file = "grpcio-1.48.0-cp38-cp38-win32.whl", hash = "sha256:b890e5f5fbc21cb994894f73ecb2faaa66697d8debcb228a5adb0622b9bec3b2"},
{file = "grpcio-1.48.0-cp38-cp38-win_amd64.whl", hash = "sha256:5fe3af539d2f50891ed93aed3064ffbcc38bf848aa3f7ed1fbedcce139c57302"},
{file = "grpcio-1.48.0-cp39-cp39-linux_armv7l.whl", hash = "sha256:a4ed57f4e3d91259551e6765782b22d9e8b8178fec43ebf8e1b2c392c4ced37b"},
{file = "grpcio-1.48.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:60843d8184e171886dd7a93d6672e2ef0b08dfd4f88da7421c10b46b6e031ac4"},
{file = "grpcio-1.48.0-cp39-cp39-manylinux_2_17_aarch64.whl", hash = "sha256:0ecba22f25ccde2442be7e7dd7fa746905d628f03312b4a0c9961f0d99771f53"},
{file = "grpcio-1.48.0-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:34f5917f0c49a04633dc12d483c8aee6f6d9f69133b700214d3703f72a72f501"},
{file = "grpcio-1.48.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f4c4ad8ad7e2cf3a272cbc96734d56635e6543939022f17e0c4487f7d2a45bf9"},
{file = "grpcio-1.48.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:111fb2f5f4a069f331ae23106145fd16dd4e1112ca223858a922068614dac6d2"},
{file = "grpcio-1.48.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:beb0573daa49889efcfea0a6e995b4f39d481aa1b94e1257617406ef417b56a6"},
{file = "grpcio-1.48.0-cp39-cp39-win32.whl", hash = "sha256:ce70254a082cb767217b2fdee374cc79199d338d46140753438cd6d67c609b2f"},
{file = "grpcio-1.48.0-cp39-cp39-win_amd64.whl", hash = "sha256:ae3fd135666448058fe277d93c10e0f18345fbcbb015c4642de2fa3db6f0c205"},
{file = "grpcio-1.48.0.tar.gz", hash = "sha256:eaf4bb73819863440727195411ab3b5c304f6663625e66f348e91ebe0a039306"},
]
h5py = [
{file = "h5py-3.7.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d77af42cb751ad6cc44f11bae73075a07429a5cf2094dfde2b1e716e059b3911"},
{file = "h5py-3.7.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:63beb8b7b47d0896c50de6efb9a1eaa81dbe211f3767e7dd7db159cea51ba37a"},
{file = "h5py-3.7.0-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:04e2e1e2fc51b8873e972a08d2f89625ef999b1f2d276199011af57bb9fc7851"},
{file = "h5py-3.7.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f73307c876af49aa869ec5df1818e9bb0bdcfcf8a5ba773cc45a4fba5a286a5c"},
{file = "h5py-3.7.0-cp310-cp310-win_amd64.whl", hash = "sha256:f514b24cacdd983e61f8d371edac8c1b780c279d0acb8485639e97339c866073"},
{file = "h5py-3.7.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:43fed4d13743cf02798a9a03a360a88e589d81285e72b83f47d37bb64ed44881"},
{file = "h5py-3.7.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:c038399ce09a58ff8d89ec3e62f00aa7cb82d14f34e24735b920e2a811a3a426"},
{file = "h5py-3.7.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:03d64fb86bb86b978928bad923b64419a23e836499ec6363e305ad28afd9d287"},
{file = "h5py-3.7.0-cp37-cp37m-win_amd64.whl", hash = "sha256:e5b7820b75f9519499d76cc708e27242ccfdd9dfb511d6deb98701961d0445aa"},
{file = "h5py-3.7.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:a9351d729ea754db36d175098361b920573fdad334125f86ac1dd3a083355e20"},
{file = "h5py-3.7.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:6776d896fb90c5938de8acb925e057e2f9f28755f67ec3edcbc8344832616c38"},
{file = "h5py-3.7.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:0a047fddbe6951bce40e9cde63373c838a978c5e05a011a682db9ba6334b8e85"},
{file = "h5py-3.7.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0798a9c0ff45f17d0192e4d7114d734cac9f8b2b2c76dd1d923c4d0923f27bb6"},
{file = "h5py-3.7.0-cp38-cp38-win_amd64.whl", hash = "sha256:0d8de8cb619fc597da7cf8cdcbf3b7ff8c5f6db836568afc7dc16d21f59b2b49"},
{file = "h5py-3.7.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f084bbe816907dfe59006756f8f2d16d352faff2d107f4ffeb1d8de126fc5dc7"},
{file = "h5py-3.7.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:1fcb11a2dc8eb7ddcae08afd8fae02ba10467753a857fa07a404d700a93f3d53"},
{file = "h5py-3.7.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:ed43e2cc4f511756fd664fb45d6b66c3cbed4e3bd0f70e29c37809b2ae013c44"},
{file = "h5py-3.7.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9e7535df5ee3dc3e5d1f408fdfc0b33b46bc9b34db82743c82cd674d8239b9ad"},
{file = "h5py-3.7.0-cp39-cp39-win_amd64.whl", hash = "sha256:9e2ad2aa000f5b1e73b5dfe22f358ca46bf1a2b6ca394d9659874d7fc251731a"},
{file = "h5py-3.7.0.tar.gz", hash = "sha256:3fcf37884383c5da64846ab510190720027dca0768def34dd8dcb659dbe5cbf3"},
]
idna = [
{file = "idna-3.3-py3-none-any.whl", hash = "sha256:84d9dd047ffa80596e0f246e2eab0b391788b0503584e8945f2368256d2735ff"},
{file = "idna-3.3.tar.gz", hash = "sha256:9d643ff0a55b762d5cdb124b8eaa99c66322e2157b69160bc32796e824360e6d"},
]
imagesize = [
{file = "imagesize-1.4.1-py2.py3-none-any.whl", hash = "sha256:0d8d18d08f840c19d0ee7ca1fd82490fdc3729b7ac93f49870406ddde8ef8d8b"},
{file = "imagesize-1.4.1.tar.gz", hash = "sha256:69150444affb9cb0d5cc5a92b3676f0b2fb7cd9ae39e947a5e11a36b4497cd4a"},
]
importlib-metadata = [
{file = "importlib_metadata-4.12.0-py3-none-any.whl", hash = "sha256:7401a975809ea1fdc658c3aa4f78cc2195a0e019c5cbc4c06122884e9ae80c23"},
{file = "importlib_metadata-4.12.0.tar.gz", hash = "sha256:637245b8bab2b6502fcbc752cc4b7a6f6243bb02b31c5c26156ad103d3d45670"},
]
importlib-resources = [
{file = "importlib_resources-5.9.0-py3-none-any.whl", hash = "sha256:f78a8df21a79bcc30cfd400bdc38f314333de7c0fb619763f6b9dabab8268bb7"},
{file = "importlib_resources-5.9.0.tar.gz", hash = "sha256:5481e97fb45af8dcf2f798952625591c58fe599d0735d86b10f54de086a61681"},
]
iniconfig = [
{file = "iniconfig-1.1.1-py2.py3-none-any.whl", hash = "sha256:011e24c64b7f47f6ebd835bb12a743f2fbe9a26d4cecaa7f53bc4f35ee9da8b3"},
{file = "iniconfig-1.1.1.tar.gz", hash = "sha256:bc3af051d7d14b2ee5ef9969666def0cd1a000e121eaea580d4a313df4b37f32"},
]
ipykernel = [
{file = "ipykernel-6.15.1-py3-none-any.whl", hash = "sha256:d8969c5b23b0e453a23166da5a669c954db399789293fcb03fec5cb25367e43c"},
{file = "ipykernel-6.15.1.tar.gz", hash = "sha256:37acc3254caa8a0dafcddddc8dc863a60ad1b46487b68aee361d9a15bda98112"},
]
ipython = [
{file = "ipython-8.4.0-py3-none-any.whl", hash = "sha256:7ca74052a38fa25fe9bedf52da0be7d3fdd2fb027c3b778ea78dfe8c212937d1"},
{file = "ipython-8.4.0.tar.gz", hash = "sha256:f2db3a10254241d9b447232cec8b424847f338d9d36f9a577a6192c332a46abd"},
]
ipython-genutils = [
{file = "ipython_genutils-0.2.0-py2.py3-none-any.whl", hash = "sha256:72dd37233799e619666c9f639a9da83c34013a73e8bbc79a7a6348d93c61fab8"},
{file = "ipython_genutils-0.2.0.tar.gz", hash = "sha256:eb2e116e75ecef9d4d228fdc66af54269afa26ab4463042e33785b887c628ba8"},
]
ipywidgets = [
{file = "ipywidgets-8.0.1-py3-none-any.whl", hash = "sha256:fc0744df3a964ecfd68a6d2debe547fe89db252b8d7bb3db5740aba72edb0e6c"},
{file = "ipywidgets-8.0.1.tar.gz", hash = "sha256:1a296094203309e834f2781a275214d255ac5d266bbfa602f9f6915e1806614c"},
]
isort = [
{file = "isort-5.10.1-py3-none-any.whl", hash = "sha256:6f62d78e2f89b4500b080fe3a81690850cd254227f27f75c3a0c491a1f351ba7"},
{file = "isort-5.10.1.tar.gz", hash = "sha256:e8443a5e7a020e9d7f97f1d7d9cd17c88bcb3bc7e218bf9cf5095fe550be2951"},
]
jedi = [
{file = "jedi-0.18.1-py2.py3-none-any.whl", hash = "sha256:637c9635fcf47945ceb91cd7f320234a7be540ded6f3e99a50cb6febdfd1ba8d"},
{file = "jedi-0.18.1.tar.gz", hash = "sha256:74137626a64a99c8eb6ae5832d99b3bdd7d29a3850fe2aa80a4126b2a7d949ab"},
]
jeepney = [
{file = "jeepney-0.8.0-py3-none-any.whl", hash = "sha256:c0a454ad016ca575060802ee4d590dd912e35c122fa04e70306de3d076cce755"},
{file = "jeepney-0.8.0.tar.gz", hash = "sha256:5efe48d255973902f6badc3ce55e2aa6c5c3b3bc642059ef3a91247bcfcc5806"},
]
jinja2 = [
{file = "Jinja2-3.1.2-py3-none-any.whl", hash = "sha256:6088930bfe239f0e6710546ab9c19c9ef35e29792895fed6e6e31a023a182a61"},
{file = "Jinja2-3.1.2.tar.gz", hash = "sha256:31351a702a408a9e7595a8fc6150fc3f43bb6bf7e319770cbc0db9df9437e852"},
]
joblib = [
{file = "joblib-1.1.0-py2.py3-none-any.whl", hash = "sha256:f21f109b3c7ff9d95f8387f752d0d9c34a02aa2f7060c2135f465da0e5160ff6"},
{file = "joblib-1.1.0.tar.gz", hash = "sha256:4158fcecd13733f8be669be0683b96ebdbbd38d23559f54dca7205aea1bf1e35"},
]
jsonschema = [
{file = "jsonschema-4.13.0-py3-none-any.whl", hash = "sha256:870a61bb45050b81103faf6a4be00a0a906e06636ffcf0b84f5a2e51faf901ff"},
{file = "jsonschema-4.13.0.tar.gz", hash = "sha256:3776512df4f53f74e6e28fe35717b5b223c1756875486984a31bc9165e7fc920"},
]
jupyter = [
{file = "jupyter-1.0.0-py2.py3-none-any.whl", hash = "sha256:5b290f93b98ffbc21c0c7e749f054b3267782166d72fa5e3ed1ed4eaf34a2b78"},
{file = "jupyter-1.0.0.tar.gz", hash = "sha256:d9dc4b3318f310e34c82951ea5d6683f67bed7def4b259fafbfe4f1beb1d8e5f"},
{file = "jupyter-1.0.0.zip", hash = "sha256:3e1f86076bbb7c8c207829390305a2b1fe836d471ed54be66a3b8c41e7f46cc7"},
]
jupyter-client = [
{file = "jupyter_client-7.3.4-py3-none-any.whl", hash = "sha256:17d74b0d0a7b24f1c8c527b24fcf4607c56bee542ffe8e3418e50b21e514b621"},
{file = "jupyter_client-7.3.4.tar.gz", hash = "sha256:aa9a6c32054b290374f95f73bb0cae91455c58dfb84f65c8591912b8f65e6d56"},
]
jupyter-console = [
{file = "jupyter_console-6.4.4-py3-none-any.whl", hash = "sha256:756df7f4f60c986e7bc0172e4493d3830a7e6e75c08750bbe59c0a5403ad6dee"},
{file = "jupyter_console-6.4.4.tar.gz", hash = "sha256:172f5335e31d600df61613a97b7f0352f2c8250bbd1092ef2d658f77249f89fb"},
]
jupyter-core = [
{file = "jupyter_core-4.11.1-py3-none-any.whl", hash = "sha256:715e22bb6cc7db3718fddfac1f69f1c7e899ca00e42bdfd4bf3705452b9fd84a"},
{file = "jupyter_core-4.11.1.tar.gz", hash = "sha256:2e5f244d44894c4154d06aeae3419dd7f1b0ef4494dc5584929b398c61cfd314"},
]
jupyterlab-pygments = [
{file = "jupyterlab_pygments-0.2.2-py2.py3-none-any.whl", hash = "sha256:2405800db07c9f770863bcf8049a529c3dd4d3e28536638bd7c1c01d2748309f"},
{file = "jupyterlab_pygments-0.2.2.tar.gz", hash = "sha256:7405d7fde60819d905a9fa8ce89e4cd830e318cdad22a0030f7a901da705585d"},
]
jupyterlab-widgets = [
{file = "jupyterlab_widgets-3.0.2-py3-none-any.whl", hash = "sha256:98303a281f4004670cdcea2ef4aecba19c580adc297664c593f967025625c8c5"},
{file = "jupyterlab_widgets-3.0.2.tar.gz", hash = "sha256:47ab54cd165aa0cb3bcef1232d77471580cd2c36bbe2153fc5ba31e26ad87320"},
]
keras = [
{file = "keras-2.9.0-py2.py3-none-any.whl", hash = "sha256:55911256f89cfc9343c9fbe4b61ec45a2d33d89729cbe1ab9dcacf8b07b8b6ab"},
]
keras-preprocessing = [
{file = "Keras_Preprocessing-1.1.2-py2.py3-none-any.whl", hash = "sha256:7b82029b130ff61cc99b55f3bd27427df4838576838c5b2f65940e4fcec99a7b"},
{file = "Keras_Preprocessing-1.1.2.tar.gz", hash = "sha256:add82567c50c8bc648c14195bf544a5ce7c1f76761536956c3d2978970179ef3"},
]
keyring = [
{file = "keyring-23.8.2-py3-none-any.whl", hash = "sha256:10d2a8639663fe2090705a00b8c47c687cacdf97598ea9c11456679fa974473a"},
{file = "keyring-23.8.2.tar.gz", hash = "sha256:0d9973f8891850f1ade5f26aafd06bb16865fbbae3fc56b0defb6a14a2624003"},
]
kiwisolver = [
{file = "kiwisolver-1.4.4-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:2f5e60fabb7343a836360c4f0919b8cd0d6dbf08ad2ca6b9cf90bf0c76a3c4f6"},
{file = "kiwisolver-1.4.4-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:10ee06759482c78bdb864f4109886dff7b8a56529bc1609d4f1112b93fe6423c"},
{file = "kiwisolver-1.4.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c79ebe8f3676a4c6630fd3f777f3cfecf9289666c84e775a67d1d358578dc2e3"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:abbe9fa13da955feb8202e215c4018f4bb57469b1b78c7a4c5c7b93001699938"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:7577c1987baa3adc4b3c62c33bd1118c3ef5c8ddef36f0f2c950ae0b199e100d"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f8ad8285b01b0d4695102546b342b493b3ccc6781fc28c8c6a1bb63e95d22f09"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:8ed58b8acf29798b036d347791141767ccf65eee7f26bde03a71c944449e53de"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a68b62a02953b9841730db7797422f983935aeefceb1679f0fc85cbfbd311c32"},
{file = "kiwisolver-1.4.4-cp310-cp310-win32.whl", hash = "sha256:e92a513161077b53447160b9bd8f522edfbed4bd9759e4c18ab05d7ef7e49408"},
{file = "kiwisolver-1.4.4-cp310-cp310-win_amd64.whl", hash = "sha256:3fe20f63c9ecee44560d0e7f116b3a747a5d7203376abeea292ab3152334d004"},
{file = "kiwisolver-1.4.4-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:62ac9cc684da4cf1778d07a89bf5f81b35834cb96ca523d3a7fb32509380cbf6"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:41dae968a94b1ef1897cb322b39360a0812661dba7c682aa45098eb8e193dbdf"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:02f79693ec433cb4b5f51694e8477ae83b3205768a6fb48ffba60549080e295b"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d0611a0a2a518464c05ddd5a3a1a0e856ccc10e67079bb17f265ad19ab3c7597"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:db5283d90da4174865d520e7366801a93777201e91e79bacbac6e6927cbceede"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:1041feb4cda8708ce73bb4dcb9ce1ccf49d553bf87c3954bdfa46f0c3f77252c"},
{file = "kiwisolver-1.4.4-cp37-cp37m-win32.whl", hash = "sha256:a553dadda40fef6bfa1456dc4be49b113aa92c2a9a9e8711e955618cd69622e3"},
{file = "kiwisolver-1.4.4-cp37-cp37m-win_amd64.whl", hash = "sha256:03baab2d6b4a54ddbb43bba1a3a2d1627e82d205c5cf8f4c924dc49284b87166"},
{file = "kiwisolver-1.4.4-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:841293b17ad704d70c578f1f0013c890e219952169ce8a24ebc063eecf775454"},
{file = "kiwisolver-1.4.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:f4f270de01dd3e129a72efad823da90cc4d6aafb64c410c9033aba70db9f1ff0"},
{file = "kiwisolver-1.4.4-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:f9f39e2f049db33a908319cf46624a569b36983c7c78318e9726a4cb8923b26c"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c97528e64cb9ebeff9701e7938653a9951922f2a38bd847787d4a8e498cc83ae"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1d1573129aa0fd901076e2bfb4275a35f5b7aa60fbfb984499d661ec950320b0"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ad881edc7ccb9d65b0224f4e4d05a1e85cf62d73aab798943df6d48ab0cd79a1"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b428ef021242344340460fa4c9185d0b1f66fbdbfecc6c63eff4b7c29fad429d"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:2e407cb4bd5a13984a6c2c0fe1845e4e41e96f183e5e5cd4d77a857d9693494c"},
{file = "kiwisolver-1.4.4-cp38-cp38-win32.whl", hash = "sha256:75facbe9606748f43428fc91a43edb46c7ff68889b91fa31f53b58894503a191"},
{file = "kiwisolver-1.4.4-cp38-cp38-win_amd64.whl", hash = "sha256:5bce61af018b0cb2055e0e72e7d65290d822d3feee430b7b8203d8a855e78766"},
{file = "kiwisolver-1.4.4-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:8c808594c88a025d4e322d5bb549282c93c8e1ba71b790f539567932722d7bd8"},
{file = "kiwisolver-1.4.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f0a71d85ecdd570ded8ac3d1c0f480842f49a40beb423bb8014539a9f32a5897"},
{file = "kiwisolver-1.4.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:b533558eae785e33e8c148a8d9921692a9fe5aa516efbdff8606e7d87b9d5824"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:efda5fc8cc1c61e4f639b8067d118e742b812c930f708e6667a5ce0d13499e29"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:7c43e1e1206cd421cd92e6b3280d4385d41d7166b3ed577ac20444b6995a445f"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bc8d3bd6c72b2dd9decf16ce70e20abcb3274ba01b4e1c96031e0c4067d1e7cd"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4ea39b0ccc4f5d803e3337dd46bcce60b702be4d86fd0b3d7531ef10fd99a1ac"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:968f44fdbf6dd757d12920d63b566eeb4d5b395fd2d00d29d7ef00a00582aac9"},
{file = "kiwisolver-1.4.4-cp39-cp39-win32.whl", hash = "sha256:da7e547706e69e45d95e116e6939488d62174e033b763ab1496b4c29b76fabea"},
{file = "kiwisolver-1.4.4-cp39-cp39-win_amd64.whl", hash = "sha256:ba59c92039ec0a66103b1d5fe588fa546373587a7d68f5c96f743c3396afc04b"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:91672bacaa030f92fc2f43b620d7b337fd9a5af28b0d6ed3f77afc43c4a64b5a"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:787518a6789009c159453da4d6b683f468ef7a65bbde796bcea803ccf191058d"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:da152d8cdcab0e56e4f45eb08b9aea6455845ec83172092f09b0e077ece2cf7a"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:ecb1fa0db7bf4cff9dac752abb19505a233c7f16684c5826d1f11ebd9472b871"},
{file = "kiwisolver-1.4.4.tar.gz", hash = "sha256:d41997519fcba4a1e46eb4a2fe31bc12f0ff957b2b81bac28db24744f333e955"},
]
libclang = [
{file = "libclang-14.0.6-py2.py3-none-macosx_10_9_x86_64.whl", hash = "sha256:8791cf3c3b087c373a6d61e9199da7a541da922c9ddcfed1122090586b996d6e"},
{file = "libclang-14.0.6-py2.py3-none-macosx_11_0_arm64.whl", hash = "sha256:7b06fc76bd1e67c8b04b5719bf2ac5d6a323b289b245dfa9e468561d99538188"},
{file = "libclang-14.0.6-py2.py3-none-manylinux1_x86_64.whl", hash = "sha256:e429853939423f276a25140b0b702442d7da9a09e001c05e48df888336947614"},
{file = "libclang-14.0.6-py2.py3-none-manylinux2010_x86_64.whl", hash = "sha256:206d2789e4450a37d054e63b70451a6fc1873466397443fa13de2b3d4adb2796"},
{file = "libclang-14.0.6-py2.py3-none-manylinux2014_aarch64.whl", hash = "sha256:e2add1703129b2abe066fb1890afa880870a89fd6ab4ec5d2a7a8dc8d271677e"},
{file = "libclang-14.0.6-py2.py3-none-manylinux2014_armv7l.whl", hash = "sha256:5dd3c6fca1b007d308a4114afa8e4e9d32f32b2572520701d45fcc626ac5cd6c"},
{file = "libclang-14.0.6-py2.py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:cfb0e892ebb5dff6bd498ab5778adb8581f26a00fd8347b3c76c989fe2fd04f7"},
{file = "libclang-14.0.6-py2.py3-none-win_amd64.whl", hash = "sha256:ea03c12675151837660cdd5dce65bd89320896ac3421efef43a36678f113ce95"},
{file = "libclang-14.0.6-py2.py3-none-win_arm64.whl", hash = "sha256:2e4303e04517fcd11173cb2e51a7070eed71e16ef45d4e26a82c5e881cac3d27"},
{file = "libclang-14.0.6.tar.gz", hash = "sha256:9052a8284d8846984f6fa826b1d7460a66d3b23a486d782633b42b6e3b418789"},
]
lightgbm = [
{file = "lightgbm-3.3.2-py3-none-macosx_10_14_x86_64.macosx_10_15_x86_64.macosx_11_0_x86_64.whl", hash = "sha256:2e94bd1b3ab29d173102c9c1d80db2e27ad7e43b8ff5a74c5cb7984b37d19f45"},
{file = "lightgbm-3.3.2-py3-none-manylinux1_x86_64.whl", hash = "sha256:f4cba3b4f29336ad7e801cb32d9b948ea4cc5300dda650b78bcdfe36b3e2c4b2"},
{file = "lightgbm-3.3.2-py3-none-manylinux2014_aarch64.whl", hash = "sha256:8e788c56853316fc5d35db726d81bd002c721038c856853952287f68082e0158"},
{file = "lightgbm-3.3.2-py3-none-win_amd64.whl", hash = "sha256:e4f1529cad416066964f9af0efad208787861e9f2181b7f9ee7fc9bacc082d4f"},
{file = "lightgbm-3.3.2.tar.gz", hash = "sha256:5d25d16e77c844c297ece2044df57651139bc3c8ad8c4108916374267ac68b64"},
]
llvmlite = [
{file = "llvmlite-0.36.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:cc0f9b9644b4ab0e4a5edb17f1531d791630c88858220d3cc688d6edf10da100"},
{file = "llvmlite-0.36.0-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:f7918dbac02b1ebbfd7302ad8e8307d7877ab57d782d5f04b70ff9696b53c21b"},
{file = "llvmlite-0.36.0-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:7768658646c418b9b3beccb7044277a608bc8c62b82a85e73c7e5c065e4157c2"},
{file = "llvmlite-0.36.0-cp36-cp36m-win32.whl", hash = "sha256:05f807209a360d39526d98141b6f281b9c7c771c77a4d1fc22002440642c8de2"},
{file = "llvmlite-0.36.0-cp36-cp36m-win_amd64.whl", hash = "sha256:d1fdd63c371626c25ad834e1c6297eb76cf2f093a40dbb401a87b6476ab4e34e"},
{file = "llvmlite-0.36.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:7c4e7066447305d5095d0b0a9cae7b835d2f0fde143456b3124110eab0856426"},
{file = "llvmlite-0.36.0-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:9dad7e4bb042492914292aea3f4172eca84db731f9478250240955aedba95e08"},
{file = "llvmlite-0.36.0-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:1ce5bc0a638d874a08d4222be0a7e48e5df305d094c2ff8dec525ef32b581551"},
{file = "llvmlite-0.36.0-cp37-cp37m-win32.whl", hash = "sha256:dbedff0f6d417b374253a6bab39aa4b5364f1caab30c06ba8726904776fcf1cb"},
{file = "llvmlite-0.36.0-cp37-cp37m-win_amd64.whl", hash = "sha256:3b17fc4b0dd17bd29d7297d054e2915fad535889907c3f65232ee21f483447c5"},
{file = "llvmlite-0.36.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:b3a77e46e6053e2a86e607e87b97651dda81e619febb914824a927bff4e88737"},
{file = "llvmlite-0.36.0-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:048a7c117641c9be87b90005684e64a6f33ea0897ebab1df8a01214a10d6e79a"},
{file = "llvmlite-0.36.0-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:7db4b0eef93125af1c4092c64a3c73c7dc904101117ef53f8d78a1a499b8d5f4"},
{file = "llvmlite-0.36.0-cp38-cp38-win32.whl", hash = "sha256:50b1828bde514b31431b2bba1aa20b387f5625b81ad6e12fede430a04645e47a"},
{file = "llvmlite-0.36.0-cp38-cp38-win_amd64.whl", hash = "sha256:f608bae781b2d343e15e080c546468c5a6f35f57f0446923ea198dd21f23757e"},
{file = "llvmlite-0.36.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6a3abc8a8889aeb06bf9c4a7e5df5bc7bb1aa0aedd91a599813809abeec80b5a"},
{file = "llvmlite-0.36.0-cp39-cp39-manylinux2010_i686.whl", hash = "sha256:705f0323d931684428bb3451549603299bb5e17dd60fb979d67c3807de0debc1"},
{file = "llvmlite-0.36.0-cp39-cp39-manylinux2010_x86_64.whl", hash = "sha256:5a6548b4899facb182145147185e9166c69826fb424895f227e6b7cf924a8da1"},
{file = "llvmlite-0.36.0-cp39-cp39-win32.whl", hash = "sha256:ff52fb9c2be66b95b0e67d56fce11038397e5be1ea410ee53f5f1175fdbb107a"},
{file = "llvmlite-0.36.0-cp39-cp39-win_amd64.whl", hash = "sha256:1dee416ea49fd338c74ec15c0c013e5273b0961528169af06ff90772614f7f6c"},
{file = "llvmlite-0.36.0.tar.gz", hash = "sha256:765128fdf5f149ed0b889ffbe2b05eb1717f8e20a5c87fa2b4018fbcce0fcfc9"},
]
markdown = [
{file = "Markdown-3.4.1-py3-none-any.whl", hash = "sha256:08fb8465cffd03d10b9dd34a5c3fea908e20391a2a90b88d66362cb05beed186"},
{file = "Markdown-3.4.1.tar.gz", hash = "sha256:3b809086bb6efad416156e00a0da66fe47618a5d6918dd688f53f40c8e4cfeff"},
]
markupsafe = [
{file = "MarkupSafe-2.1.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:86b1f75c4e7c2ac2ccdaec2b9022845dbb81880ca318bb7a0a01fbf7813e3812"},
{file = "MarkupSafe-2.1.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:f121a1420d4e173a5d96e47e9a0c0dcff965afdf1626d28de1460815f7c4ee7a"},
{file = "MarkupSafe-2.1.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a49907dd8420c5685cfa064a1335b6754b74541bbb3706c259c02ed65b644b3e"},
{file = "MarkupSafe-2.1.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:10c1bfff05d95783da83491be968e8fe789263689c02724e0c691933c52994f5"},
{file = "MarkupSafe-2.1.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b7bd98b796e2b6553da7225aeb61f447f80a1ca64f41d83612e6139ca5213aa4"},
{file = "MarkupSafe-2.1.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:b09bf97215625a311f669476f44b8b318b075847b49316d3e28c08e41a7a573f"},
{file = "MarkupSafe-2.1.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:694deca8d702d5db21ec83983ce0bb4b26a578e71fbdbd4fdcd387daa90e4d5e"},
{file = "MarkupSafe-2.1.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:efc1913fd2ca4f334418481c7e595c00aad186563bbc1ec76067848c7ca0a933"},
{file = "MarkupSafe-2.1.1-cp310-cp310-win32.whl", hash = "sha256:4a33dea2b688b3190ee12bd7cfa29d39c9ed176bda40bfa11099a3ce5d3a7ac6"},
{file = "MarkupSafe-2.1.1-cp310-cp310-win_amd64.whl", hash = "sha256:dda30ba7e87fbbb7eab1ec9f58678558fd9a6b8b853530e176eabd064da81417"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:671cd1187ed5e62818414afe79ed29da836dde67166a9fac6d435873c44fdd02"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3799351e2336dc91ea70b034983ee71cf2f9533cdff7c14c90ea126bfd95d65a"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e72591e9ecd94d7feb70c1cbd7be7b3ebea3f548870aa91e2732960fa4d57a37"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6fbf47b5d3728c6aea2abb0589b5d30459e369baa772e0f37a0320185e87c980"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:d5ee4f386140395a2c818d149221149c54849dfcfcb9f1debfe07a8b8bd63f9a"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:bcb3ed405ed3222f9904899563d6fc492ff75cce56cba05e32eff40e6acbeaa3"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:e1c0b87e09fa55a220f058d1d49d3fb8df88fbfab58558f1198e08c1e1de842a"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-win32.whl", hash = "sha256:8dc1c72a69aa7e082593c4a203dcf94ddb74bb5c8a731e4e1eb68d031e8498ff"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-win_amd64.whl", hash = "sha256:97a68e6ada378df82bc9f16b800ab77cbf4b2fada0081794318520138c088e4a"},
{file = "MarkupSafe-2.1.1-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:e8c843bbcda3a2f1e3c2ab25913c80a3c5376cd00c6e8c4a86a89a28c8dc5452"},
{file = "MarkupSafe-2.1.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:0212a68688482dc52b2d45013df70d169f542b7394fc744c02a57374a4207003"},
{file = "MarkupSafe-2.1.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8e576a51ad59e4bfaac456023a78f6b5e6e7651dcd383bcc3e18d06f9b55d6d1"},
{file = "MarkupSafe-2.1.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4b9fe39a2ccc108a4accc2676e77da025ce383c108593d65cc909add5c3bd601"},
{file = "MarkupSafe-2.1.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:96e37a3dc86e80bf81758c152fe66dbf60ed5eca3d26305edf01892257049925"},
{file = "MarkupSafe-2.1.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:6d0072fea50feec76a4c418096652f2c3238eaa014b2f94aeb1d56a66b41403f"},
{file = "MarkupSafe-2.1.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:089cf3dbf0cd6c100f02945abeb18484bd1ee57a079aefd52cffd17fba910b88"},
{file = "MarkupSafe-2.1.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:6a074d34ee7a5ce3effbc526b7083ec9731bb3cbf921bbe1d3005d4d2bdb3a63"},
{file = "MarkupSafe-2.1.1-cp38-cp38-win32.whl", hash = "sha256:421be9fbf0ffe9ffd7a378aafebbf6f4602d564d34be190fc19a193232fd12b1"},
{file = "MarkupSafe-2.1.1-cp38-cp38-win_amd64.whl", hash = "sha256:fc7b548b17d238737688817ab67deebb30e8073c95749d55538ed473130ec0c7"},
{file = "MarkupSafe-2.1.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:e04e26803c9c3851c931eac40c695602c6295b8d432cbe78609649ad9bd2da8a"},
{file = "MarkupSafe-2.1.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b87db4360013327109564f0e591bd2a3b318547bcef31b468a92ee504d07ae4f"},
{file = "MarkupSafe-2.1.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:99a2a507ed3ac881b975a2976d59f38c19386d128e7a9a18b7df6fff1fd4c1d6"},
{file = "MarkupSafe-2.1.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:56442863ed2b06d19c37f94d999035e15ee982988920e12a5b4ba29b62ad1f77"},
{file = "MarkupSafe-2.1.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3ce11ee3f23f79dbd06fb3d63e2f6af7b12db1d46932fe7bd8afa259a5996603"},
{file = "MarkupSafe-2.1.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:33b74d289bd2f5e527beadcaa3f401e0df0a89927c1559c8566c066fa4248ab7"},
{file = "MarkupSafe-2.1.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:43093fb83d8343aac0b1baa75516da6092f58f41200907ef92448ecab8825135"},
{file = "MarkupSafe-2.1.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:8e3dcf21f367459434c18e71b2a9532d96547aef8a871872a5bd69a715c15f96"},
{file = "MarkupSafe-2.1.1-cp39-cp39-win32.whl", hash = "sha256:d4306c36ca495956b6d568d276ac11fdd9c30a36f1b6eb928070dc5360b22e1c"},
{file = "MarkupSafe-2.1.1-cp39-cp39-win_amd64.whl", hash = "sha256:46d00d6cfecdde84d40e572d63735ef81423ad31184100411e6e3388d405e247"},
{file = "MarkupSafe-2.1.1.tar.gz", hash = "sha256:7f91197cc9e48f989d12e4e6fbc46495c446636dfc81b9ccf50bb0ec74b91d4b"},
]
matplotlib = [
{file = "matplotlib-3.5.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:a206a1b762b39398efea838f528b3a6d60cdb26fe9d58b48265787e29cd1d693"},
{file = "matplotlib-3.5.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:cd45a6f3e93a780185f70f05cf2a383daed13c3489233faad83e81720f7ede24"},
{file = "matplotlib-3.5.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:d62880e1f60e5a30a2a8484432bcb3a5056969dc97258d7326ad465feb7ae069"},
{file = "matplotlib-3.5.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9ab29589cef03bc88acfa3a1490359000c18186fc30374d8aa77d33cc4a51a4a"},
{file = "matplotlib-3.5.3-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2886cc009f40e2984c083687251821f305d811d38e3df8ded414265e4583f0c5"},
{file = "matplotlib-3.5.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c995f7d9568f18b5db131ab124c64e51b6820a92d10246d4f2b3f3a66698a15b"},
{file = "matplotlib-3.5.3-cp310-cp310-win32.whl", hash = "sha256:6bb93a0492d68461bd458eba878f52fdc8ac7bdb6c4acdfe43dba684787838c2"},
{file = "matplotlib-3.5.3-cp310-cp310-win_amd64.whl", hash = "sha256:2e6d184ebe291b9e8f7e78bbab7987d269c38ea3e062eace1fe7d898042ef804"},
{file = "matplotlib-3.5.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:6ea6aef5c4338e58d8d376068e28f80a24f54e69f09479d1c90b7172bad9f25b"},
{file = "matplotlib-3.5.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:839d47b8ead7ad9669aaacdbc03f29656dc21f0d41a6fea2d473d856c39c8b1c"},
{file = "matplotlib-3.5.3-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:3b4fa56159dc3c7f9250df88f653f085068bcd32dcd38e479bba58909254af7f"},
{file = "matplotlib-3.5.3-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:94ff86af56a3869a4ae26a9637a849effd7643858a1a04dd5ee50e9ab75069a7"},
{file = "matplotlib-3.5.3-cp37-cp37m-win32.whl", hash = "sha256:35a8ad4dddebd51f94c5d24bec689ec0ec66173bf614374a1244c6241c1595e0"},
{file = "matplotlib-3.5.3-cp37-cp37m-win_amd64.whl", hash = "sha256:43e9d3fa077bf0cc95ded13d331d2156f9973dce17c6f0c8b49ccd57af94dbd9"},
{file = "matplotlib-3.5.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:22227c976ad4dc8c5a5057540421f0d8708c6560744ad2ad638d48e2984e1dbc"},
{file = "matplotlib-3.5.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:bf618a825deb6205f015df6dfe6167a5d9b351203b03fab82043ae1d30f16511"},
{file = "matplotlib-3.5.3-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:9befa5954cdbc085e37d974ff6053da269474177921dd61facdad8023c4aeb51"},
{file = "matplotlib-3.5.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f3840c280ebc87a48488a46f760ea1c0c0c83fcf7abbe2e6baf99d033fd35fd8"},
{file = "matplotlib-3.5.3-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:dacddf5bfcec60e3f26ec5c0ae3d0274853a258b6c3fc5ef2f06a8eb23e042be"},
{file = "matplotlib-3.5.3-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:b428076a55fb1c084c76cb93e68006f27d247169f056412607c5c88828d08f88"},
{file = "matplotlib-3.5.3-cp38-cp38-win32.whl", hash = "sha256:874df7505ba820e0400e7091199decf3ff1fde0583652120c50cd60d5820ca9a"},
{file = "matplotlib-3.5.3-cp38-cp38-win_amd64.whl", hash = "sha256:b28de401d928890187c589036857a270a032961411934bdac4cf12dde3d43094"},
{file = "matplotlib-3.5.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:3211ba82b9f1518d346f6309df137b50c3dc4421b4ed4815d1d7eadc617f45a1"},
{file = "matplotlib-3.5.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6fe807e8a22620b4cd95cfbc795ba310dc80151d43b037257250faf0bfcd82bc"},
{file = "matplotlib-3.5.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:5c096363b206a3caf43773abebdbb5a23ea13faef71d701b21a9c27fdcef72f4"},
{file = "matplotlib-3.5.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0bcdfcb0f976e1bac6721d7d457c17be23cf7501f977b6a38f9d38a3762841f7"},
{file = "matplotlib-3.5.3-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:1e64ac9be9da6bfff0a732e62116484b93b02a0b4d4b19934fb4f8e7ad26ad6a"},
{file = "matplotlib-3.5.3-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:73dd93dc35c85dece610cca8358003bf0760d7986f70b223e2306b4ea6d1406b"},
{file = "matplotlib-3.5.3-cp39-cp39-win32.whl", hash = "sha256:879c7e5fce4939c6aa04581dfe08d57eb6102a71f2e202e3314d5fbc072fd5a0"},
{file = "matplotlib-3.5.3-cp39-cp39-win_amd64.whl", hash = "sha256:ab8d26f07fe64f6f6736d635cce7bfd7f625320490ed5bfc347f2cdb4fae0e56"},
{file = "matplotlib-3.5.3-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:99482b83ebf4eb6d5fc6813d7aacdefdd480f0d9c0b52dcf9f1cc3b2c4b3361a"},
{file = "matplotlib-3.5.3-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:f814504e459c68118bf2246a530ed953ebd18213dc20e3da524174d84ed010b2"},
{file = "matplotlib-3.5.3-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:57f1b4e69f438a99bb64d7f2c340db1b096b41ebaa515cf61ea72624279220ce"},
{file = "matplotlib-3.5.3-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:d2484b350bf3d32cae43f85dcfc89b3ed7bd2bcd781ef351f93eb6fb2cc483f9"},
{file = "matplotlib-3.5.3.tar.gz", hash = "sha256:339cac48b80ddbc8bfd05daae0a3a73414651a8596904c2a881cfd1edb65f26c"},
]
matplotlib-inline = [
{file = "matplotlib-inline-0.1.6.tar.gz", hash = "sha256:f887e5f10ba98e8d2b150ddcf4702c1e5f8b3a20005eb0f74bfdbd360ee6f304"},
{file = "matplotlib_inline-0.1.6-py3-none-any.whl", hash = "sha256:f1f41aab5328aa5aaea9b16d083b128102f8712542f819fe7e6a420ff581b311"},
]
mccabe = [
{file = "mccabe-0.6.1-py2.py3-none-any.whl", hash = "sha256:ab8a6258860da4b6677da4bd2fe5dc2c659cff31b3ee4f7f5d64e79735b80d42"},
{file = "mccabe-0.6.1.tar.gz", hash = "sha256:dd8d182285a0fe56bace7f45b5e7d1a6ebcbf524e8f3bd87eb0f125271b8831f"},
]
mistune = [
{file = "mistune-2.0.4-py2.py3-none-any.whl", hash = "sha256:182cc5ee6f8ed1b807de6b7bb50155df7b66495412836b9a74c8fbdfc75fe36d"},
{file = "mistune-2.0.4.tar.gz", hash = "sha256:9ee0a66053e2267aba772c71e06891fa8f1af6d4b01d5e84e267b4570d4d9808"},
]
mpmath = [
{file = "mpmath-1.2.1-py3-none-any.whl", hash = "sha256:604bc21bd22d2322a177c73bdb573994ef76e62edd595d17e00aff24b0667e5c"},
{file = "mpmath-1.2.1.tar.gz", hash = "sha256:79ffb45cf9f4b101a807595bcb3e72e0396202e0b1d25d689134b48c4216a81a"},
]
mypy = [
{file = "mypy-0.971-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:f2899a3cbd394da157194f913a931edfd4be5f274a88041c9dc2d9cdcb1c315c"},
{file = "mypy-0.971-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:98e02d56ebe93981c41211c05adb630d1d26c14195d04d95e49cd97dbc046dc5"},
{file = "mypy-0.971-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:19830b7dba7d5356d3e26e2427a2ec91c994cd92d983142cbd025ebe81d69cf3"},
{file = "mypy-0.971-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:02ef476f6dcb86e6f502ae39a16b93285fef97e7f1ff22932b657d1ef1f28655"},
{file = "mypy-0.971-cp310-cp310-win_amd64.whl", hash = "sha256:25c5750ba5609a0c7550b73a33deb314ecfb559c350bb050b655505e8aed4103"},
{file = "mypy-0.971-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:d3348e7eb2eea2472db611486846742d5d52d1290576de99d59edeb7cd4a42ca"},
{file = "mypy-0.971-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:3fa7a477b9900be9b7dd4bab30a12759e5abe9586574ceb944bc29cddf8f0417"},
{file = "mypy-0.971-cp36-cp36m-win_amd64.whl", hash = "sha256:2ad53cf9c3adc43cf3bea0a7d01a2f2e86db9fe7596dfecb4496a5dda63cbb09"},
{file = "mypy-0.971-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:855048b6feb6dfe09d3353466004490b1872887150c5bb5caad7838b57328cc8"},
{file = "mypy-0.971-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:23488a14a83bca6e54402c2e6435467a4138785df93ec85aeff64c6170077fb0"},
{file = "mypy-0.971-cp37-cp37m-win_amd64.whl", hash = "sha256:4b21e5b1a70dfb972490035128f305c39bc4bc253f34e96a4adf9127cf943eb2"},
{file = "mypy-0.971-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:9796a2ba7b4b538649caa5cecd398d873f4022ed2333ffde58eaf604c4d2cb27"},
{file = "mypy-0.971-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:5a361d92635ad4ada1b1b2d3630fc2f53f2127d51cf2def9db83cba32e47c856"},
{file = "mypy-0.971-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:b793b899f7cf563b1e7044a5c97361196b938e92f0a4343a5d27966a53d2ec71"},
{file = "mypy-0.971-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:d1ea5d12c8e2d266b5fb8c7a5d2e9c0219fedfeb493b7ed60cd350322384ac27"},
{file = "mypy-0.971-cp38-cp38-win_amd64.whl", hash = "sha256:23c7ff43fff4b0df93a186581885c8512bc50fc4d4910e0f838e35d6bb6b5e58"},
{file = "mypy-0.971-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:1f7656b69974a6933e987ee8ffb951d836272d6c0f81d727f1d0e2696074d9e6"},
{file = "mypy-0.971-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:d2022bfadb7a5c2ef410d6a7c9763188afdb7f3533f22a0a32be10d571ee4bbe"},
{file = "mypy-0.971-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:ef943c72a786b0f8d90fd76e9b39ce81fb7171172daf84bf43eaf937e9f220a9"},
{file = "mypy-0.971-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:d744f72eb39f69312bc6c2abf8ff6656973120e2eb3f3ec4f758ed47e414a4bf"},
{file = "mypy-0.971-cp39-cp39-win_amd64.whl", hash = "sha256:77a514ea15d3007d33a9e2157b0ba9c267496acf12a7f2b9b9f8446337aac5b0"},
{file = "mypy-0.971-py3-none-any.whl", hash = "sha256:0d054ef16b071149917085f51f89555a576e2618d5d9dd70bd6eea6410af3ac9"},
{file = "mypy-0.971.tar.gz", hash = "sha256:40b0f21484238269ae6a57200c807d80debc6459d444c0489a102d7c6a75fa56"},
]
mypy-extensions = [
{file = "mypy_extensions-0.4.3-py2.py3-none-any.whl", hash = "sha256:090fedd75945a69ae91ce1303b5824f428daf5a028d2f6ab8a299250a846f15d"},
{file = "mypy_extensions-0.4.3.tar.gz", hash = "sha256:2d82818f5bb3e369420cb3c4060a7970edba416647068eb4c5343488a6c604a8"},
]
nbclient = [
{file = "nbclient-0.6.6-py3-none-any.whl", hash = "sha256:09bae4ea2df79fa6bc50aeb8278d8b79d2036792824337fa6eee834afae17312"},
{file = "nbclient-0.6.6.tar.gz", hash = "sha256:0df76a7961d99a681b4796c74a1f2553b9f998851acc01896dce064ad19a9027"},
]
nbconvert = [
{file = "nbconvert-7.0.0rc3-py3-none-any.whl", hash = "sha256:6774a0bf293d76fa2e886255812d953b750059330c3d7305ad271c02590f1957"},
{file = "nbconvert-7.0.0rc3.tar.gz", hash = "sha256:efb9aae47dad2eae02dd9e7d2cc8add6b7e8f15c6548c0de3363f6d2f8a39146"},
]
nbformat = [
{file = "nbformat-5.4.0-py3-none-any.whl", hash = "sha256:0d6072aaec95dddc39735c144ee8bbc6589c383fb462e4058abc855348152dad"},
{file = "nbformat-5.4.0.tar.gz", hash = "sha256:44ba5ca6acb80c5d5a500f1e5b83ede8cbe364d5a495c4c8cf60aaf1ba656501"},
]
nbsphinx = [
{file = "nbsphinx-0.8.9-py3-none-any.whl", hash = "sha256:a7d743762249ee6bac3350a91eb3717a6e1c75f239f2c2a85491f9aca5a63be1"},
{file = "nbsphinx-0.8.9.tar.gz", hash = "sha256:4ade86b2a41f8f41efd3ea99dae84c3368fe8ba3f837d50c8815ce9424c5994f"},
]
nest-asyncio = [
{file = "nest_asyncio-1.5.5-py3-none-any.whl", hash = "sha256:b98e3ec1b246135e4642eceffa5a6c23a3ab12c82ff816a92c612d68205813b2"},
{file = "nest_asyncio-1.5.5.tar.gz", hash = "sha256:e442291cd942698be619823a17a86a5759eabe1f8613084790de189fe9e16d65"},
]
networkx = [
{file = "networkx-2.8.5-py3-none-any.whl", hash = "sha256:a762f4b385692d9c3a6f2912d058d76d29a827deaedf9e63ed14d397b8030687"},
{file = "networkx-2.8.5.tar.gz", hash = "sha256:15a7b81a360791c458c55a417418ea136c13378cfdc06a2dcdc12bd2f9cf09c1"},
]
notebook = [
{file = "notebook-6.4.12-py3-none-any.whl", hash = "sha256:8c07a3bb7640e371f8a609bdbb2366a1976c6a2589da8ef917f761a61e3ad8b1"},
{file = "notebook-6.4.12.tar.gz", hash = "sha256:6268c9ec9048cff7a45405c990c29ac9ca40b0bc3ec29263d218c5e01f2b4e86"},
]
numba = [
{file = "numba-0.53.1-cp36-cp36m-macosx_10_14_x86_64.whl", hash = "sha256:b23de6b6837c132087d06b8b92d343edb54b885873b824a037967fbd5272ebb7"},
{file = "numba-0.53.1-cp36-cp36m-manylinux2014_i686.whl", hash = "sha256:6545b9e9b0c112b81de7f88a3c787469a357eeff8211e90b8f45ee243d521cc2"},
{file = "numba-0.53.1-cp36-cp36m-manylinux2014_x86_64.whl", hash = "sha256:8fa5c963a43855050a868106a87cd614f3c3f459951c8fc468aec263ef80d063"},
{file = "numba-0.53.1-cp36-cp36m-win32.whl", hash = "sha256:aaa6ebf56afb0b6752607b9f3bf39e99b0efe3c1fa6849698373925ee6838fd7"},
{file = "numba-0.53.1-cp36-cp36m-win_amd64.whl", hash = "sha256:b08b3df38aab769df79ed948d70f0a54a3cdda49d58af65369235c204ec5d0f3"},
{file = "numba-0.53.1-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:bf5c463b62d013e3f709cc8277adf2f4f4d8cc6757293e29c6db121b77e6b760"},
{file = "numba-0.53.1-cp37-cp37m-manylinux2014_i686.whl", hash = "sha256:74df02e73155f669e60dcff07c4eef4a03dbf5b388594db74142ab40914fe4f5"},
{file = "numba-0.53.1-cp37-cp37m-manylinux2014_x86_64.whl", hash = "sha256:5165709bf62f28667e10b9afe6df0ce1037722adab92d620f59cb8bbb8104641"},
{file = "numba-0.53.1-cp37-cp37m-win32.whl", hash = "sha256:2e96958ed2ca7e6d967b2ce29c8da0ca47117e1de28e7c30b2c8c57386506fa5"},
{file = "numba-0.53.1-cp37-cp37m-win_amd64.whl", hash = "sha256:276f9d1674fe08d95872d81b97267c6b39dd830f05eb992608cbede50fcf48a9"},
{file = "numba-0.53.1-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:4c4c8d102512ae472af52c76ad9522da718c392cb59f4cd6785d711fa5051a2a"},
{file = "numba-0.53.1-cp38-cp38-manylinux2014_i686.whl", hash = "sha256:691adbeac17dbdf6ed7c759e9e33a522351f07d2065fe926b264b6b2c15fd89b"},
{file = "numba-0.53.1-cp38-cp38-manylinux2014_x86_64.whl", hash = "sha256:94aab3e0e9e8754116325ce026e1b29ae72443c706a3104cf7f3368dc3012912"},
{file = "numba-0.53.1-cp38-cp38-win32.whl", hash = "sha256:aabeec89bb3e3162136eea492cea7ee8882ddcda2201f05caecdece192c40896"},
{file = "numba-0.53.1-cp38-cp38-win_amd64.whl", hash = "sha256:1895ebd256819ff22256cd6fe24aa8f7470b18acc73e7917e8e93c9ac7f565dc"},
{file = "numba-0.53.1-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:224d197a46a9e602a16780d87636e199e2cdef528caef084a4d8fd8909c2455c"},
{file = "numba-0.53.1-cp39-cp39-manylinux2014_i686.whl", hash = "sha256:aba7acb247a09d7f12bd17a8e28bbb04e8adef9fc20ca29835d03b7894e1b49f"},
{file = "numba-0.53.1-cp39-cp39-manylinux2014_x86_64.whl", hash = "sha256:bd126f1f49da6fc4b3169cf1d96f1c3b3f84a7badd11fe22da344b923a00e744"},
{file = "numba-0.53.1-cp39-cp39-win32.whl", hash = "sha256:0ef9d1f347b251282ae46e5a5033600aa2d0dfa1ee8c16cb8137b8cd6f79e221"},
{file = "numba-0.53.1-cp39-cp39-win_amd64.whl", hash = "sha256:17146885cbe4e89c9d4abd4fcb8886dee06d4591943dc4343500c36ce2fcfa69"},
{file = "numba-0.53.1.tar.gz", hash = "sha256:9cd4e5216acdc66c4e9dab2dfd22ddb5bef151185c070d4a3cd8e78638aff5b0"},
]
numpy = [
{file = "numpy-1.23.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e603ca1fb47b913942f3e660a15e55a9ebca906857edfea476ae5f0fe9b457d5"},
{file = "numpy-1.23.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:633679a472934b1c20a12ed0c9a6c9eb167fbb4cb89031939bfd03dd9dbc62b8"},
{file = "numpy-1.23.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:17e5226674f6ea79e14e3b91bfbc153fdf3ac13f5cc54ee7bc8fdbe820a32da0"},
{file = "numpy-1.23.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bdc02c0235b261925102b1bd586579b7158e9d0d07ecb61148a1799214a4afd5"},
{file = "numpy-1.23.2-cp310-cp310-win32.whl", hash = "sha256:df28dda02c9328e122661f399f7655cdcbcf22ea42daa3650a26bce08a187450"},
{file = "numpy-1.23.2-cp310-cp310-win_amd64.whl", hash = "sha256:8ebf7e194b89bc66b78475bd3624d92980fca4e5bb86dda08d677d786fefc414"},
{file = "numpy-1.23.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:dc76bca1ca98f4b122114435f83f1fcf3c0fe48e4e6f660e07996abf2f53903c"},
{file = "numpy-1.23.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:ecfdd68d334a6b97472ed032b5b37a30d8217c097acfff15e8452c710e775524"},
{file = "numpy-1.23.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5593f67e66dea4e237f5af998d31a43e447786b2154ba1ad833676c788f37cde"},
{file = "numpy-1.23.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ac987b35df8c2a2eab495ee206658117e9ce867acf3ccb376a19e83070e69418"},
{file = "numpy-1.23.2-cp311-cp311-win32.whl", hash = "sha256:d98addfd3c8728ee8b2c49126f3c44c703e2b005d4a95998e2167af176a9e722"},
{file = "numpy-1.23.2-cp311-cp311-win_amd64.whl", hash = "sha256:8ecb818231afe5f0f568c81f12ce50f2b828ff2b27487520d85eb44c71313b9e"},
{file = "numpy-1.23.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:909c56c4d4341ec8315291a105169d8aae732cfb4c250fbc375a1efb7a844f8f"},
{file = "numpy-1.23.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:8247f01c4721479e482cc2f9f7d973f3f47810cbc8c65e38fd1bbd3141cc9842"},
{file = "numpy-1.23.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b8b97a8a87cadcd3f94659b4ef6ec056261fa1e1c3317f4193ac231d4df70215"},
{file = "numpy-1.23.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bd5b7ccae24e3d8501ee5563e82febc1771e73bd268eef82a1e8d2b4d556ae66"},
{file = "numpy-1.23.2-cp38-cp38-win32.whl", hash = "sha256:9b83d48e464f393d46e8dd8171687394d39bc5abfe2978896b77dc2604e8635d"},
{file = "numpy-1.23.2-cp38-cp38-win_amd64.whl", hash = "sha256:dec198619b7dbd6db58603cd256e092bcadef22a796f778bf87f8592b468441d"},
{file = "numpy-1.23.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:4f41f5bf20d9a521f8cab3a34557cd77b6f205ab2116651f12959714494268b0"},
{file = "numpy-1.23.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:806cc25d5c43e240db709875e947076b2826f47c2c340a5a2f36da5bb10c58d6"},
{file = "numpy-1.23.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8f9d84a24889ebb4c641a9b99e54adb8cab50972f0166a3abc14c3b93163f074"},
{file = "numpy-1.23.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c403c81bb8ffb1c993d0165a11493fd4bf1353d258f6997b3ee288b0a48fce77"},
{file = "numpy-1.23.2-cp39-cp39-win32.whl", hash = "sha256:cf8c6aed12a935abf2e290860af8e77b26a042eb7f2582ff83dc7ed5f963340c"},
{file = "numpy-1.23.2-cp39-cp39-win_amd64.whl", hash = "sha256:5e28cd64624dc2354a349152599e55308eb6ca95a13ce6a7d5679ebff2962913"},
{file = "numpy-1.23.2-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:806970e69106556d1dd200e26647e9bee5e2b3f1814f9da104a943e8d548ca38"},
{file = "numpy-1.23.2-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2bd879d3ca4b6f39b7770829f73278b7c5e248c91d538aab1e506c628353e47f"},
{file = "numpy-1.23.2-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:be6b350dfbc7f708d9d853663772a9310783ea58f6035eec649fb9c4371b5389"},
{file = "numpy-1.23.2.tar.gz", hash = "sha256:b78d00e48261fbbd04aa0d7427cf78d18401ee0abd89c7559bbf422e5b1c7d01"},
]
oauthlib = [
{file = "oauthlib-3.2.0-py3-none-any.whl", hash = "sha256:6db33440354787f9b7f3a6dbd4febf5d0f93758354060e802f6c06cb493022fe"},
{file = "oauthlib-3.2.0.tar.gz", hash = "sha256:23a8208d75b902797ea29fd31fa80a15ed9dc2c6c16fe73f5d346f83f6fa27a2"},
]
opt-einsum = [
{file = "opt_einsum-3.3.0-py3-none-any.whl", hash = "sha256:2455e59e3947d3c275477df7f5205b30635e266fe6dc300e3d9f9646bfcea147"},
{file = "opt_einsum-3.3.0.tar.gz", hash = "sha256:59f6475f77bbc37dcf7cd748519c0ec60722e91e63ca114e68821c0c54a46549"},
]
packaging = [
{file = "packaging-21.3-py3-none-any.whl", hash = "sha256:ef103e05f519cdc783ae24ea4e2e0f508a9c99b2d4969652eed6a2e1ea5bd522"},
{file = "packaging-21.3.tar.gz", hash = "sha256:dd47c42927d89ab911e606518907cc2d3a1f38bbd026385970643f9c5b8ecfeb"},
]
pandas = [
{file = "pandas-1.4.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:d51674ed8e2551ef7773820ef5dab9322be0828629f2cbf8d1fc31a0c4fed640"},
{file = "pandas-1.4.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:16ad23db55efcc93fa878f7837267973b61ea85d244fc5ff0ccbcfa5638706c5"},
{file = "pandas-1.4.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:958a0588149190c22cdebbc0797e01972950c927a11a900fe6c2296f207b1d6f"},
{file = "pandas-1.4.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e48fbb64165cda451c06a0f9e4c7a16b534fcabd32546d531b3c240ce2844112"},
{file = "pandas-1.4.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6f803320c9da732cc79210d7e8cc5c8019aad512589c910c66529eb1b1818230"},
{file = "pandas-1.4.3-cp310-cp310-win_amd64.whl", hash = "sha256:2893e923472a5e090c2d5e8db83e8f907364ec048572084c7d10ef93546be6d1"},
{file = "pandas-1.4.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:24ea75f47bbd5574675dae21d51779a4948715416413b30614c1e8b480909f81"},
{file = "pandas-1.4.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:d5ebc990bd34f4ac3c73a2724c2dcc9ee7bf1ce6cf08e87bb25c6ad33507e318"},
{file = "pandas-1.4.3-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:d6c0106415ff1a10c326c49bc5dd9ea8b9897a6ca0c8688eb9c30ddec49535ef"},
{file = "pandas-1.4.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:78b00429161ccb0da252229bcda8010b445c4bf924e721265bec5a6e96a92e92"},
{file = "pandas-1.4.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6dfbf16b1ea4f4d0ee11084d9c026340514d1d30270eaa82a9f1297b6c8ecbf0"},
{file = "pandas-1.4.3-cp38-cp38-win32.whl", hash = "sha256:48350592665ea3cbcd07efc8c12ff12d89be09cd47231c7925e3b8afada9d50d"},
{file = "pandas-1.4.3-cp38-cp38-win_amd64.whl", hash = "sha256:605d572126eb4ab2eadf5c59d5d69f0608df2bf7bcad5c5880a47a20a0699e3e"},
{file = "pandas-1.4.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:a3924692160e3d847e18702bb048dc38e0e13411d2b503fecb1adf0fcf950ba4"},
{file = "pandas-1.4.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:07238a58d7cbc8a004855ade7b75bbd22c0db4b0ffccc721556bab8a095515f6"},
{file = "pandas-1.4.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:755679c49460bd0d2f837ab99f0a26948e68fa0718b7e42afbabd074d945bf84"},
{file = "pandas-1.4.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:41fc406e374590a3d492325b889a2686b31e7a7780bec83db2512988550dadbf"},
{file = "pandas-1.4.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1d9382f72a4f0e93909feece6fef5500e838ce1c355a581b3d8f259839f2ea76"},
{file = "pandas-1.4.3-cp39-cp39-win32.whl", hash = "sha256:0daf876dba6c622154b2e6741f29e87161f844e64f84801554f879d27ba63c0d"},
{file = "pandas-1.4.3-cp39-cp39-win_amd64.whl", hash = "sha256:721a3dd2f06ef942f83a819c0f3f6a648b2830b191a72bbe9451bcd49c3bd42e"},
{file = "pandas-1.4.3.tar.gz", hash = "sha256:2ff7788468e75917574f080cd4681b27e1a7bf36461fe968b49a87b5a54d007c"},
]
pandoc = [
{file = "pandoc-2.2.tar.gz", hash = "sha256:d063ee252f2761010f16cf3a14912ad9245187c24c56f4b1ad9696e104fe6e1e"},
]
pandocfilters = [
{file = "pandocfilters-1.5.0-py2.py3-none-any.whl", hash = "sha256:33aae3f25fd1a026079f5d27bdd52496f0e0803b3469282162bafdcbdf6ef14f"},
{file = "pandocfilters-1.5.0.tar.gz", hash = "sha256:0b679503337d233b4339a817bfc8c50064e2eff681314376a47cb582305a7a38"},
]
parso = [
{file = "parso-0.8.3-py2.py3-none-any.whl", hash = "sha256:c001d4636cd3aecdaf33cbb40aebb59b094be2a74c556778ef5576c175e19e75"},
{file = "parso-0.8.3.tar.gz", hash = "sha256:8c07be290bb59f03588915921e29e8a50002acaf2cdc5fa0e0114f91709fafa0"},
]
pastel = [
{file = "pastel-0.2.1-py2.py3-none-any.whl", hash = "sha256:4349225fcdf6c2bb34d483e523475de5bb04a5c10ef711263452cb37d7dd4364"},
{file = "pastel-0.2.1.tar.gz", hash = "sha256:e6581ac04e973cac858828c6202c1e1e81fee1dc7de7683f3e1ffe0bfd8a573d"},
]
pathspec = [
{file = "pathspec-0.9.0-py2.py3-none-any.whl", hash = "sha256:7d15c4ddb0b5c802d161efc417ec1a2558ea2653c2e8ad9c19098201dc1c993a"},
{file = "pathspec-0.9.0.tar.gz", hash = "sha256:e564499435a2673d586f6b2130bb5b95f04a3ba06f81b8f895b651a3c76aabb1"},
]
patsy = [
{file = "patsy-0.5.2-py2.py3-none-any.whl", hash = "sha256:cc80955ae8c13a7e7c4051eda7b277c8f909f50bc7d73e124bc38e2ee3d95041"},
{file = "patsy-0.5.2.tar.gz", hash = "sha256:5053de7804676aba62783dbb0f23a2b3d74e35e5bfa238b88b7cbf148a38b69d"},
]
pexpect = [
{file = "pexpect-4.8.0-py2.py3-none-any.whl", hash = "sha256:0b48a55dcb3c05f3329815901ea4fc1537514d6ba867a152b581d69ae3710937"},
{file = "pexpect-4.8.0.tar.gz", hash = "sha256:fc65a43959d153d0114afe13997d439c22823a27cefceb5ff35c2178c6784c0c"},
]
pickleshare = [
{file = "pickleshare-0.7.5-py2.py3-none-any.whl", hash = "sha256:9649af414d74d4df115d5d718f82acb59c9d418196b7b4290ed47a12ce62df56"},
{file = "pickleshare-0.7.5.tar.gz", hash = "sha256:87683d47965c1da65cdacaf31c8441d12b8044cdec9aca500cd78fc2c683afca"},
]
pillow = [
{file = "Pillow-9.2.0-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:a9c9bc489f8ab30906d7a85afac4b4944a572a7432e00698a7239f44a44e6efb"},
{file = "Pillow-9.2.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:510cef4a3f401c246cfd8227b300828715dd055463cdca6176c2e4036df8bd4f"},
{file = "Pillow-9.2.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7888310f6214f19ab2b6df90f3f06afa3df7ef7355fc025e78a3044737fab1f5"},
{file = "Pillow-9.2.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:831e648102c82f152e14c1a0938689dbb22480c548c8d4b8b248b3e50967b88c"},
{file = "Pillow-9.2.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1cc1d2451e8a3b4bfdb9caf745b58e6c7a77d2e469159b0d527a4554d73694d1"},
{file = "Pillow-9.2.0-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:136659638f61a251e8ed3b331fc6ccd124590eeff539de57c5f80ef3a9594e58"},
{file = "Pillow-9.2.0-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:6e8c66f70fb539301e064f6478d7453e820d8a2c631da948a23384865cd95544"},
{file = "Pillow-9.2.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:37ff6b522a26d0538b753f0b4e8e164fdada12db6c6f00f62145d732d8a3152e"},
{file = "Pillow-9.2.0-cp310-cp310-win32.whl", hash = "sha256:c79698d4cd9318d9481d89a77e2d3fcaeff5486be641e60a4b49f3d2ecca4e28"},
{file = "Pillow-9.2.0-cp310-cp310-win_amd64.whl", hash = "sha256:254164c57bab4b459f14c64e93df11eff5ded575192c294a0c49270f22c5d93d"},
{file = "Pillow-9.2.0-cp311-cp311-macosx_10_10_universal2.whl", hash = "sha256:408673ed75594933714482501fe97e055a42996087eeca7e5d06e33218d05aa8"},
{file = "Pillow-9.2.0-cp311-cp311-macosx_11_0_universal2.whl", hash = "sha256:727dd1389bc5cb9827cbd1f9d40d2c2a1a0c9b32dd2261db522d22a604a6eec9"},
{file = "Pillow-9.2.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:50dff9cc21826d2977ef2d2a205504034e3a4563ca6f5db739b0d1026658e004"},
{file = "Pillow-9.2.0-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cb6259196a589123d755380b65127ddc60f4c64b21fc3bb46ce3a6ea663659b0"},
{file = "Pillow-9.2.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7b0554af24df2bf96618dac71ddada02420f946be943b181108cac55a7a2dcd4"},
{file = "Pillow-9.2.0-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:15928f824870535c85dbf949c09d6ae7d3d6ac2d6efec80f3227f73eefba741c"},
{file = "Pillow-9.2.0-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:bdd0de2d64688ecae88dd8935012c4a72681e5df632af903a1dca8c5e7aa871a"},
{file = "Pillow-9.2.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:d5b87da55a08acb586bad5c3aa3b86505f559b84f39035b233d5bf844b0834b1"},
{file = "Pillow-9.2.0-cp311-cp311-win32.whl", hash = "sha256:b6d5e92df2b77665e07ddb2e4dbd6d644b78e4c0d2e9272a852627cdba0d75cf"},
{file = "Pillow-9.2.0-cp311-cp311-win_amd64.whl", hash = "sha256:6bf088c1ce160f50ea40764f825ec9b72ed9da25346216b91361eef8ad1b8f8c"},
{file = "Pillow-9.2.0-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:2c58b24e3a63efd22554c676d81b0e57f80e0a7d3a5874a7e14ce90ec40d3069"},
{file = "Pillow-9.2.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:eef7592281f7c174d3d6cbfbb7ee5984a671fcd77e3fc78e973d492e9bf0eb3f"},
{file = "Pillow-9.2.0-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:dcd7b9c7139dc8258d164b55696ecd16c04607f1cc33ba7af86613881ffe4ac8"},
{file = "Pillow-9.2.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a138441e95562b3c078746a22f8fca8ff1c22c014f856278bdbdd89ca36cff1b"},
{file = "Pillow-9.2.0-cp37-cp37m-manylinux_2_28_aarch64.whl", hash = "sha256:93689632949aff41199090eff5474f3990b6823404e45d66a5d44304e9cdc467"},
{file = "Pillow-9.2.0-cp37-cp37m-manylinux_2_28_x86_64.whl", hash = "sha256:f3fac744f9b540148fa7715a435d2283b71f68bfb6d4aae24482a890aed18b59"},
{file = "Pillow-9.2.0-cp37-cp37m-win32.whl", hash = "sha256:fa768eff5f9f958270b081bb33581b4b569faabf8774726b283edb06617101dc"},
{file = "Pillow-9.2.0-cp37-cp37m-win_amd64.whl", hash = "sha256:69bd1a15d7ba3694631e00df8de65a8cb031911ca11f44929c97fe05eb9b6c1d"},
{file = "Pillow-9.2.0-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:030e3460861488e249731c3e7ab59b07c7853838ff3b8e16aac9561bb345da14"},
{file = "Pillow-9.2.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:74a04183e6e64930b667d321524e3c5361094bb4af9083db5c301db64cd341f3"},
{file = "Pillow-9.2.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2d33a11f601213dcd5718109c09a52c2a1c893e7461f0be2d6febc2879ec2402"},
{file = "Pillow-9.2.0-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1fd6f5e3c0e4697fa7eb45b6e93996299f3feee73a3175fa451f49a74d092b9f"},
{file = "Pillow-9.2.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a647c0d4478b995c5e54615a2e5360ccedd2f85e70ab57fbe817ca613d5e63b8"},
{file = "Pillow-9.2.0-cp38-cp38-manylinux_2_28_aarch64.whl", hash = "sha256:4134d3f1ba5f15027ff5c04296f13328fecd46921424084516bdb1b2548e66ff"},
{file = "Pillow-9.2.0-cp38-cp38-manylinux_2_28_x86_64.whl", hash = "sha256:bc431b065722a5ad1dfb4df354fb9333b7a582a5ee39a90e6ffff688d72f27a1"},
{file = "Pillow-9.2.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:1536ad017a9f789430fb6b8be8bf99d2f214c76502becc196c6f2d9a75b01b76"},
{file = "Pillow-9.2.0-cp38-cp38-win32.whl", hash = "sha256:2ad0d4df0f5ef2247e27fc790d5c9b5a0af8ade9ba340db4a73bb1a4a3e5fb4f"},
{file = "Pillow-9.2.0-cp38-cp38-win_amd64.whl", hash = "sha256:ec52c351b35ca269cb1f8069d610fc45c5bd38c3e91f9ab4cbbf0aebc136d9c8"},
{file = "Pillow-9.2.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:0ed2c4ef2451de908c90436d6e8092e13a43992f1860275b4d8082667fbb2ffc"},
{file = "Pillow-9.2.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:4ad2f835e0ad81d1689f1b7e3fbac7b01bb8777d5a985c8962bedee0cc6d43da"},
{file = "Pillow-9.2.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ea98f633d45f7e815db648fd7ff0f19e328302ac36427343e4432c84432e7ff4"},
{file = "Pillow-9.2.0-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7761afe0126d046974a01e030ae7529ed0ca6a196de3ec6937c11df0df1bc91c"},
{file = "Pillow-9.2.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9a54614049a18a2d6fe156e68e188da02a046a4a93cf24f373bffd977e943421"},
{file = "Pillow-9.2.0-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:5aed7dde98403cd91d86a1115c78d8145c83078e864c1de1064f52e6feb61b20"},
{file = "Pillow-9.2.0-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:13b725463f32df1bfeacbf3dd197fb358ae8ebcd8c5548faa75126ea425ccb60"},
{file = "Pillow-9.2.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:808add66ea764ed97d44dda1ac4f2cfec4c1867d9efb16a33d158be79f32b8a4"},
{file = "Pillow-9.2.0-cp39-cp39-win32.whl", hash = "sha256:337a74fd2f291c607d220c793a8135273c4c2ab001b03e601c36766005f36885"},
{file = "Pillow-9.2.0-cp39-cp39-win_amd64.whl", hash = "sha256:fac2d65901fb0fdf20363fbd345c01958a742f2dc62a8dd4495af66e3ff502a4"},
{file = "Pillow-9.2.0-pp37-pypy37_pp73-macosx_10_10_x86_64.whl", hash = "sha256:ad2277b185ebce47a63f4dc6302e30f05762b688f8dc3de55dbae4651872cdf3"},
{file = "Pillow-9.2.0-pp37-pypy37_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7c7b502bc34f6e32ba022b4a209638f9e097d7a9098104ae420eb8186217ebbb"},
{file = "Pillow-9.2.0-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3d1f14f5f691f55e1b47f824ca4fdcb4b19b4323fe43cc7bb105988cad7496be"},
{file = "Pillow-9.2.0-pp37-pypy37_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:dfe4c1fedfde4e2fbc009d5ad420647f7730d719786388b7de0999bf32c0d9fd"},
{file = "Pillow-9.2.0-pp38-pypy38_pp73-macosx_10_10_x86_64.whl", hash = "sha256:f07f1f00e22b231dd3d9b9208692042e29792d6bd4f6639415d2f23158a80013"},
{file = "Pillow-9.2.0-pp38-pypy38_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1802f34298f5ba11d55e5bb09c31997dc0c6aed919658dfdf0198a2fe75d5490"},
{file = "Pillow-9.2.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:17d4cafe22f050b46d983b71c707162d63d796a1235cdf8b9d7a112e97b15bac"},
{file = "Pillow-9.2.0-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:96b5e6874431df16aee0c1ba237574cb6dff1dcb173798faa6a9d8b399a05d0e"},
{file = "Pillow-9.2.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:0030fdbd926fb85844b8b92e2f9449ba89607231d3dd597a21ae72dc7fe26927"},
{file = "Pillow-9.2.0.tar.gz", hash = "sha256:75e636fd3e0fb872693f23ccb8a5ff2cd578801251f3a4f6854c6a5d437d3c04"},
]
pkginfo = [
{file = "pkginfo-1.8.3-py2.py3-none-any.whl", hash = "sha256:848865108ec99d4901b2f7e84058b6e7660aae8ae10164e015a6dcf5b242a594"},
{file = "pkginfo-1.8.3.tar.gz", hash = "sha256:a84da4318dd86f870a9447a8c98340aa06216bfc6f2b7bdc4b8766984ae1867c"},
]
pkgutil-resolve-name = [
{file = "pkgutil_resolve_name-1.3.10-py3-none-any.whl", hash = "sha256:ca27cc078d25c5ad71a9de0a7a330146c4e014c2462d9af19c6b828280649c5e"},
{file = "pkgutil_resolve_name-1.3.10.tar.gz", hash = "sha256:357d6c9e6a755653cfd78893817c0853af365dd51ec97f3d358a819373bbd174"},
]
platformdirs = [
{file = "platformdirs-2.5.2-py3-none-any.whl", hash = "sha256:027d8e83a2d7de06bbac4e5ef7e023c02b863d7ea5d079477e722bb41ab25788"},
{file = "platformdirs-2.5.2.tar.gz", hash = "sha256:58c8abb07dcb441e6ee4b11d8df0ac856038f944ab98b7be6b27b2a3c7feef19"},
]
pluggy = [
{file = "pluggy-1.0.0-py2.py3-none-any.whl", hash = "sha256:74134bbf457f031a36d68416e1509f34bd5ccc019f0bcc952c7b909d06b37bd3"},
{file = "pluggy-1.0.0.tar.gz", hash = "sha256:4224373bacce55f955a878bf9cfa763c1e360858e330072059e10bad68531159"},
]
plumbum = [
{file = "plumbum-1.7.2-py2.py3-none-any.whl", hash = "sha256:0bbf431e31da988405de2fb36c3226f09c0c9cdf69c8480f8997f4b94b7370a1"},
{file = "plumbum-1.7.2.tar.gz", hash = "sha256:0d1bf908076bbd0484d16412479cb97d6843069ee19f99e267e11dd980040523"},
]
ply = [
{file = "ply-3.11-py2.py3-none-any.whl", hash = "sha256:096f9b8350b65ebd2fd1346b12452efe5b9607f7482813ffca50c22722a807ce"},
{file = "ply-3.11.tar.gz", hash = "sha256:00c7c1aaa88358b9c765b6d3000c6eec0ba42abca5351b095321aef446081da3"},
]
poethepoet = [
{file = "poethepoet-0.16.0-py3-none-any.whl", hash = "sha256:87482ea8bba4e5db4abbd8e6360baee73b2ce0f3d5f5e99e81cdfa39d72d118f"},
{file = "poethepoet-0.16.0.tar.gz", hash = "sha256:6455aec39f198be92dbf210a4416e1635119e967204c092b431c8b10024db1d1"},
]
progressbar2 = [
{file = "progressbar2-4.0.0-py2.py3-none-any.whl", hash = "sha256:2562ba3e554433f08e81fb7b786208b19de135f3ca1c5da1787d9b05558e6247"},
{file = "progressbar2-4.0.0.tar.gz", hash = "sha256:14d3165a1781d053ffaa117daf27cc706128d2ec1d2977fdb05b6bb079888013"},
]
prometheus-client = [
{file = "prometheus_client-0.14.1-py3-none-any.whl", hash = "sha256:522fded625282822a89e2773452f42df14b5a8e84a86433e3f8a189c1d54dc01"},
{file = "prometheus_client-0.14.1.tar.gz", hash = "sha256:5459c427624961076277fdc6dc50540e2bacb98eebde99886e59ec55ed92093a"},
]
prompt-toolkit = [
{file = "prompt_toolkit-3.0.30-py3-none-any.whl", hash = "sha256:d8916d3f62a7b67ab353a952ce4ced6a1d2587dfe9ef8ebc30dd7c386751f289"},
{file = "prompt_toolkit-3.0.30.tar.gz", hash = "sha256:859b283c50bde45f5f97829f77a4674d1c1fcd88539364f1b28a37805cfd89c0"},
]
protobuf = [
{file = "protobuf-3.19.4-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:f51d5a9f137f7a2cec2d326a74b6e3fc79d635d69ffe1b036d39fc7d75430d37"},
{file = "protobuf-3.19.4-cp310-cp310-manylinux2014_aarch64.whl", hash = "sha256:09297b7972da685ce269ec52af761743714996b4381c085205914c41fcab59fb"},
{file = "protobuf-3.19.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:072fbc78d705d3edc7ccac58a62c4c8e0cec856987da7df8aca86e647be4e35c"},
{file = "protobuf-3.19.4-cp310-cp310-win32.whl", hash = "sha256:7bb03bc2873a2842e5ebb4801f5c7ff1bfbdf426f85d0172f7644fcda0671ae0"},
{file = "protobuf-3.19.4-cp310-cp310-win_amd64.whl", hash = "sha256:f358aa33e03b7a84e0d91270a4d4d8f5df6921abe99a377828839e8ed0c04e07"},
{file = "protobuf-3.19.4-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:1c91ef4110fdd2c590effb5dca8fdbdcb3bf563eece99287019c4204f53d81a4"},
{file = "protobuf-3.19.4-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c438268eebb8cf039552897d78f402d734a404f1360592fef55297285f7f953f"},
{file = "protobuf-3.19.4-cp36-cp36m-win32.whl", hash = "sha256:835a9c949dc193953c319603b2961c5c8f4327957fe23d914ca80d982665e8ee"},
{file = "protobuf-3.19.4-cp36-cp36m-win_amd64.whl", hash = "sha256:4276cdec4447bd5015453e41bdc0c0c1234eda08420b7c9a18b8d647add51e4b"},
{file = "protobuf-3.19.4-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:6cbc312be5e71869d9d5ea25147cdf652a6781cf4d906497ca7690b7b9b5df13"},
{file = "protobuf-3.19.4-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:54a1473077f3b616779ce31f477351a45b4fef8c9fd7892d6d87e287a38df368"},
{file = "protobuf-3.19.4-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:435bb78b37fc386f9275a7035fe4fb1364484e38980d0dd91bc834a02c5ec909"},
{file = "protobuf-3.19.4-cp37-cp37m-win32.whl", hash = "sha256:16f519de1313f1b7139ad70772e7db515b1420d208cb16c6d7858ea989fc64a9"},
{file = "protobuf-3.19.4-cp37-cp37m-win_amd64.whl", hash = "sha256:cdc076c03381f5c1d9bb1abdcc5503d9ca8b53cf0a9d31a9f6754ec9e6c8af0f"},
{file = "protobuf-3.19.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:69da7d39e39942bd52848438462674c463e23963a1fdaa84d88df7fbd7e749b2"},
{file = "protobuf-3.19.4-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:48ed3877fa43e22bcacc852ca76d4775741f9709dd9575881a373bd3e85e54b2"},
{file = "protobuf-3.19.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bd95d1dfb9c4f4563e6093a9aa19d9c186bf98fa54da5252531cc0d3a07977e7"},
{file = "protobuf-3.19.4-cp38-cp38-win32.whl", hash = "sha256:b38057450a0c566cbd04890a40edf916db890f2818e8682221611d78dc32ae26"},
{file = "protobuf-3.19.4-cp38-cp38-win_amd64.whl", hash = "sha256:7ca7da9c339ca8890d66958f5462beabd611eca6c958691a8fe6eccbd1eb0c6e"},
{file = "protobuf-3.19.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:36cecbabbda242915529b8ff364f2263cd4de7c46bbe361418b5ed859677ba58"},
{file = "protobuf-3.19.4-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:c1068287025f8ea025103e37d62ffd63fec8e9e636246b89c341aeda8a67c934"},
{file = "protobuf-3.19.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:96bd766831596d6014ca88d86dc8fe0fb2e428c0b02432fd9db3943202bf8c5e"},
{file = "protobuf-3.19.4-cp39-cp39-win32.whl", hash = "sha256:84123274d982b9e248a143dadd1b9815049f4477dc783bf84efe6250eb4b836a"},
{file = "protobuf-3.19.4-cp39-cp39-win_amd64.whl", hash = "sha256:3112b58aac3bac9c8be2b60a9daf6b558ca3f7681c130dcdd788ade7c9ffbdca"},
{file = "protobuf-3.19.4-py2.py3-none-any.whl", hash = "sha256:8961c3a78ebfcd000920c9060a262f082f29838682b1f7201889300c1fbe0616"},
{file = "protobuf-3.19.4.tar.gz", hash = "sha256:9df0c10adf3e83015ced42a9a7bd64e13d06c4cf45c340d2c63020ea04499d0a"},
]
psutil = [
{file = "psutil-5.9.1-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:799759d809c31aab5fe4579e50addf84565e71c1dc9f1c31258f159ff70d3f87"},
{file = "psutil-5.9.1-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:9272167b5f5fbfe16945be3db475b3ce8d792386907e673a209da686176552af"},
{file = "psutil-5.9.1-cp27-cp27m-win32.whl", hash = "sha256:0904727e0b0a038830b019551cf3204dd48ef5c6868adc776e06e93d615fc5fc"},
{file = "psutil-5.9.1-cp27-cp27m-win_amd64.whl", hash = "sha256:e7e10454cb1ab62cc6ce776e1c135a64045a11ec4c6d254d3f7689c16eb3efd2"},
{file = "psutil-5.9.1-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:56960b9e8edcca1456f8c86a196f0c3d8e3e361320071c93378d41445ffd28b0"},
{file = "psutil-5.9.1-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:44d1826150d49ffd62035785a9e2c56afcea66e55b43b8b630d7706276e87f22"},
{file = "psutil-5.9.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:c7be9d7f5b0d206f0bbc3794b8e16fb7dbc53ec9e40bbe8787c6f2d38efcf6c9"},
{file = "psutil-5.9.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:abd9246e4cdd5b554a2ddd97c157e292ac11ef3e7af25ac56b08b455c829dca8"},
{file = "psutil-5.9.1-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:29a442e25fab1f4d05e2655bb1b8ab6887981838d22effa2396d584b740194de"},
{file = "psutil-5.9.1-cp310-cp310-win32.whl", hash = "sha256:20b27771b077dcaa0de1de3ad52d22538fe101f9946d6dc7869e6f694f079329"},
{file = "psutil-5.9.1-cp310-cp310-win_amd64.whl", hash = "sha256:58678bbadae12e0db55186dc58f2888839228ac9f41cc7848853539b70490021"},
{file = "psutil-5.9.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:3a76ad658641172d9c6e593de6fe248ddde825b5866464c3b2ee26c35da9d237"},
{file = "psutil-5.9.1-cp36-cp36m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a6a11e48cb93a5fa606306493f439b4aa7c56cb03fc9ace7f6bfa21aaf07c453"},
{file = "psutil-5.9.1-cp36-cp36m-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:068935df39055bf27a29824b95c801c7a5130f118b806eee663cad28dca97685"},
{file = "psutil-5.9.1-cp36-cp36m-win32.whl", hash = "sha256:0f15a19a05f39a09327345bc279c1ba4a8cfb0172cc0d3c7f7d16c813b2e7d36"},
{file = "psutil-5.9.1-cp36-cp36m-win_amd64.whl", hash = "sha256:db417f0865f90bdc07fa30e1aadc69b6f4cad7f86324b02aa842034efe8d8c4d"},
{file = "psutil-5.9.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:91c7ff2a40c373d0cc9121d54bc5f31c4fa09c346528e6a08d1845bce5771ffc"},
{file = "psutil-5.9.1-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fea896b54f3a4ae6f790ac1d017101252c93f6fe075d0e7571543510f11d2676"},
{file = "psutil-5.9.1-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3054e923204b8e9c23a55b23b6df73a8089ae1d075cb0bf711d3e9da1724ded4"},
{file = "psutil-5.9.1-cp37-cp37m-win32.whl", hash = "sha256:d2d006286fbcb60f0b391741f520862e9b69f4019b4d738a2a45728c7e952f1b"},
{file = "psutil-5.9.1-cp37-cp37m-win_amd64.whl", hash = "sha256:b14ee12da9338f5e5b3a3ef7ca58b3cba30f5b66f7662159762932e6d0b8f680"},
{file = "psutil-5.9.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:19f36c16012ba9cfc742604df189f2f28d2720e23ff7d1e81602dbe066be9fd1"},
{file = "psutil-5.9.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:944c4b4b82dc4a1b805329c980f270f170fdc9945464223f2ec8e57563139cf4"},
{file = "psutil-5.9.1-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4b6750a73a9c4a4e689490ccb862d53c7b976a2a35c4e1846d049dcc3f17d83b"},
{file = "psutil-5.9.1-cp38-cp38-win32.whl", hash = "sha256:a8746bfe4e8f659528c5c7e9af5090c5a7d252f32b2e859c584ef7d8efb1e689"},
{file = "psutil-5.9.1-cp38-cp38-win_amd64.whl", hash = "sha256:79c9108d9aa7fa6fba6e668b61b82facc067a6b81517cab34d07a84aa89f3df0"},
{file = "psutil-5.9.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:28976df6c64ddd6320d281128817f32c29b539a52bdae5e192537bc338a9ec81"},
{file = "psutil-5.9.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b88f75005586131276634027f4219d06e0561292be8bd6bc7f2f00bdabd63c4e"},
{file = "psutil-5.9.1-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:645bd4f7bb5b8633803e0b6746ff1628724668681a434482546887d22c7a9537"},
{file = "psutil-5.9.1-cp39-cp39-win32.whl", hash = "sha256:32c52611756096ae91f5d1499fe6c53b86f4a9ada147ee42db4991ba1520e574"},
{file = "psutil-5.9.1-cp39-cp39-win_amd64.whl", hash = "sha256:f65f9a46d984b8cd9b3750c2bdb419b2996895b005aefa6cbaba9a143b1ce2c5"},
{file = "psutil-5.9.1.tar.gz", hash = "sha256:57f1819b5d9e95cdfb0c881a8a5b7d542ed0b7c522d575706a80bedc848c8954"},
]
ptyprocess = [
{file = "ptyprocess-0.7.0-py2.py3-none-any.whl", hash = "sha256:4b41f3967fce3af57cc7e94b888626c18bf37a083e3651ca8feeb66d492fef35"},
{file = "ptyprocess-0.7.0.tar.gz", hash = "sha256:5c5d0a3b48ceee0b48485e0c26037c0acd7d29765ca3fbb5cb3831d347423220"},
]
pure-eval = [
{file = "pure_eval-0.2.2-py3-none-any.whl", hash = "sha256:01eaab343580944bc56080ebe0a674b39ec44a945e6d09ba7db3cb8cec289350"},
{file = "pure_eval-0.2.2.tar.gz", hash = "sha256:2b45320af6dfaa1750f543d714b6d1c520a1688dec6fd24d339063ce0aaa9ac3"},
]
py = [
{file = "py-1.11.0-py2.py3-none-any.whl", hash = "sha256:607c53218732647dff4acdfcd50cb62615cedf612e72d1724fb1a0cc6405b378"},
{file = "py-1.11.0.tar.gz", hash = "sha256:51c75c4126074b472f746a24399ad32f6053d1b34b68d2fa41e558e6f4a98719"},
]
pyasn1 = [
{file = "pyasn1-0.4.8-py2.4.egg", hash = "sha256:fec3e9d8e36808a28efb59b489e4528c10ad0f480e57dcc32b4de5c9d8c9fdf3"},
{file = "pyasn1-0.4.8-py2.5.egg", hash = "sha256:0458773cfe65b153891ac249bcf1b5f8f320b7c2ce462151f8fa74de8934becf"},
{file = "pyasn1-0.4.8-py2.6.egg", hash = "sha256:5c9414dcfede6e441f7e8f81b43b34e834731003427e5b09e4e00e3172a10f00"},
{file = "pyasn1-0.4.8-py2.7.egg", hash = "sha256:6e7545f1a61025a4e58bb336952c5061697da694db1cae97b116e9c46abcf7c8"},
{file = "pyasn1-0.4.8-py2.py3-none-any.whl", hash = "sha256:39c7e2ec30515947ff4e87fb6f456dfc6e84857d34be479c9d4a4ba4bf46aa5d"},
{file = "pyasn1-0.4.8-py3.1.egg", hash = "sha256:78fa6da68ed2727915c4767bb386ab32cdba863caa7dbe473eaae45f9959da86"},
{file = "pyasn1-0.4.8-py3.2.egg", hash = "sha256:08c3c53b75eaa48d71cf8c710312316392ed40899cb34710d092e96745a358b7"},
{file = "pyasn1-0.4.8-py3.3.egg", hash = "sha256:03840c999ba71680a131cfaee6fab142e1ed9bbd9c693e285cc6aca0d555e576"},
{file = "pyasn1-0.4.8-py3.4.egg", hash = "sha256:7ab8a544af125fb704feadb008c99a88805126fb525280b2270bb25cc1d78a12"},
{file = "pyasn1-0.4.8-py3.5.egg", hash = "sha256:e89bf84b5437b532b0803ba5c9a5e054d21fec423a89952a74f87fa2c9b7bce2"},
{file = "pyasn1-0.4.8-py3.6.egg", hash = "sha256:014c0e9976956a08139dc0712ae195324a75e142284d5f87f1a87ee1b068a359"},
{file = "pyasn1-0.4.8-py3.7.egg", hash = "sha256:99fcc3c8d804d1bc6d9a099921e39d827026409a58f2a720dcdb89374ea0c776"},
{file = "pyasn1-0.4.8.tar.gz", hash = "sha256:aef77c9fb94a3ac588e87841208bdec464471d9871bd5050a287cc9a475cd0ba"},
]
pyasn1-modules = [
{file = "pyasn1-modules-0.2.8.tar.gz", hash = "sha256:905f84c712230b2c592c19470d3ca8d552de726050d1d1716282a1f6146be65e"},
{file = "pyasn1_modules-0.2.8-py2.4.egg", hash = "sha256:0fe1b68d1e486a1ed5473f1302bd991c1611d319bba158e98b106ff86e1d7199"},
{file = "pyasn1_modules-0.2.8-py2.5.egg", hash = "sha256:fe0644d9ab041506b62782e92b06b8c68cca799e1a9636ec398675459e031405"},
{file = "pyasn1_modules-0.2.8-py2.6.egg", hash = "sha256:a99324196732f53093a84c4369c996713eb8c89d360a496b599fb1a9c47fc3eb"},
{file = "pyasn1_modules-0.2.8-py2.7.egg", hash = "sha256:0845a5582f6a02bb3e1bde9ecfc4bfcae6ec3210dd270522fee602365430c3f8"},
{file = "pyasn1_modules-0.2.8-py2.py3-none-any.whl", hash = "sha256:a50b808ffeb97cb3601dd25981f6b016cbb3d31fbf57a8b8a87428e6158d0c74"},
{file = "pyasn1_modules-0.2.8-py3.1.egg", hash = "sha256:f39edd8c4ecaa4556e989147ebf219227e2cd2e8a43c7e7fcb1f1c18c5fd6a3d"},
{file = "pyasn1_modules-0.2.8-py3.2.egg", hash = "sha256:b80486a6c77252ea3a3e9b1e360bc9cf28eaac41263d173c032581ad2f20fe45"},
{file = "pyasn1_modules-0.2.8-py3.3.egg", hash = "sha256:65cebbaffc913f4fe9e4808735c95ea22d7a7775646ab690518c056784bc21b4"},
{file = "pyasn1_modules-0.2.8-py3.4.egg", hash = "sha256:15b7c67fabc7fc240d87fb9aabf999cf82311a6d6fb2c70d00d3d0604878c811"},
{file = "pyasn1_modules-0.2.8-py3.5.egg", hash = "sha256:426edb7a5e8879f1ec54a1864f16b882c2837bfd06eee62f2c982315ee2473ed"},
{file = "pyasn1_modules-0.2.8-py3.6.egg", hash = "sha256:cbac4bc38d117f2a49aeedec4407d23e8866ea4ac27ff2cf7fb3e5b570df19e0"},
{file = "pyasn1_modules-0.2.8-py3.7.egg", hash = "sha256:c29a5e5cc7a3f05926aff34e097e84f8589cd790ce0ed41b67aed6857b26aafd"},
]
pycodestyle = [
{file = "pycodestyle-2.8.0-py2.py3-none-any.whl", hash = "sha256:720f8b39dde8b293825e7ff02c475f3077124006db4f440dcbc9a20b76548a20"},
{file = "pycodestyle-2.8.0.tar.gz", hash = "sha256:eddd5847ef438ea1c7870ca7eb78a9d47ce0cdb4851a5523949f2601d0cbbe7f"},
]
pycparser = [
{file = "pycparser-2.21-py2.py3-none-any.whl", hash = "sha256:8ee45429555515e1f6b185e78100aea234072576aa43ab53aefcae078162fca9"},
{file = "pycparser-2.21.tar.gz", hash = "sha256:e644fdec12f7872f86c58ff790da456218b10f863970249516d60a5eaca77206"},
]
pydata-sphinx-theme = [
{file = "pydata_sphinx_theme-0.9.0-py3-none-any.whl", hash = "sha256:b22b442a6d6437e5eaf0a1f057169ffcb31eaa9f10be7d5481a125e735c71c12"},
{file = "pydata_sphinx_theme-0.9.0.tar.gz", hash = "sha256:03598a86915b596f4bf80bef79a4d33276a83e670bf360def699dbb9f99dc57a"},
]
pydot = [
{file = "pydot-1.4.2-py2.py3-none-any.whl", hash = "sha256:66c98190c65b8d2e2382a441b4c0edfdb4f4c025ef9cb9874de478fb0793a451"},
{file = "pydot-1.4.2.tar.gz", hash = "sha256:248081a39bcb56784deb018977e428605c1c758f10897a339fce1dd728ff007d"},
]
pydotplus = [
{file = "pydotplus-2.0.2.tar.gz", hash = "sha256:91e85e9ee9b85d2391ead7d635e3d9c7f5f44fd60a60e59b13e2403fa66505c4"},
]
pyflakes = [
{file = "pyflakes-2.4.0-py2.py3-none-any.whl", hash = "sha256:3bb3a3f256f4b7968c9c788781e4ff07dce46bdf12339dcda61053375426ee2e"},
{file = "pyflakes-2.4.0.tar.gz", hash = "sha256:05a85c2872edf37a4ed30b0cce2f6093e1d0581f8c19d7393122da7e25b2b24c"},
]
pygam = [
{file = "pygam-0.8.0-py2.py3-none-any.whl", hash = "sha256:198bd478700520b7c399cc4bcbc011e46850969c32fb09ef0b7a4bbb14e842a5"},
{file = "pygam-0.8.0.tar.gz", hash = "sha256:5cae01aea8b2fede72a6da0aba1490213af54b3476745666af26bbe700479166"},
]
pygments = [
{file = "Pygments-2.13.0-py3-none-any.whl", hash = "sha256:f643f331ab57ba3c9d89212ee4a2dabc6e94f117cf4eefde99a0574720d14c42"},
{file = "Pygments-2.13.0.tar.gz", hash = "sha256:56a8508ae95f98e2b9bdf93a6be5ae3f7d8af858b43e02c5a2ff083726be40c1"},
]
pygraphviz = [
{file = "pygraphviz-1.9.zip", hash = "sha256:fa18f7c6cea28341a4e466ed0cf05682b0a68288afe8dd7c9426782f7c1ae01c"},
]
pyparsing = [
{file = "pyparsing-3.0.9-py3-none-any.whl", hash = "sha256:5026bae9a10eeaefb61dab2f09052b9f4307d44aee4eda64b309723d8d206bbc"},
{file = "pyparsing-3.0.9.tar.gz", hash = "sha256:2b020ecf7d21b687f219b71ecad3631f644a47f01403fa1d1036b0c6416d70fb"},
]
pyro-api = [
{file = "pyro-api-0.1.2.tar.gz", hash = "sha256:a1b900d9580aa1c2fab3b123ab7ff33413744da7c5f440bd4aadc4d40d14d920"},
{file = "pyro_api-0.1.2-py3-none-any.whl", hash = "sha256:10e0e42e9e4401ce464dab79c870e50dfb4f413d326fa777f3582928ef9caf8f"},
]
pyro-ppl = [
{file = "pyro-ppl-1.8.1.tar.gz", hash = "sha256:d7c049eb2e7485a612b4dd99c24c309cc860c7cbc6b1973387034f5436d1c8d6"},
{file = "pyro_ppl-1.8.1-py3-none-any.whl", hash = "sha256:ca01ab4565eb9a1af4a60dbc481da5cb6f5fe5a72efa19e83638e03683efbca6"},
]
pyrsistent = [
{file = "pyrsistent-0.18.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:df46c854f490f81210870e509818b729db4488e1f30f2a1ce1698b2295a878d1"},
{file = "pyrsistent-0.18.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d45866ececf4a5fff8742c25722da6d4c9e180daa7b405dc0a2a2790d668c26"},
{file = "pyrsistent-0.18.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4ed6784ceac462a7d6fcb7e9b663e93b9a6fb373b7f43594f9ff68875788e01e"},
{file = "pyrsistent-0.18.1-cp310-cp310-win32.whl", hash = "sha256:e4f3149fd5eb9b285d6bfb54d2e5173f6a116fe19172686797c056672689daf6"},
{file = "pyrsistent-0.18.1-cp310-cp310-win_amd64.whl", hash = "sha256:636ce2dc235046ccd3d8c56a7ad54e99d5c1cd0ef07d9ae847306c91d11b5fec"},
{file = "pyrsistent-0.18.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:e92a52c166426efbe0d1ec1332ee9119b6d32fc1f0bbfd55d5c1088070e7fc1b"},
{file = "pyrsistent-0.18.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d7a096646eab884bf8bed965bad63ea327e0d0c38989fc83c5ea7b8a87037bfc"},
{file = "pyrsistent-0.18.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cdfd2c361b8a8e5d9499b9082b501c452ade8bbf42aef97ea04854f4a3f43b22"},
{file = "pyrsistent-0.18.1-cp37-cp37m-win32.whl", hash = "sha256:7ec335fc998faa4febe75cc5268a9eac0478b3f681602c1f27befaf2a1abe1d8"},
{file = "pyrsistent-0.18.1-cp37-cp37m-win_amd64.whl", hash = "sha256:6455fc599df93d1f60e1c5c4fe471499f08d190d57eca040c0ea182301321286"},
{file = "pyrsistent-0.18.1-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:fd8da6d0124efa2f67d86fa70c851022f87c98e205f0594e1fae044e7119a5a6"},
{file = "pyrsistent-0.18.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7bfe2388663fd18bd8ce7db2c91c7400bf3e1a9e8bd7d63bf7e77d39051b85ec"},
{file = "pyrsistent-0.18.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0e3e1fcc45199df76053026a51cc59ab2ea3fc7c094c6627e93b7b44cdae2c8c"},
{file = "pyrsistent-0.18.1-cp38-cp38-win32.whl", hash = "sha256:b568f35ad53a7b07ed9b1b2bae09eb15cdd671a5ba5d2c66caee40dbf91c68ca"},
{file = "pyrsistent-0.18.1-cp38-cp38-win_amd64.whl", hash = "sha256:d1b96547410f76078eaf66d282ddca2e4baae8964364abb4f4dcdde855cd123a"},
{file = "pyrsistent-0.18.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:f87cc2863ef33c709e237d4b5f4502a62a00fab450c9e020892e8e2ede5847f5"},
{file = "pyrsistent-0.18.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6bc66318fb7ee012071b2792024564973ecc80e9522842eb4e17743604b5e045"},
{file = "pyrsistent-0.18.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:914474c9f1d93080338ace89cb2acee74f4f666fb0424896fcfb8d86058bf17c"},
{file = "pyrsistent-0.18.1-cp39-cp39-win32.whl", hash = "sha256:1b34eedd6812bf4d33814fca1b66005805d3640ce53140ab8bbb1e2651b0d9bc"},
{file = "pyrsistent-0.18.1-cp39-cp39-win_amd64.whl", hash = "sha256:e24a828f57e0c337c8d8bb9f6b12f09dfdf0273da25fda9e314f0b684b415a07"},
{file = "pyrsistent-0.18.1.tar.gz", hash = "sha256:d4d61f8b993a7255ba714df3aca52700f8125289f84f704cf80916517c46eb96"},
]
pytest = [
{file = "pytest-7.1.2-py3-none-any.whl", hash = "sha256:13d0e3ccfc2b6e26be000cb6568c832ba67ba32e719443bfe725814d3c42433c"},
{file = "pytest-7.1.2.tar.gz", hash = "sha256:a06a0425453864a270bc45e71f783330a7428defb4230fb5e6a731fde06ecd45"},
]
python-dateutil = [
{file = "python-dateutil-2.8.2.tar.gz", hash = "sha256:0123cacc1627ae19ddf3c27a5de5bd67ee4586fbdd6440d9748f8abb483d3e86"},
{file = "python_dateutil-2.8.2-py2.py3-none-any.whl", hash = "sha256:961d03dc3453ebbc59dbdea9e4e11c5651520a876d0f4db161e8674aae935da9"},
]
python-utils = [
{file = "python-utils-3.3.3.tar.gz", hash = "sha256:3b1c8b706e40e91280eec5fa72ea730880a166cee99afa5555b863d55664478d"},
{file = "python_utils-3.3.3-py2.py3-none-any.whl", hash = "sha256:5cb9cf295018202fb4d6abdd694a33a7f08bc0ce1bf1eab8cce80ab2cde35c07"},
]
pytz = [
{file = "pytz-2022.2.1-py2.py3-none-any.whl", hash = "sha256:220f481bdafa09c3955dfbdddb7b57780e9a94f5127e35456a48589b9e0c0197"},
{file = "pytz-2022.2.1.tar.gz", hash = "sha256:cea221417204f2d1a2aa03ddae3e867921971d0d76f14d87abb4414415bbdcf5"},
]
pytz-deprecation-shim = [
{file = "pytz_deprecation_shim-0.1.0.post0-py2.py3-none-any.whl", hash = "sha256:8314c9692a636c8eb3bda879b9f119e350e93223ae83e70e80c31675a0fdc1a6"},
{file = "pytz_deprecation_shim-0.1.0.post0.tar.gz", hash = "sha256:af097bae1b616dde5c5744441e2ddc69e74dfdcb0c263129610d85b87445a59d"},
]
pywin32 = [
{file = "pywin32-304-cp310-cp310-win32.whl", hash = "sha256:3c7bacf5e24298c86314f03fa20e16558a4e4138fc34615d7de4070c23e65af3"},
{file = "pywin32-304-cp310-cp310-win_amd64.whl", hash = "sha256:4f32145913a2447736dad62495199a8e280a77a0ca662daa2332acf849f0be48"},
{file = "pywin32-304-cp310-cp310-win_arm64.whl", hash = "sha256:d3ee45adff48e0551d1aa60d2ec066fec006083b791f5c3527c40cd8aefac71f"},
{file = "pywin32-304-cp311-cp311-win32.whl", hash = "sha256:30c53d6ce44c12a316a06c153ea74152d3b1342610f1b99d40ba2795e5af0269"},
{file = "pywin32-304-cp311-cp311-win_amd64.whl", hash = "sha256:7ffa0c0fa4ae4077e8b8aa73800540ef8c24530057768c3ac57c609f99a14fd4"},
{file = "pywin32-304-cp311-cp311-win_arm64.whl", hash = "sha256:cbbe34dad39bdbaa2889a424d28752f1b4971939b14b1bb48cbf0182a3bcfc43"},
{file = "pywin32-304-cp36-cp36m-win32.whl", hash = "sha256:be253e7b14bc601718f014d2832e4c18a5b023cbe72db826da63df76b77507a1"},
{file = "pywin32-304-cp36-cp36m-win_amd64.whl", hash = "sha256:de9827c23321dcf43d2f288f09f3b6d772fee11e809015bdae9e69fe13213988"},
{file = "pywin32-304-cp37-cp37m-win32.whl", hash = "sha256:f64c0377cf01b61bd5e76c25e1480ca8ab3b73f0c4add50538d332afdf8f69c5"},
{file = "pywin32-304-cp37-cp37m-win_amd64.whl", hash = "sha256:bb2ea2aa81e96eee6a6b79d87e1d1648d3f8b87f9a64499e0b92b30d141e76df"},
{file = "pywin32-304-cp38-cp38-win32.whl", hash = "sha256:94037b5259701988954931333aafd39cf897e990852115656b014ce72e052e96"},
{file = "pywin32-304-cp38-cp38-win_amd64.whl", hash = "sha256:ead865a2e179b30fb717831f73cf4373401fc62fbc3455a0889a7ddac848f83e"},
{file = "pywin32-304-cp39-cp39-win32.whl", hash = "sha256:25746d841201fd9f96b648a248f731c1dec851c9a08b8e33da8b56148e4c65cc"},
{file = "pywin32-304-cp39-cp39-win_amd64.whl", hash = "sha256:d24a3382f013b21aa24a5cfbfad5a2cd9926610c0affde3e8ab5b3d7dbcf4ac9"},
]
pywin32-ctypes = [
{file = "pywin32-ctypes-0.2.0.tar.gz", hash = "sha256:24ffc3b341d457d48e8922352130cf2644024a4ff09762a2261fd34c36ee5942"},
{file = "pywin32_ctypes-0.2.0-py2.py3-none-any.whl", hash = "sha256:9dc2d991b3479cc2df15930958b674a48a227d5361d413827a4cfd0b5876fc98"},
]
pywinpty = [
{file = "pywinpty-2.0.7-cp310-none-win_amd64.whl", hash = "sha256:d56361ed2bd3395347882a7a4e6246359e745a233e89c91786ab3d9421323c17"},
{file = "pywinpty-2.0.7-cp37-none-win_amd64.whl", hash = "sha256:2d62ede3ed10feb0901b3b4667201766a741b6a2c69f27be623ba9fe9348447b"},
{file = "pywinpty-2.0.7-cp38-none-win_amd64.whl", hash = "sha256:c3b7e6a2f0e5f86e0dc5cb5e4fec7de19adacc6900232e4a48a2ecf04bae447f"},
{file = "pywinpty-2.0.7-cp39-none-win_amd64.whl", hash = "sha256:80a6713a586401c2a19efd2969ffd019eb85f18442611a3880e3d618887d2f84"},
{file = "pywinpty-2.0.7.tar.gz", hash = "sha256:f52b2e51c46dac40708ede1d42577f3ddb9d7cf8acaa36c8e27b3d3b975f4c95"},
]
pyzmq = [
{file = "pyzmq-23.2.1-cp310-cp310-macosx_10_15_universal2.whl", hash = "sha256:a3fd44b5046d247e7f0f1660bcafe7b5fb0db55d0934c05dd57dda9e1f823ce7"},
{file = "pyzmq-23.2.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:2141e6798d5981be04c08996d27962086a1aa3ea536fe9cf7e89817fd4523f86"},
{file = "pyzmq-23.2.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9a39ddb0431a68954bd318b923230fa5b649c9c62b0e8340388820c5f1b15bd2"},
{file = "pyzmq-23.2.1-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e06747014a5ad1b28cebf5bc1ddcdaccfb44e9b441d35e6feb1286c8a72e54be"},
{file = "pyzmq-23.2.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7e0113d70b095339e99bb522fe7294f5ae6a7f3b2b8f52f659469a74b5cc7661"},
{file = "pyzmq-23.2.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:71b32a1e827bdcbf73750e60370d3b07685816ff3d8695f450f0f8c3226503f8"},
{file = "pyzmq-23.2.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:55568a020ad2cae9ae36da6058e7ca332a56df968f601cbdb7cf6efb2a77579a"},
{file = "pyzmq-23.2.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:8c02a0cd39dc01659b3d6cb70bb3a41aebd9885fd78239acdd8d9c91351c4568"},
{file = "pyzmq-23.2.1-cp310-cp310-win32.whl", hash = "sha256:e1fe30bcd5aea5948c42685fad910cd285eacb2518ea4dc6c170d6b535bee95d"},
{file = "pyzmq-23.2.1-cp310-cp310-win_amd64.whl", hash = "sha256:650389bbfca73955b262b2230423d89992f38ec48033307ae80e700eaa2fbb63"},
{file = "pyzmq-23.2.1-cp311-cp311-macosx_10_15_universal2.whl", hash = "sha256:e753eee6d3b93c5354e8ba0a1d62956ee49355f0a36e00570823ef64e66183f5"},
{file = "pyzmq-23.2.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:f07016e3cf088dbfc6e7c5a7b3f540db5c23b0190d539e4fd3e2b5e6beffa4b5"},
{file = "pyzmq-23.2.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4805af9614b0b41b7e57d17673459facf85604dac502a5a9244f6e8c9a4de658"},
{file = "pyzmq-23.2.1-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:39dd252b683816935702825e5bf775df16090619ced9bb4ba68c2d0b6f0c9b18"},
{file = "pyzmq-23.2.1-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:84678153432241bcdca2210cf4ff83560b200556867aea913ffbb960f5d5f340"},
{file = "pyzmq-23.2.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:90d88f9d9a2ae6cfb1dc4ea2d1710cdf6456bc1b9a06dd1bb485c5d298f2517e"},
{file = "pyzmq-23.2.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:794871988c34727c7f79bdfe2546e6854ae1fa2e1feb382784f23a9c6c63ecb3"},
{file = "pyzmq-23.2.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:c56b1a62a1fb87565343c57b6743fd5da6e138b8c6562361d7d9b5ce4acf399a"},
{file = "pyzmq-23.2.1-cp311-cp311-win32.whl", hash = "sha256:c3ebf1668664d20c8f7d468955f18379b7d1f7bc8946b13243d050fa3888c7ff"},
{file = "pyzmq-23.2.1-cp311-cp311-win_amd64.whl", hash = "sha256:ec9803aca9491fd6f0d853d2a6147f19f8deaaa23b1b713d05c5d09e56ea7142"},
{file = "pyzmq-23.2.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:385609812eafd9970c3752c51f2f6c4f224807e3e441bcfd8c8273877d00c8a8"},
{file = "pyzmq-23.2.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b861db65f6b8906c8d6db51dde2448f266f0c66bf28db2c37aea50f58a849859"},
{file = "pyzmq-23.2.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:6b1e79bba24f6df1712e3188d5c32c480d8eda03e8ecff44dc8ecb0805fa62f3"},
{file = "pyzmq-23.2.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:8dc66f109a245653b19df0f44a5af7a3f14cb8ad6c780ead506158a057bd36ce"},
{file = "pyzmq-23.2.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:b815991c7d024bf461f358ad871f2be1135576274caed5749c4828859e40354e"},
{file = "pyzmq-23.2.1-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:29b74774a0bfd3c4d98ac853f0bdca55bd9ec89d5b0def5486407cca54472ef8"},
{file = "pyzmq-23.2.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:4bb798bef181648827019001f6be43e1c48b34b477763b37a8d27d8c06d197b8"},
{file = "pyzmq-23.2.1-cp36-cp36m-win32.whl", hash = "sha256:565bd5ab81f6964fc4067ccf2e00877ad0fa917308975694bbb54378389215f8"},
{file = "pyzmq-23.2.1-cp36-cp36m-win_amd64.whl", hash = "sha256:1f368a82b29f80071781b20663c0fc0c8f6b13273f9f5abe1526af939534f90f"},
{file = "pyzmq-23.2.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:c9cfaf530e6a7ff65f0afe275e99f983f68b54dfb23ea401f0bc297a632766b6"},
{file = "pyzmq-23.2.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5c558b50402fca1acc94329c5d8f12aa429738904a5cfb32b9ed3c61235221bb"},
{file = "pyzmq-23.2.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:20bafc4095eab00f41a510579363a3f5e1f5c69d7ee10f1d88895c4df0259183"},
{file = "pyzmq-23.2.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:f619fd38fc2641abfb53cca719c165182500600b82c695cc548a0f05f764be05"},
{file = "pyzmq-23.2.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:044447ae4b2016a6b8697571fd633f799f860b19b76c4a2fd9b1140d52ee6745"},
{file = "pyzmq-23.2.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:49d30ba7074f469e8167917abf9eb854c6503ae10153034a6d4df33618f1db5f"},
{file = "pyzmq-23.2.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:48400b96788cdaca647021bf19a9cd668384f46e4d9c55cf045bdd17f65299c8"},
{file = "pyzmq-23.2.1-cp37-cp37m-win32.whl", hash = "sha256:8a68f57b7a3f7b6b52ada79876be1efb97c8c0952423436e84d70cc139f16f0d"},
{file = "pyzmq-23.2.1-cp37-cp37m-win_amd64.whl", hash = "sha256:9e5bf6e7239fc9687239de7a283aa8b801ab85371116045b33ae20132a1325d6"},
{file = "pyzmq-23.2.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:ffc6b1623d0f9affb351db4ca61f432dca3628a5ee015f9bf2bfbe9c6836881c"},
{file = "pyzmq-23.2.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:4d6f110c56f7d5b4d64dde3a382ae61b6d48174e30742859d8e971b18b6c9e5c"},
{file = "pyzmq-23.2.1-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:9269fbfe3a4eb2009199120861c4571ef1655fdf6951c3e7f233567c94e8c602"},
{file = "pyzmq-23.2.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:12e62ff0d5223ec09b597ab6d73858b9f64a51221399f3cb08aa495e1dff7935"},
{file = "pyzmq-23.2.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:6fd5d0d50cbcf4bc376861529a907bed026a4cbe8c22a500ff8243231ef02433"},
{file = "pyzmq-23.2.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:9d0ab2936085c85a1fc6f9fd8f89d5235ae99b051e90ec5baa5e73ad44346e1f"},
{file = "pyzmq-23.2.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:022cf5ea7bcaa8a06a03c2706e0ae66904b6138b2155577cd34c64bc7cc637ab"},
{file = "pyzmq-23.2.1-cp38-cp38-win32.whl", hash = "sha256:28dbdb90b2f6b131f8f10e6081012e4e25234213433420e67e0c1162de537113"},
{file = "pyzmq-23.2.1-cp38-cp38-win_amd64.whl", hash = "sha256:10d1910ec381b851aeb024a042a13db178cb1edf125e76a4e9d2548ad103aadb"},
{file = "pyzmq-23.2.1-cp39-cp39-macosx_10_15_universal2.whl", hash = "sha256:99a5a77a10863493a1ee8dece02578c6b32025fb3afff91b40476bc489e81648"},
{file = "pyzmq-23.2.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:aecd6ceaccc4b594e0092d6513ef3f1c0fa678dd89f86bb8ff1a47014b8fca35"},
{file = "pyzmq-23.2.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:415ff62ac525d9add1e3550430a09b9928d2d24a20cc4ce809e67caac41219ab"},
{file = "pyzmq-23.2.1-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:67975a9e1237b9ccc78f457bef17691bbdd2055a9d26e81ee914ba376846d0ce"},
{file = "pyzmq-23.2.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:38e106b64bad744fe469dc3dd864f2764d66399178c1bf39d45294cc7980f14f"},
{file = "pyzmq-23.2.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:8c842109d31a9281d678f668629241c405928afbebd913c48a5a8e7aee61f63d"},
{file = "pyzmq-23.2.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:fefdf9b685fda4141b95ebec975946076a5e0723ff70b037032b2085c5317684"},
{file = "pyzmq-23.2.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:79a87831b47a9f6161ad23fa5e89d5469dc585abc49f90b9b07fea8905ae1234"},
{file = "pyzmq-23.2.1-cp39-cp39-win32.whl", hash = "sha256:342ca3077f47ec2ee41b9825142b614e03e026347167cbc72a59b618c4f6106c"},
{file = "pyzmq-23.2.1-cp39-cp39-win_amd64.whl", hash = "sha256:5e05492be125dce279721d6b54fd1b956546ecc4bcdfcf8e7b4c413bc0874c10"},
{file = "pyzmq-23.2.1-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:07ed8aaf7ffe150af873269690cc654ffeca7491f62aae0f3821baa181f8d5fe"},
{file = "pyzmq-23.2.1-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:ad28ddb40db8e450d7d4bf8a1d765d3f87b63b10e7e9a825a3c130c6371a8c03"},
{file = "pyzmq-23.2.1-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:2f67b63f53c6994d601404fd1a329e6d940ac3dd1d92946a93b2b9c70df67b9f"},
{file = "pyzmq-23.2.1-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c890309296f53f9aa32ffcfc51d805705e1982bffd27c9692a8f1e1b8de279f4"},
{file = "pyzmq-23.2.1-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:624fd38071a817644acdae075b92a23ea0bdd126a58148288e8284d23ec361ce"},
{file = "pyzmq-23.2.1-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:a114992a193577cb62233abf8cb2832970f9975805a64740e325d2f895e7f85a"},
{file = "pyzmq-23.2.1-pp38-pypy38_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:c780acddd2934c6831ff832ecbf78a45a7b62d4eb216480f863854a8b7d54fa7"},
{file = "pyzmq-23.2.1-pp38-pypy38_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:d904f6595acfaaf99a1a61881fea068500c40374d263e5e073aa4005e5f9c28a"},
{file = "pyzmq-23.2.1-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:929d548b74c0f82f7f95b54e4a43f9e4ce2523cfb8a54d3f7141e45652304b2a"},
{file = "pyzmq-23.2.1-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:f392cbea531b7142d1958c0d4a0c9c8d760dc451e5848d8dd3387804d3e3e62c"},
{file = "pyzmq-23.2.1-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:a0f09d85c45f58aa8e715b42f8b26beba68b3b63a8f7049113478aca26efbc30"},
{file = "pyzmq-23.2.1-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:23e708fbfdf4ee3107422b69ca65da1b9f056b431fc0888096a8c1d6cd908e8f"},
{file = "pyzmq-23.2.1-pp39-pypy39_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:35e635343ff367f697d00fa1484262bb68e36bc74c9b80737eac5a1e04c4e1b1"},
{file = "pyzmq-23.2.1-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:efb9e38b2a590282704269585de7eb33bf43dc294cad092e1b172e23d4c217e5"},
{file = "pyzmq-23.2.1-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:407f909c4e8fde62fbdad9ebd448319792258cc0550c2815567a4d9d8d9e6d18"},
{file = "pyzmq-23.2.1.tar.gz", hash = "sha256:2b381aa867ece7d0a82f30a0c7f3d4387b7cf2e0697e33efaa5bed6c5784abcd"},
]
qtconsole = [
{file = "qtconsole-5.3.1-py3-none-any.whl", hash = "sha256:d364592d7ede3257f1e17fcdbfd339c26e2cc638ca4fa4ee56a724e26ea13c81"},
{file = "qtconsole-5.3.1.tar.gz", hash = "sha256:b73723fac43938b684dcb237a88510dc7721c43a726cea8ade179a2927c0a2f3"},
]
qtpy = [
{file = "QtPy-2.2.0-py3-none-any.whl", hash = "sha256:d283cfba378b0dbe36a55b68aea8ee2f86cd6ccf06c023af25bbe705ffbb29e5"},
{file = "QtPy-2.2.0.tar.gz", hash = "sha256:d85f1b121f24a41ad26c55c446e66abdb7c528839f8c4f11f156ec4541903914"},
]
readme-renderer = [
{file = "readme_renderer-36.0-py3-none-any.whl", hash = "sha256:2c37e472ca96755caba6cc58bcbf673a5574bc033385a2ac91d85dfef2799876"},
{file = "readme_renderer-36.0.tar.gz", hash = "sha256:f71aeef9a588fcbed1f4cc001ba611370e94a0cd27c75b1140537618ec78f0a2"},
]
requests = [
{file = "requests-2.28.1-py3-none-any.whl", hash = "sha256:8fefa2a1a1365bf5520aac41836fbee479da67864514bdb821f31ce07ce65349"},
{file = "requests-2.28.1.tar.gz", hash = "sha256:7c5599b102feddaa661c826c56ab4fee28bfd17f5abca1ebbe3e7f19d7c97983"},
]
requests-oauthlib = [
{file = "requests-oauthlib-1.3.1.tar.gz", hash = "sha256:75beac4a47881eeb94d5ea5d6ad31ef88856affe2332b9aafb52c6452ccf0d7a"},
{file = "requests_oauthlib-1.3.1-py2.py3-none-any.whl", hash = "sha256:2577c501a2fb8d05a304c09d090d6e47c306fef15809d102b327cf8364bddab5"},
]
requests-toolbelt = [
{file = "requests-toolbelt-0.9.1.tar.gz", hash = "sha256:968089d4584ad4ad7c171454f0a5c6dac23971e9472521ea3b6d49d610aa6fc0"},
{file = "requests_toolbelt-0.9.1-py2.py3-none-any.whl", hash = "sha256:380606e1d10dc85c3bd47bf5a6095f815ec007be7a8b69c878507068df059e6f"},
]
rfc3986 = [
{file = "rfc3986-2.0.0-py2.py3-none-any.whl", hash = "sha256:50b1502b60e289cb37883f3dfd34532b8873c7de9f49bb546641ce9cbd256ebd"},
{file = "rfc3986-2.0.0.tar.gz", hash = "sha256:97aacf9dbd4bfd829baad6e6309fa6573aaf1be3f6fa735c8ab05e46cecb261c"},
]
rich = [
{file = "rich-12.5.1-py3-none-any.whl", hash = "sha256:2eb4e6894cde1e017976d2975ac210ef515d7548bc595ba20e195fb9628acdeb"},
{file = "rich-12.5.1.tar.gz", hash = "sha256:63a5c5ce3673d3d5fbbf23cd87e11ab84b6b451436f1b7f19ec54b6bc36ed7ca"},
]
rpy2 = [
{file = "rpy2-3.5.3-cp310-cp310-macosx_10_15_x86_64.whl", hash = "sha256:9b553e92db1b6ffa51ea13f7b1b509816fdec2fc833ebe0b402ee81c8a56e491"},
{file = "rpy2-3.5.3-cp37-cp37m-macosx_10_15_x86_64.whl", hash = "sha256:9fb8f0914458c212dd2dd231ba0b7e849ca2b473f53da48553bc7e90976a9717"},
{file = "rpy2-3.5.3-cp38-cp38-macosx_10_15_x86_64.whl", hash = "sha256:b9c8275d7720c312c387d61a38afe22ef3165053ec707a5cb3a06b7ace5bb10c"},
{file = "rpy2-3.5.3-cp39-cp39-macosx_10_15_x86_64.whl", hash = "sha256:e1fda1489dd297fb8f44d33e6042e56a4517109b5606b2ac59bed9a2b7e29850"},
{file = "rpy2-3.5.3.tar.gz", hash = "sha256:53a092d48b44f46428fb30cb3155664d6d2f7af08ebc4c45df98df4c45a42ccb"},
]
rsa = [
{file = "rsa-4.9-py3-none-any.whl", hash = "sha256:90260d9058e514786967344d0ef75fa8727eed8a7d2e43ce9f4bcf1b536174f7"},
{file = "rsa-4.9.tar.gz", hash = "sha256:e38464a49c6c85d7f1351b0126661487a7e0a14a50f1675ec50eb34d4f20ef21"},
]
scikit-learn = [
{file = "scikit-learn-1.0.2.tar.gz", hash = "sha256:b5870959a5484b614f26d31ca4c17524b1b0317522199dc985c3b4256e030767"},
{file = "scikit_learn-1.0.2-cp310-cp310-macosx_10_13_x86_64.whl", hash = "sha256:da3c84694ff693b5b3194d8752ccf935a665b8b5edc33a283122f4273ca3e687"},
{file = "scikit_learn-1.0.2-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:75307d9ea39236cad7eea87143155eea24d48f93f3a2f9389c817f7019f00705"},
{file = "scikit_learn-1.0.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f14517e174bd7332f1cca2c959e704696a5e0ba246eb8763e6c24876d8710049"},
{file = "scikit_learn-1.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d9aac97e57c196206179f674f09bc6bffcd0284e2ba95b7fe0b402ac3f986023"},
{file = "scikit_learn-1.0.2-cp310-cp310-win_amd64.whl", hash = "sha256:d93d4c28370aea8a7cbf6015e8a669cd5d69f856cc2aa44e7a590fb805bb5583"},
{file = "scikit_learn-1.0.2-cp37-cp37m-macosx_10_13_x86_64.whl", hash = "sha256:85260fb430b795d806251dd3bb05e6f48cdc777ac31f2bcf2bc8bbed3270a8f5"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:a053a6a527c87c5c4fa7bf1ab2556fa16d8345cf99b6c5a19030a4a7cd8fd2c0"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:245c9b5a67445f6f044411e16a93a554edc1efdcce94d3fc0bc6a4b9ac30b752"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:158faf30684c92a78e12da19c73feff9641a928a8024b4fa5ec11d583f3d8a87"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:08ef968f6b72033c16c479c966bf37ccd49b06ea91b765e1cc27afefe723920b"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:16455ace947d8d9e5391435c2977178d0ff03a261571e67f627c8fee0f9d431a"},
{file = "scikit_learn-1.0.2-cp37-cp37m-win32.whl", hash = "sha256:2f3b453e0b149898577e301d27e098dfe1a36943f7bb0ad704d1e548efc3b448"},
{file = "scikit_learn-1.0.2-cp37-cp37m-win_amd64.whl", hash = "sha256:46f431ec59dead665e1370314dbebc99ead05e1c0a9df42f22d6a0e00044820f"},
{file = "scikit_learn-1.0.2-cp38-cp38-macosx_10_13_x86_64.whl", hash = "sha256:ff3fa8ea0e09e38677762afc6e14cad77b5e125b0ea70c9bba1992f02c93b028"},
{file = "scikit_learn-1.0.2-cp38-cp38-macosx_12_0_arm64.whl", hash = "sha256:9369b030e155f8188743eb4893ac17a27f81d28a884af460870c7c072f114243"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:7d6b2475f1c23a698b48515217eb26b45a6598c7b1840ba23b3c5acece658dbb"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:285db0352e635b9e3392b0b426bc48c3b485512d3b4ac3c7a44ec2a2ba061e66"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5cb33fe1dc6f73dc19e67b264dbb5dde2a0539b986435fdd78ed978c14654830"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b1391d1a6e2268485a63c3073111fe3ba6ec5145fc957481cfd0652be571226d"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bc3744dabc56b50bec73624aeca02e0def06b03cb287de26836e730659c5d29c"},
{file = "scikit_learn-1.0.2-cp38-cp38-win32.whl", hash = "sha256:a999c9f02ff9570c783069f1074f06fe7386ec65b84c983db5aeb8144356a355"},
{file = "scikit_learn-1.0.2-cp38-cp38-win_amd64.whl", hash = "sha256:7626a34eabbf370a638f32d1a3ad50526844ba58d63e3ab81ba91e2a7c6d037e"},
{file = "scikit_learn-1.0.2-cp39-cp39-macosx_10_13_x86_64.whl", hash = "sha256:a90b60048f9ffdd962d2ad2fb16367a87ac34d76e02550968719eb7b5716fd10"},
{file = "scikit_learn-1.0.2-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:7a93c1292799620df90348800d5ac06f3794c1316ca247525fa31169f6d25855"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:eabceab574f471de0b0eb3f2ecf2eee9f10b3106570481d007ed1c84ebf6d6a1"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:55f2f3a8414e14fbee03782f9fe16cca0f141d639d2b1c1a36779fa069e1db57"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:80095a1e4b93bd33261ef03b9bc86d6db649f988ea4dbcf7110d0cded8d7213d"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fa38a1b9b38ae1fad2863eff5e0d69608567453fdfc850c992e6e47eb764e846"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ff746a69ff2ef25f62b36338c615dd15954ddc3ab8e73530237dd73235e76d62"},
{file = "scikit_learn-1.0.2-cp39-cp39-win32.whl", hash = "sha256:e174242caecb11e4abf169342641778f68e1bfaba80cd18acd6bc84286b9a534"},
{file = "scikit_learn-1.0.2-cp39-cp39-win_amd64.whl", hash = "sha256:b54a62c6e318ddbfa7d22c383466d38d2ee770ebdb5ddb668d56a099f6eaf75f"},
]
scipy = [
{file = "scipy-1.9.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:0424d1bbbfa51d5ddaa16d067fd593863c9f2fb7c6840c32f8a08a8832f8e7a4"},
{file = "scipy-1.9.0-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:8f2232c9d9119ec356240255a715a289b3a33be828c3e4abac11fd052ce15b1e"},
{file = "scipy-1.9.0-cp310-cp310-macosx_12_0_universal2.macosx_10_9_x86_64.whl", hash = "sha256:e2004d2a3c397b26ca78e67c9d320153a1a9b71ae713ad33f4a3a3ab3d79cc65"},
{file = "scipy-1.9.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:45f0d6c0d6e55582d3b8f5c58ad4ca4259a02affb190f89f06c8cc02e21bba81"},
{file = "scipy-1.9.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:79dd7876614fc2869bf5d311ef33962d2066ea888bc66c80fd4fa80f8772e5a9"},
{file = "scipy-1.9.0-cp310-cp310-win_amd64.whl", hash = "sha256:10417935486b320d98536d732a58362e3d37e84add98c251e070c59a6bfe0863"},
{file = "scipy-1.9.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:adb6c438c6ef550e2bb83968e772b9690cb421f2c6073f9c2cb6af15ee538bc9"},
{file = "scipy-1.9.0-cp38-cp38-macosx_12_0_arm64.whl", hash = "sha256:8d541db2d441ef87afb60c4a2addb00c3af281633602a4967e733ef4b7050504"},
{file = "scipy-1.9.0-cp38-cp38-macosx_12_0_universal2.macosx_10_9_x86_64.whl", hash = "sha256:97a1f1e51ea30782d7baa8d0c52f72c3f9f05cb609cf1b990664231c5102bccd"},
{file = "scipy-1.9.0-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:16207622570af10f9e6a2cdc7da7a9660678852477adbcd056b6d1057a036fef"},
{file = "scipy-1.9.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bb687d245b6963673c639f318eea7e875d1ba147a67925586abed3d6f39bb7d8"},
{file = "scipy-1.9.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:73b704c5eea9be811919cae4caacf3180dd9212d9aed08477c1d2ba14900a9de"},
{file = "scipy-1.9.0-cp38-cp38-win32.whl", hash = "sha256:12005d30894e4fe7b247f7233ba0801a341f887b62e2eb99034dd6f2a8a33ad6"},
{file = "scipy-1.9.0-cp38-cp38-win_amd64.whl", hash = "sha256:fc58c3fcb8a724b703ffbc126afdca5a8353d4d5945d5c92db85617e165299e7"},
{file = "scipy-1.9.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:01c2015e132774feefe059d5354055fec6b751d7a7d70ad2cf5ce314e7426e2a"},
{file = "scipy-1.9.0-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:f7c3c578ff556333f3890c2df6c056955d53537bb176698359088108af73a58f"},
{file = "scipy-1.9.0-cp39-cp39-macosx_12_0_universal2.macosx_10_9_x86_64.whl", hash = "sha256:e2ac088ea4aa61115b96b47f5f3d94b3fa29554340b6629cd2bfe6b0521ee33b"},
{file = "scipy-1.9.0-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:5d1b9cf3771fd921f7213b4b886ab2606010343bb36259b544a816044576d69e"},
{file = "scipy-1.9.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d3a326673ac5afa9ef5613a61626b9ec15c8f7222b4ecd1ce0fd8fcba7b83c59"},
{file = "scipy-1.9.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:693b3fe2e7736ce0dbc72b4d933798eb6ca8ce51b8b934e3f547cc06f48b2afb"},
{file = "scipy-1.9.0-cp39-cp39-win32.whl", hash = "sha256:7bad16b91918bf3288089a78a4157e04892ea6475fb7a1d9bcdf32c30c8a3dba"},
{file = "scipy-1.9.0-cp39-cp39-win_amd64.whl", hash = "sha256:bd490f77f35800d5620f4d9af669e372d9a88db1f76ef219e1609cc4ecdd1a24"},
{file = "scipy-1.9.0.tar.gz", hash = "sha256:c0dfd7d2429452e7e94904c6a3af63cbaa3cf51b348bd9d35b42db7e9ad42791"},
]
seaborn = [
{file = "seaborn-0.11.2-py3-none-any.whl", hash = "sha256:85a6baa9b55f81a0623abddc4a26b334653ff4c6b18c418361de19dbba0ef283"},
{file = "seaborn-0.11.2.tar.gz", hash = "sha256:cf45e9286d40826864be0e3c066f98536982baf701a7caa386511792d61ff4f6"},
]
secretstorage = [
{file = "SecretStorage-3.3.3-py3-none-any.whl", hash = "sha256:f356e6628222568e3af06f2eba8df495efa13b3b63081dafd4f7d9a7b7bc9f99"},
{file = "SecretStorage-3.3.3.tar.gz", hash = "sha256:2403533ef369eca6d2ba81718576c5e0f564d5cca1b58f73a8b23e7d4eeebd77"},
]
send2trash = [
{file = "Send2Trash-1.8.0-py3-none-any.whl", hash = "sha256:f20eaadfdb517eaca5ce077640cb261c7d2698385a6a0f072a4a5447fd49fa08"},
{file = "Send2Trash-1.8.0.tar.gz", hash = "sha256:d2c24762fd3759860a0aff155e45871447ea58d2be6bdd39b5c8f966a0c99c2d"},
]
setuptools-scm = [
{file = "setuptools_scm-6.4.2-py3-none-any.whl", hash = "sha256:acea13255093849de7ccb11af9e1fb8bde7067783450cee9ef7a93139bddf6d4"},
{file = "setuptools_scm-6.4.2.tar.gz", hash = "sha256:6833ac65c6ed9711a4d5d2266f8024cfa07c533a0e55f4c12f6eff280a5a9e30"},
]
shap = [
{file = "shap-0.39.0-cp36-cp36m-win_amd64.whl", hash = "sha256:bf9af9b089ef95cb1ac0df80a43f8144aa9095d10f282cb5c19643ff88a6a79d"},
{file = "shap-0.39.0-cp37-cp37m-win_amd64.whl", hash = "sha256:b44f9fbb7349f5406b98b4ec24c672f8fe932606bb7574a8aae2238410c55289"},
{file = "shap-0.39.0-cp38-cp38-win_amd64.whl", hash = "sha256:c0d51b44c15eae1c12e51ed498f898cfc5e12d6be7e0d4f733ce6453f6ec85a4"},
{file = "shap-0.39.0.tar.gz", hash = "sha256:0196a6c12cc98f8b48ce9c5968550902432b80290da6fa7be8655441a1c6251a"},
]
six = [
{file = "six-1.16.0-py2.py3-none-any.whl", hash = "sha256:8abb2f1d86890a2dfb989f9a77cfcfd3e47c2a354b01111771326f8aa26e0254"},
{file = "six-1.16.0.tar.gz", hash = "sha256:1e61c37477a1626458e36f7b1d82aa5c9b094fa4802892072e49de9c60c4c926"},
]
slicer = [
{file = "slicer-0.0.7-py3-none-any.whl", hash = "sha256:0b94faa5251c0f23782c03f7b7eedda91d80144059645f452c4bc80fab875976"},
{file = "slicer-0.0.7.tar.gz", hash = "sha256:f5d5f7b45f98d155b9c0ba6554fa9770c6b26d5793a3e77a1030fb56910ebeec"},
]
snowballstemmer = [
{file = "snowballstemmer-2.2.0-py2.py3-none-any.whl", hash = "sha256:c8e1716e83cc398ae16824e5572ae04e0d9fc2c6b985fb0f900f5f0c96ecba1a"},
{file = "snowballstemmer-2.2.0.tar.gz", hash = "sha256:09b16deb8547d3412ad7b590689584cd0fe25ec8db3be37788be3810cbf19cb1"},
]
soupsieve = [
{file = "soupsieve-2.3.2.post1-py3-none-any.whl", hash = "sha256:3b2503d3c7084a42b1ebd08116e5f81aadfaea95863628c80a3b774a11b7c759"},
{file = "soupsieve-2.3.2.post1.tar.gz", hash = "sha256:fc53893b3da2c33de295667a0e19f078c14bf86544af307354de5fcf12a3f30d"},
]
sparse = [
{file = "sparse-0.13.0-py2.py3-none-any.whl", hash = "sha256:95ed0b649a0663b1488756ad4cf242b0a9bb2c9a25bc752a7c6ca9fbe8258966"},
{file = "sparse-0.13.0.tar.gz", hash = "sha256:685dc994aa770ee1b23f2d5392819c8429f27958771f8dceb2c4fb80210d5915"},
]
sphinx = [
{file = "Sphinx-5.1.1-py3-none-any.whl", hash = "sha256:309a8da80cb6da9f4713438e5b55861877d5d7976b69d87e336733637ea12693"},
{file = "Sphinx-5.1.1.tar.gz", hash = "sha256:ba3224a4e206e1fbdecf98a4fae4992ef9b24b85ebf7b584bb340156eaf08d89"},
]
sphinx-multiversion = []
sphinx-rtd-theme = [
{file = "sphinx_rtd_theme-1.0.0-py2.py3-none-any.whl", hash = "sha256:4d35a56f4508cfee4c4fb604373ede6feae2a306731d533f409ef5c3496fdbd8"},
{file = "sphinx_rtd_theme-1.0.0.tar.gz", hash = "sha256:eec6d497e4c2195fa0e8b2016b337532b8a699a68bcb22a512870e16925c6a5c"},
]
sphinxcontrib-applehelp = [
{file = "sphinxcontrib-applehelp-1.0.2.tar.gz", hash = "sha256:a072735ec80e7675e3f432fcae8610ecf509c5f1869d17e2eecff44389cdbc58"},
{file = "sphinxcontrib_applehelp-1.0.2-py2.py3-none-any.whl", hash = "sha256:806111e5e962be97c29ec4c1e7fe277bfd19e9652fb1a4392105b43e01af885a"},
]
sphinxcontrib-devhelp = [
{file = "sphinxcontrib-devhelp-1.0.2.tar.gz", hash = "sha256:ff7f1afa7b9642e7060379360a67e9c41e8f3121f2ce9164266f61b9f4b338e4"},
{file = "sphinxcontrib_devhelp-1.0.2-py2.py3-none-any.whl", hash = "sha256:8165223f9a335cc1af7ffe1ed31d2871f325254c0423bc0c4c7cd1c1e4734a2e"},
]
sphinxcontrib-googleanalytics = []
sphinxcontrib-htmlhelp = [
{file = "sphinxcontrib-htmlhelp-2.0.0.tar.gz", hash = "sha256:f5f8bb2d0d629f398bf47d0d69c07bc13b65f75a81ad9e2f71a63d4b7a2f6db2"},
{file = "sphinxcontrib_htmlhelp-2.0.0-py2.py3-none-any.whl", hash = "sha256:d412243dfb797ae3ec2b59eca0e52dac12e75a241bf0e4eb861e450d06c6ed07"},
]
sphinxcontrib-jsmath = [
{file = "sphinxcontrib-jsmath-1.0.1.tar.gz", hash = "sha256:a9925e4a4587247ed2191a22df5f6970656cb8ca2bd6284309578f2153e0c4b8"},
{file = "sphinxcontrib_jsmath-1.0.1-py2.py3-none-any.whl", hash = "sha256:2ec2eaebfb78f3f2078e73666b1415417a116cc848b72e5172e596c871103178"},
]
sphinxcontrib-qthelp = [
{file = "sphinxcontrib-qthelp-1.0.3.tar.gz", hash = "sha256:4c33767ee058b70dba89a6fc5c1892c0d57a54be67ddd3e7875a18d14cba5a72"},
{file = "sphinxcontrib_qthelp-1.0.3-py2.py3-none-any.whl", hash = "sha256:bd9fc24bcb748a8d51fd4ecaade681350aa63009a347a8c14e637895444dfab6"},
]
sphinxcontrib-serializinghtml = [
{file = "sphinxcontrib-serializinghtml-1.1.5.tar.gz", hash = "sha256:aa5f6de5dfdf809ef505c4895e51ef5c9eac17d0f287933eb49ec495280b6952"},
{file = "sphinxcontrib_serializinghtml-1.1.5-py2.py3-none-any.whl", hash = "sha256:352a9a00ae864471d3a7ead8d7d79f5fc0b57e8b3f95e9867eb9eb28999b92fd"},
]
stack-data = [
{file = "stack_data-0.4.0-py3-none-any.whl", hash = "sha256:b94fed36d725cfabc6d09ed5886913e35eed9009766a1af1d5941b9da3a94aaa"},
{file = "stack_data-0.4.0.tar.gz", hash = "sha256:a90ae7e260f7d15aefeceb46f0a028d4ccb9eb8856475c53e341945342d41ea7"},
]
statsmodels = [
{file = "statsmodels-0.13.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:3e7ca5b7e678c0bb7a24f5c735d58ac104a50eb61b17c484cce0e221a095560f"},
{file = "statsmodels-0.13.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:066a75d5585378b2df972f81a90b9a3da5e567b7d4833300c1597438c1a35e29"},
{file = "statsmodels-0.13.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f15f38dfc9c5c091662cb619e12322047368c67aef449c7554d9b324a15f7a94"},
{file = "statsmodels-0.13.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5c4ccc6b4744613367e8a233bd952c8a838db8f528f9fe033bda25aa13fc7d08"},
{file = "statsmodels-0.13.2-cp310-cp310-win_amd64.whl", hash = "sha256:855b1cc2a91ab140b9bcf304b1731705805ce73223bf500b988804968554c0ed"},
{file = "statsmodels-0.13.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:b69c9af7606325095f7c40c581957bad9f28775653d41537c1ec4cd1b185ff5b"},
{file = "statsmodels-0.13.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ab31bac0f72b83bca1f217a12ec6f309a56485a50c4a705fbdd63112213d4da4"},
{file = "statsmodels-0.13.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d680b910b57fc0aa87472662cdfe09aae0e21db4bdf19ccd6420fd4dffda892"},
{file = "statsmodels-0.13.2-cp37-cp37m-win32.whl", hash = "sha256:9e9a3f661d372431850d55157d049e079493c97fc06f550d23d8c8c70805cc48"},
{file = "statsmodels-0.13.2-cp37-cp37m-win_amd64.whl", hash = "sha256:c9f6326870c095ef688f072cd476b932aff0906d60193eaa08e93ec23b29ca83"},
{file = "statsmodels-0.13.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:5bc050f25f1ba1221efef9ea01b751c60935ad787fcd4259f4ece986f2da9141"},
{file = "statsmodels-0.13.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:426b1c8ea3918d3d27dbfa38f2bee36cabf41d32163e2cbb3adfb0178b24626a"},
{file = "statsmodels-0.13.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:45b80fac4a63308b1e93fa9dc27a8598930fd5dfd77c850ca077bb850254c6d7"},
{file = "statsmodels-0.13.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:78ee69ec0e0f79f627245c65f8a495b8581c2ea19084aac63941815feb15dcf3"},
{file = "statsmodels-0.13.2-cp38-cp38-win32.whl", hash = "sha256:20483cc30e11aa072b30d307bb80470f86a23ae8fffa51439ca54509d7aa9b05"},
{file = "statsmodels-0.13.2-cp38-cp38-win_amd64.whl", hash = "sha256:bf43051a92231ccb9de95e4b6d22d3b15e499ee5ee9bff0a20e6b6ad293e34cb"},
{file = "statsmodels-0.13.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6bf0dfed5f5edb59b5922b295392cd276463b10a5e730f7e57ee4ff2d8e9a87e"},
{file = "statsmodels-0.13.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:a403b559c5586dab7ac0fc9e754c737b017c96cce0ddd66ff9094764cdaf293d"},
{file = "statsmodels-0.13.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9f23554dd025ea354ce072ba32bfaa840d2b856372e5734290e181d27a1f9e0c"},
{file = "statsmodels-0.13.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:815f4df713e3eb6f40ae175c71f2a70d32f9219b5b4d23d4e0faab1171ba93ba"},
{file = "statsmodels-0.13.2-cp39-cp39-win32.whl", hash = "sha256:461c82ab2265fa8457b96afc23ef3ca19f42eb070436e0241b57e58a38863901"},
{file = "statsmodels-0.13.2-cp39-cp39-win_amd64.whl", hash = "sha256:39daab5a8a9332c8ea83d6464d065080c9ba65f236daf6a64aa18f64ef776fad"},
{file = "statsmodels-0.13.2.tar.gz", hash = "sha256:77dc292c9939c036a476f1770f9d08976b05437daa229928da73231147cde7d4"},
]
sympy = [
{file = "sympy-1.10.1-py3-none-any.whl", hash = "sha256:df75d738930f6fe9ebe7034e59d56698f29e85f443f743e51e47df0caccc2130"},
{file = "sympy-1.10.1.tar.gz", hash = "sha256:5939eeffdf9e152172601463626c022a2c27e75cf6278de8d401d50c9d58787b"},
]
tensorboard = [
{file = "tensorboard-2.9.1-py3-none-any.whl", hash = "sha256:baa727f791776f9e5841d347127720ceed4bbd59c36b40604b95fb2ae6029276"},
]
tensorboard-data-server = [
{file = "tensorboard_data_server-0.6.1-py3-none-any.whl", hash = "sha256:809fe9887682d35c1f7d1f54f0f40f98bb1f771b14265b453ca051e2ce58fca7"},
{file = "tensorboard_data_server-0.6.1-py3-none-macosx_10_9_x86_64.whl", hash = "sha256:fa8cef9be4fcae2f2363c88176638baf2da19c5ec90addb49b1cde05c95c88ee"},
{file = "tensorboard_data_server-0.6.1-py3-none-manylinux2010_x86_64.whl", hash = "sha256:d8237580755e58eff68d1f3abefb5b1e39ae5c8b127cc40920f9c4fb33f4b98a"},
]
tensorboard-plugin-wit = [
{file = "tensorboard_plugin_wit-1.8.1-py3-none-any.whl", hash = "sha256:ff26bdd583d155aa951ee3b152b3d0cffae8005dc697f72b44a8e8c2a77a8cbe"},
]
tensorflow = [
{file = "tensorflow-2.9.1-cp310-cp310-macosx_10_14_x86_64.whl", hash = "sha256:2c77edeb4a9d542032ddac93f732e3dd3d1c4d15c8795e4e43166ed46503582d"},
{file = "tensorflow-2.9.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:add2f4e55fd7209a5cb01d815364adc9827184d74a83c2f1616c72be2e85c8b7"},
{file = "tensorflow-2.9.1-cp310-cp310-win_amd64.whl", hash = "sha256:8be7d7f6091ac5af09c19eb178bbbd51346794684851a7bd57826d2d1cfa220e"},
{file = "tensorflow-2.9.1-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:b1f7674220b19ab5d39d9313f9c60278cef56052614600cba192700c658e502c"},
{file = "tensorflow-2.9.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:488a7a964298b92278cca593b7f687abd1c8f51982a08dc6ded5efd9457bf477"},
{file = "tensorflow-2.9.1-cp37-cp37m-win_amd64.whl", hash = "sha256:abbdcfa7c14974f9bafa8f51214c037639a8991eef3dbcafc506e608a673c54c"},
{file = "tensorflow-2.9.1-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:df048d216f90c9fcbda22bdd03420b0e0e6e7ee98b8b91f221afef46b7664635"},
{file = "tensorflow-2.9.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:adc1b80ec32337ba1720fadbf64242aae06a7ee1c7aea6a1a0cffe1968c89b25"},
{file = "tensorflow-2.9.1-cp38-cp38-win_amd64.whl", hash = "sha256:a27bffb91f0aef6ba2a30f6507932d9c6801af56cfee8442767f9f3f69bb37a6"},
{file = "tensorflow-2.9.1-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:81ad045d39cf4360d5e6183cdda0990661302cca502917a12e54177989b340b9"},
{file = "tensorflow-2.9.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:212a4d97fb6dedff257703d7a2ce50b0b8ec23d955938083363aa7bc526918cc"},
{file = "tensorflow-2.9.1-cp39-cp39-win_amd64.whl", hash = "sha256:1c2c6557f6bd7e0d3f1063e27346e4f1c4353db2b7751ac7451033b335fa4b7c"},
]
tensorflow-estimator = [
{file = "tensorflow_estimator-2.9.0-py2.py3-none-any.whl", hash = "sha256:e9762bb302f51bc1eb2f35d19f0190a6a2d809d754d5def788c4328fe3746744"},
]
tensorflow-io-gcs-filesystem = [
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp310-cp310-macosx_10_14_x86_64.whl", hash = "sha256:4222a9d0c0ddeca2fd2bfd70f5ed149346f5ba12ffe65d817d8e18393341d8e2"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:5457eeef1f0f5f294225808b2290a251a2e4639ec66db9d32aa4ae62e807d7e8"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4c71cebb26ce10e6e48dc46e6fc0acef5329b01f75a5e76c7defb77175bf97f7"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp310-cp310-win_amd64.whl", hash = "sha256:1c165595c7a67668b44c7ffb9746ffb351c630940d9cca7f2b31f8adf7a36b94"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:c20e1f95b904f43ac86fdb251f222be2c3e7026e9ddbde2a3b6a456f26a83944"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:1cccdc12ec304a7ab3e6f85919ba5a77c2bf751b3d0f9e62196ee7df11a8136a"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:94645cac4449dd2ccc40327c23d0256cf4e96597e5a55116a91076e9dc96023e"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp37-cp37m-win_amd64.whl", hash = "sha256:ce0d7eaaebfcb5fdcff161af0e8a4b94d5dc346299111c08373d66058011a16d"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:e9569dadd79b2d4b28dbe5be47c378a884414a85c89eaeae6115bcba4f3cbb96"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:84a463e228cde296fc63672902a2eceac9fec5f8ae7605e9f18824db591e7f5c"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:531214e48ef64a96f565550b283e75cf0119abff14048a11a25453b47ec5b61c"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp38-cp38-win_amd64.whl", hash = "sha256:44b28c9c6a9e25774a53ec2e85ed4d0b5c4db3a7d3a4011ade94fa9ee636393c"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:09f9df13737e2b4d92b73653509281d77732ef9a90a1ebef824511ce5431eb0a"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:c71438e6459f52462b95f98ab17b20cd1a269a1efe837e4df426a0b79359f3b7"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bd285595afe03740553710ccdbd1397d69a8e48d758c731c0de1f1c5a71a9fe5"},
{file = "tensorflow_io_gcs_filesystem-0.26.0-cp39-cp39-win_amd64.whl", hash = "sha256:2940b4ab6848ef5ec34dc3c140b5ae9eba0da13453da839c30ebe3461a6eb51d"},
]
termcolor = [
{file = "termcolor-1.1.0.tar.gz", hash = "sha256:1d6d69ce66211143803fbc56652b41d73b4a400a2891d7bf7a1cdf4c02de613b"},
]
terminado = [
{file = "terminado-0.15.0-py3-none-any.whl", hash = "sha256:0d5f126fbfdb5887b25ae7d9d07b0d716b1cc0ccaacc71c1f3c14d228e065197"},
{file = "terminado-0.15.0.tar.gz", hash = "sha256:ab4eeedccfcc1e6134bfee86106af90852c69d602884ea3a1e8ca6d4486e9bfe"},
]
threadpoolctl = [
{file = "threadpoolctl-3.1.0-py3-none-any.whl", hash = "sha256:8b99adda265feb6773280df41eece7b2e6561b772d21ffd52e372f999024907b"},
{file = "threadpoolctl-3.1.0.tar.gz", hash = "sha256:a335baacfaa4400ae1f0d8e3a58d6674d2f8828e3716bb2802c44955ad391380"},
]
tinycss2 = [
{file = "tinycss2-1.1.1-py3-none-any.whl", hash = "sha256:fe794ceaadfe3cf3e686b22155d0da5780dd0e273471a51846d0a02bc204fec8"},
{file = "tinycss2-1.1.1.tar.gz", hash = "sha256:b2e44dd8883c360c35dd0d1b5aad0b610e5156c2cb3b33434634e539ead9d8bf"},
]
tomli = [
{file = "tomli-2.0.1-py3-none-any.whl", hash = "sha256:939de3e7a6161af0c887ef91b7d41a53e7c5a1ca976325f429cb46ea9bc30ecc"},
{file = "tomli-2.0.1.tar.gz", hash = "sha256:de526c12914f0c550d15924c62d72abc48d6fe7364aa87328337a31007fe8a4f"},
]
torch = [
{file = "torch-1.12.1-cp310-cp310-manylinux1_x86_64.whl", hash = "sha256:9c038662db894a23e49e385df13d47b2a777ffd56d9bcd5b832593fab0a7e286"},
{file = "torch-1.12.1-cp310-cp310-manylinux2014_aarch64.whl", hash = "sha256:4e1b9c14cf13fd2ab8d769529050629a0e68a6fc5cb8e84b4a3cc1dd8c4fe541"},
{file = "torch-1.12.1-cp310-cp310-win_amd64.whl", hash = "sha256:e9c8f4a311ac29fc7e8e955cfb7733deb5dbe1bdaabf5d4af2765695824b7e0d"},
{file = "torch-1.12.1-cp310-none-macosx_10_9_x86_64.whl", hash = "sha256:976c3f997cea38ee91a0dd3c3a42322785414748d1761ef926b789dfa97c6134"},
{file = "torch-1.12.1-cp310-none-macosx_11_0_arm64.whl", hash = "sha256:68104e4715a55c4bb29a85c6a8d57d820e0757da363be1ba680fa8cc5be17b52"},
{file = "torch-1.12.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:743784ccea0dc8f2a3fe6a536bec8c4763bd82c1352f314937cb4008d4805de1"},
{file = "torch-1.12.1-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:b5dbcca369800ce99ba7ae6dee3466607a66958afca3b740690d88168752abcf"},
{file = "torch-1.12.1-cp37-cp37m-win_amd64.whl", hash = "sha256:f3b52a634e62821e747e872084ab32fbcb01b7fa7dbb7471b6218279f02a178a"},
{file = "torch-1.12.1-cp37-none-macosx_10_9_x86_64.whl", hash = "sha256:8a34a2fbbaa07c921e1b203f59d3d6e00ed379f2b384445773bd14e328a5b6c8"},
{file = "torch-1.12.1-cp37-none-macosx_11_0_arm64.whl", hash = "sha256:42f639501928caabb9d1d55ddd17f07cd694de146686c24489ab8c615c2871f2"},
{file = "torch-1.12.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:0b44601ec56f7dd44ad8afc00846051162ef9c26a8579dda0a02194327f2d55e"},
{file = "torch-1.12.1-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:cd26d8c5640c3a28c526d41ccdca14cf1cbca0d0f2e14e8263a7ac17194ab1d2"},
{file = "torch-1.12.1-cp38-cp38-win_amd64.whl", hash = "sha256:42e115dab26f60c29e298559dbec88444175528b729ae994ec4c65d56fe267dd"},
{file = "torch-1.12.1-cp38-none-macosx_10_9_x86_64.whl", hash = "sha256:a8320ba9ad87e80ca5a6a016e46ada4d1ba0c54626e135d99b2129a4541c509d"},
{file = "torch-1.12.1-cp38-none-macosx_11_0_arm64.whl", hash = "sha256:03e31c37711db2cd201e02de5826de875529e45a55631d317aadce2f1ed45aa8"},
{file = "torch-1.12.1-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:9b356aea223772cd754edb4d9ecf2a025909b8615a7668ac7d5130f86e7ec421"},
{file = "torch-1.12.1-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:6cf6f54b43c0c30335428195589bd00e764a6d27f3b9ba637aaa8c11aaf93073"},
{file = "torch-1.12.1-cp39-cp39-win_amd64.whl", hash = "sha256:f00c721f489089dc6364a01fd84906348fe02243d0af737f944fddb36003400d"},
{file = "torch-1.12.1-cp39-none-macosx_10_9_x86_64.whl", hash = "sha256:bfec2843daa654f04fda23ba823af03e7b6f7650a873cdb726752d0e3718dada"},
{file = "torch-1.12.1-cp39-none-macosx_11_0_arm64.whl", hash = "sha256:69fe2cae7c39ccadd65a123793d30e0db881f1c1927945519c5c17323131437e"},
]
tornado = [
{file = "tornado-6.2-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:20f638fd8cc85f3cbae3c732326e96addff0a15e22d80f049e00121651e82e72"},
{file = "tornado-6.2-cp37-abi3-macosx_10_9_x86_64.whl", hash = "sha256:87dcafae3e884462f90c90ecc200defe5e580a7fbbb4365eda7c7c1eb809ebc9"},
{file = "tornado-6.2-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ba09ef14ca9893954244fd872798b4ccb2367c165946ce2dd7376aebdde8e3ac"},
{file = "tornado-6.2-cp37-abi3-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b8150f721c101abdef99073bf66d3903e292d851bee51910839831caba341a75"},
{file = "tornado-6.2-cp37-abi3-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d3a2f5999215a3a06a4fc218026cd84c61b8b2b40ac5296a6db1f1451ef04c1e"},
{file = "tornado-6.2-cp37-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:5f8c52d219d4995388119af7ccaa0bcec289535747620116a58d830e7c25d8a8"},
{file = "tornado-6.2-cp37-abi3-musllinux_1_1_i686.whl", hash = "sha256:6fdfabffd8dfcb6cf887428849d30cf19a3ea34c2c248461e1f7d718ad30b66b"},
{file = "tornado-6.2-cp37-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:1d54d13ab8414ed44de07efecb97d4ef7c39f7438cf5e976ccd356bebb1b5fca"},
{file = "tornado-6.2-cp37-abi3-win32.whl", hash = "sha256:5c87076709343557ef8032934ce5f637dbb552efa7b21d08e89ae7619ed0eb23"},
{file = "tornado-6.2-cp37-abi3-win_amd64.whl", hash = "sha256:e5f923aa6a47e133d1cf87d60700889d7eae68988704e20c75fb2d65677a8e4b"},
{file = "tornado-6.2.tar.gz", hash = "sha256:9b630419bde84ec666bfd7ea0a4cb2a8a651c2d5cccdbdd1972a0c859dfc3c13"},
]
tqdm = [
{file = "tqdm-4.64.0-py2.py3-none-any.whl", hash = "sha256:74a2cdefe14d11442cedf3ba4e21a3b84ff9a2dbdc6cfae2c34addb2a14a5ea6"},
{file = "tqdm-4.64.0.tar.gz", hash = "sha256:40be55d30e200777a307a7585aee69e4eabb46b4ec6a4b4a5f2d9f11e7d5408d"},
]
traitlets = [
{file = "traitlets-5.3.0-py3-none-any.whl", hash = "sha256:65fa18961659635933100db8ca120ef6220555286949774b9cfc106f941d1c7a"},
{file = "traitlets-5.3.0.tar.gz", hash = "sha256:0bb9f1f9f017aa8ec187d8b1b2a7a6626a2a1d877116baba52a129bfa124f8e2"},
]
twine = [
{file = "twine-4.0.1-py3-none-any.whl", hash = "sha256:42026c18e394eac3e06693ee52010baa5313e4811d5a11050e7d48436cf41b9e"},
{file = "twine-4.0.1.tar.gz", hash = "sha256:96b1cf12f7ae611a4a40b6ae8e9570215daff0611828f5fe1f37a16255ab24a0"},
]
typing-extensions = [
{file = "typing_extensions-4.3.0-py3-none-any.whl", hash = "sha256:25642c956049920a5aa49edcdd6ab1e06d7e5d467fc00e0506c44ac86fbfca02"},
{file = "typing_extensions-4.3.0.tar.gz", hash = "sha256:e6d2677a32f47fc7eb2795db1dd15c1f34eff616bcaf2cfb5e997f854fa1c4a6"},
]
tzdata = [
{file = "tzdata-2022.2-py2.py3-none-any.whl", hash = "sha256:c3119520447d68ef3eb8187a55a4f44fa455f30eb1b4238fa5691ba094f2b05b"},
{file = "tzdata-2022.2.tar.gz", hash = "sha256:21f4f0d7241572efa7f7a4fdabb052e61b55dc48274e6842697ccdf5253e5451"},
]
tzlocal = [
{file = "tzlocal-4.2-py3-none-any.whl", hash = "sha256:89885494684c929d9191c57aa27502afc87a579be5cdd3225c77c463ea043745"},
{file = "tzlocal-4.2.tar.gz", hash = "sha256:ee5842fa3a795f023514ac2d801c4a81d1743bbe642e3940143326b3a00addd7"},
]
urllib3 = [
{file = "urllib3-1.26.11-py2.py3-none-any.whl", hash = "sha256:c33ccba33c819596124764c23a97d25f32b28433ba0dedeb77d873a38722c9bc"},
{file = "urllib3-1.26.11.tar.gz", hash = "sha256:ea6e8fb210b19d950fab93b60c9009226c63a28808bc8386e05301e25883ac0a"},
]
wcwidth = [
{file = "wcwidth-0.2.5-py2.py3-none-any.whl", hash = "sha256:beb4802a9cebb9144e99086eff703a642a13d6a0052920003a230f3294bbe784"},
{file = "wcwidth-0.2.5.tar.gz", hash = "sha256:c4d647b99872929fdb7bdcaa4fbe7f01413ed3d98077df798530e5b04f116c83"},
]
webencodings = [
{file = "webencodings-0.5.1-py2.py3-none-any.whl", hash = "sha256:a0af1213f3c2226497a97e2b3aa01a7e4bee4f403f95be16fc9acd2947514a78"},
{file = "webencodings-0.5.1.tar.gz", hash = "sha256:b36a1c245f2d304965eb4e0a82848379241dc04b865afcc4aab16748587e1923"},
]
werkzeug = [
{file = "Werkzeug-2.2.2-py3-none-any.whl", hash = "sha256:f979ab81f58d7318e064e99c4506445d60135ac5cd2e177a2de0089bfd4c9bd5"},
{file = "Werkzeug-2.2.2.tar.gz", hash = "sha256:7ea2d48322cc7c0f8b3a215ed73eabd7b5d75d0b50e31ab006286ccff9e00b8f"},
]
widgetsnbextension = [
{file = "widgetsnbextension-4.0.2-py3-none-any.whl", hash = "sha256:966bd61443926b6adcc0abef9f499c48bdeda181c333b0f49842d7385d440579"},
{file = "widgetsnbextension-4.0.2.tar.gz", hash = "sha256:07f0e8582f920b24316cef16490f1aeb498f2c875d48980540e5c5dbf0ff5e2d"},
]
wrapt = [
{file = "wrapt-1.14.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:1b376b3f4896e7930f1f772ac4b064ac12598d1c38d04907e696cc4d794b43d3"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:903500616422a40a98a5a3c4ff4ed9d0066f3b4c951fa286018ecdf0750194ef"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:5a9a0d155deafd9448baff28c08e150d9b24ff010e899311ddd63c45c2445e28"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:ddaea91abf8b0d13443f6dac52e89051a5063c7d014710dcb4d4abb2ff811a59"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:36f582d0c6bc99d5f39cd3ac2a9062e57f3cf606ade29a0a0d6b323462f4dd87"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:7ef58fb89674095bfc57c4069e95d7a31cfdc0939e2a579882ac7d55aadfd2a1"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:e2f83e18fe2f4c9e7db597e988f72712c0c3676d337d8b101f6758107c42425b"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:ee2b1b1769f6707a8a445162ea16dddf74285c3964f605877a20e38545c3c462"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:833b58d5d0b7e5b9832869f039203389ac7cbf01765639c7309fd50ef619e0b1"},
{file = "wrapt-1.14.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:80bb5c256f1415f747011dc3604b59bc1f91c6e7150bd7db03b19170ee06b320"},
{file = "wrapt-1.14.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:07f7a7d0f388028b2df1d916e94bbb40624c59b48ecc6cbc232546706fac74c2"},
{file = "wrapt-1.14.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:02b41b633c6261feff8ddd8d11c711df6842aba629fdd3da10249a53211a72c4"},
{file = "wrapt-1.14.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2fe803deacd09a233e4762a1adcea5db5d31e6be577a43352936179d14d90069"},
{file = "wrapt-1.14.1-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:257fd78c513e0fb5cdbe058c27a0624c9884e735bbd131935fd49e9fe719d310"},
{file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:4fcc4649dc762cddacd193e6b55bc02edca674067f5f98166d7713b193932b7f"},
{file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:11871514607b15cfeb87c547a49bca19fde402f32e2b1c24a632506c0a756656"},
{file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:8ad85f7f4e20964db4daadcab70b47ab05c7c1cf2a7c1e51087bfaa83831854c"},
{file = "wrapt-1.14.1-cp310-cp310-win32.whl", hash = "sha256:a9a52172be0b5aae932bef82a79ec0a0ce87288c7d132946d645eba03f0ad8a8"},
{file = "wrapt-1.14.1-cp310-cp310-win_amd64.whl", hash = "sha256:6d323e1554b3d22cfc03cd3243b5bb815a51f5249fdcbb86fda4bf62bab9e164"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:43ca3bbbe97af00f49efb06e352eae40434ca9d915906f77def219b88e85d907"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:6b1a564e6cb69922c7fe3a678b9f9a3c54e72b469875aa8018f18b4d1dd1adf3"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux2010_i686.whl", hash = "sha256:00b6d4ea20a906c0ca56d84f93065b398ab74b927a7a3dbd470f6fc503f95dc3"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux2010_x86_64.whl", hash = "sha256:a85d2b46be66a71bedde836d9e41859879cc54a2a04fad1191eb50c2066f6e9d"},
{file = "wrapt-1.14.1-cp35-cp35m-win32.whl", hash = "sha256:dbcda74c67263139358f4d188ae5faae95c30929281bc6866d00573783c422b7"},
{file = "wrapt-1.14.1-cp35-cp35m-win_amd64.whl", hash = "sha256:b21bb4c09ffabfa0e85e3a6b623e19b80e7acd709b9f91452b8297ace2a8ab00"},
{file = "wrapt-1.14.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:9e0fd32e0148dd5dea6af5fee42beb949098564cc23211a88d799e434255a1f4"},
{file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9736af4641846491aedb3c3f56b9bc5568d92b0692303b5a305301a95dfd38b1"},
{file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5b02d65b9ccf0ef6c34cba6cf5bf2aab1bb2f49c6090bafeecc9cd81ad4ea1c1"},
{file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:21ac0156c4b089b330b7666db40feee30a5d52634cc4560e1905d6529a3897ff"},
{file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:9f3e6f9e05148ff90002b884fbc2a86bd303ae847e472f44ecc06c2cd2fcdb2d"},
{file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:6e743de5e9c3d1b7185870f480587b75b1cb604832e380d64f9504a0535912d1"},
{file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:d79d7d5dc8a32b7093e81e97dad755127ff77bcc899e845f41bf71747af0c569"},
{file = "wrapt-1.14.1-cp36-cp36m-win32.whl", hash = "sha256:81b19725065dcb43df02b37e03278c011a09e49757287dca60c5aecdd5a0b8ed"},
{file = "wrapt-1.14.1-cp36-cp36m-win_amd64.whl", hash = "sha256:b014c23646a467558be7da3d6b9fa409b2c567d2110599b7cf9a0c5992b3b471"},
{file = "wrapt-1.14.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:88bd7b6bd70a5b6803c1abf6bca012f7ed963e58c68d76ee20b9d751c74a3248"},
{file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b5901a312f4d14c59918c221323068fad0540e34324925c8475263841dbdfe68"},
{file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d77c85fedff92cf788face9bfa3ebaa364448ebb1d765302e9af11bf449ca36d"},
{file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8d649d616e5c6a678b26d15ece345354f7c2286acd6db868e65fcc5ff7c24a77"},
{file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:7d2872609603cb35ca513d7404a94d6d608fc13211563571117046c9d2bcc3d7"},
{file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:ee6acae74a2b91865910eef5e7de37dc6895ad96fa23603d1d27ea69df545015"},
{file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:2b39d38039a1fdad98c87279b48bc5dce2c0ca0d73483b12cb72aa9609278e8a"},
{file = "wrapt-1.14.1-cp37-cp37m-win32.whl", hash = "sha256:60db23fa423575eeb65ea430cee741acb7c26a1365d103f7b0f6ec412b893853"},
{file = "wrapt-1.14.1-cp37-cp37m-win_amd64.whl", hash = "sha256:709fe01086a55cf79d20f741f39325018f4df051ef39fe921b1ebe780a66184c"},
{file = "wrapt-1.14.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:8c0ce1e99116d5ab21355d8ebe53d9460366704ea38ae4d9f6933188f327b456"},
{file = "wrapt-1.14.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:e3fb1677c720409d5f671e39bac6c9e0e422584e5f518bfd50aa4cbbea02433f"},
{file = "wrapt-1.14.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:642c2e7a804fcf18c222e1060df25fc210b9c58db7c91416fb055897fc27e8cc"},
{file = "wrapt-1.14.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7b7c050ae976e286906dd3f26009e117eb000fb2cf3533398c5ad9ccc86867b1"},
{file = "wrapt-1.14.1-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ef3f72c9666bba2bab70d2a8b79f2c6d2c1a42a7f7e2b0ec83bb2f9e383950af"},
{file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:01c205616a89d09827986bc4e859bcabd64f5a0662a7fe95e0d359424e0e071b"},
{file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:5a0f54ce2c092aaf439813735584b9537cad479575a09892b8352fea5e988dc0"},
{file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:2cf71233a0ed05ccdabe209c606fe0bac7379fdcf687f39b944420d2a09fdb57"},
{file = "wrapt-1.14.1-cp38-cp38-win32.whl", hash = "sha256:aa31fdcc33fef9eb2552cbcbfee7773d5a6792c137b359e82879c101e98584c5"},
{file = "wrapt-1.14.1-cp38-cp38-win_amd64.whl", hash = "sha256:d1967f46ea8f2db647c786e78d8cc7e4313dbd1b0aca360592d8027b8508e24d"},
{file = "wrapt-1.14.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:3232822c7d98d23895ccc443bbdf57c7412c5a65996c30442ebe6ed3df335383"},
{file = "wrapt-1.14.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:988635d122aaf2bdcef9e795435662bcd65b02f4f4c1ae37fbee7401c440b3a7"},
{file = "wrapt-1.14.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9cca3c2cdadb362116235fdbd411735de4328c61425b0aa9f872fd76d02c4e86"},
{file = "wrapt-1.14.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d52a25136894c63de15a35bc0bdc5adb4b0e173b9c0d07a2be9d3ca64a332735"},
{file = "wrapt-1.14.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:40e7bc81c9e2b2734ea4bc1aceb8a8f0ceaac7c5299bc5d69e37c44d9081d43b"},
{file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:b9b7a708dd92306328117d8c4b62e2194d00c365f18eff11a9b53c6f923b01e3"},
{file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:6a9a25751acb379b466ff6be78a315e2b439d4c94c1e99cb7266d40a537995d3"},
{file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:34aa51c45f28ba7f12accd624225e2b1e5a3a45206aa191f6f9aac931d9d56fe"},
{file = "wrapt-1.14.1-cp39-cp39-win32.whl", hash = "sha256:dee0ce50c6a2dd9056c20db781e9c1cfd33e77d2d569f5d1d9321c641bb903d5"},
{file = "wrapt-1.14.1-cp39-cp39-win_amd64.whl", hash = "sha256:dee60e1de1898bde3b238f18340eec6148986da0455d8ba7848d50470a7a32fb"},
{file = "wrapt-1.14.1.tar.gz", hash = "sha256:380a85cf89e0e69b7cfbe2ea9f765f004ff419f34194018a6827ac0e3edfed4d"},
]
xgboost = [
{file = "xgboost-1.6.1-py3-none-macosx_10_15_x86_64.macosx_11_0_x86_64.macosx_12_0_x86_64.whl", hash = "sha256:2b3d4ee105f8434873b40edc511330b8276bf3a8d9d42fb0319973079df30b07"},
{file = "xgboost-1.6.1-py3-none-macosx_12_0_arm64.whl", hash = "sha256:bd3e59a5490e010004106d8ea1d07aa8e048be51a0974fca6b4f00988f087ab8"},
{file = "xgboost-1.6.1-py3-none-manylinux2014_aarch64.whl", hash = "sha256:bbf16af8bf72e8761fcf69fdb5798bd5add6ecb48049198551b13c1d7abeabb5"},
{file = "xgboost-1.6.1-py3-none-manylinux2014_x86_64.whl", hash = "sha256:6207c77f611b54d9f056edede819ead03f0235615675f88030ff9fe10d359551"},
{file = "xgboost-1.6.1-py3-none-win_amd64.whl", hash = "sha256:3adcb7e4ccf774d5e0128c01e5c381303c3799910ab0f2e996160fe3cd23b7fc"},
{file = "xgboost-1.6.1.tar.gz", hash = "sha256:24072028656f3428e7b8aabf77340ece057f273e41f7f85d67ccaefb7454bb18"},
]
zipp = [
{file = "zipp-3.8.1-py3-none-any.whl", hash = "sha256:47c40d7fe183a6f21403a199b3e4192cca5774656965b0a4988ad2f8feb5f009"},
{file = "zipp-3.8.1.tar.gz", hash = "sha256:05b45f1ee8f807d0cc928485ca40a07cb491cf092ff587c0df9cb1fd154848d2"},
]
| darthtrevino | f947d98ffbc9d96b91be9bcfc43cf142fc2c0cd5 | eadb99cbe104d89dda62f73e331515df484e8d5a | It looks like it - I believe that it's generally recommended to run `poetry update` once in a while, which will update the resolved versions of your dependencies to the latest that matches the semver range, but these are all from just running `poetry add mistune` | darthtrevino | 341 |
py-why/dowhy | 602 | Add dependency to resolve security alert | https://github.com/py-why/dowhy/security/dependabot/1
* Updates nbconvert to 7.0rc3 & mistune to a safe version
* Move documentation-generation dependencies into devDependencies area
* Run `poetry update` for dependency refresh | null | 2022-08-13 01:40:09+00:00 | 2022-08-19 21:01:53+00:00 | pyproject.toml | [tool.poetry]
name = "dowhy"
#
# 0.0.0 is standard placeholder for poetry-dynamic-versioning
# any changes to this should not be checked in
#
version = "0.0.0"
description = "DoWhy is a Python library for causal inference that supports explicit modeling and testing of causal assumptions"
authors = [
"PyWhy Community <[email protected]>"
]
maintainers = []
license = "MIT"
documentation = "https://py-why.github.io/dowhy"
repository = "https://github.com/pywhy/dowhy"
classifiers = [
'Development Status :: 4 - Beta',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
]
keywords = [
'causality',
'machine-learning',
'causal-inference',
'statistics',
'graphical-model'
]
include = ['docs', 'tests', 'CONTRIBUTING.md', 'LICENSE']
readme = 'README.rst'
[tool.poetry-dynamic-versioning]
enable = true
vcs = "git"
[tool.poetry-dynamic-versioning.substitution]
files = ["dowhy/__init__.py"]
#
# Dependency compatibility notes:
# * xgboost requires Python >=3.7,<3.11
# * pygraphviz requires Python >=3.8
# * networkx requires Python >= 3.8
# * llvmlite requires Python >=3.6,<3.11
#
[tool.poetry.dependencies]
python = ">=3.8,<3.10"
scipy = "^1.8.1"
statsmodels = "^0.13.2"
numpy = "^1.23.1"
pandas = "^1.4.3"
networkx = "^2.8.5"
sympy = "^1.10.1"
scikit-learn = "1.0.2"
pydot = "^1.4.2"
joblib = "^1.1.0"
# causalml is wired to use llvmlite 0.36
causalml = {version = "^0.12.3", optional = true}
llvmlite = {version = "^0.36.0", optional = true}
pygraphviz = {version = "^1.9", optional = true}
matplotlib = {version = "^3.5.2", optional = true}
econml = {version = "*", optional = true}
tqdm = "^4.64.0"
#
# Dependencies for Documentation Generation
#
rpy2 = {version = "^3.5.2", optional = true}
sphinx-multiversion = { git = "https://github.com/petergtz/sphinx-multiversion.git", branch = "override-version-with-refname", optional = true}
sphinxcontrib-googleanalytics = { git = "https://github.com/petergtz/googleanalytics.git", branch = "master", optional = true}
nbsphinx = {version = "^0.8.9", optional = true}
sphinx-rtd-theme = {version = "^1.0.0", optional = true}
pydata-sphinx-theme = {version = "^0.9.0", optional = true}
pandoc = {version = "^2.2", optional = true}
ipykernel = {version = "^6.15.1", optional = true}
[tool.poetry.extras]
plotting = ["matplotlib"]
causalml = ["causalml", "llvmlite"]
docs = [
"rpy2",
"sphinx-multiversion",
"sphinxcontrib-googleanalytics",
"nbsphinx",
"sphinx-rtd-theme",
"pydata-sphinx-theme",
"sympy",
"ipykernel"
]
[tool.poetry.dev-dependencies]
poethepoet = "^0.16.0"
flake8 = "^4.0.1"
black = "^22.6.0"
isort = "^5.10.1"
pytest = "^7.1.2"
twine = "^4.0.1"
nbformat = "^5.4.0"
jupyter = "^1.0.0"
flaky = "^3.7.0"
tensorflow = "^2.9.1"
keras = "^2.9.0"
xgboost = "^1.6.1"
mypy = "^0.971"
econml = "*"
pygraphviz = "^1.9"
[build-system]
requires = ["poetry-core>=1.0.0", "poetry-dynamic-versioning"]
build-backend = "poetry.core.masonry.api"
[tool.pytest.ini_options]
markers = [
"advanced: not be to run each time. only on package updates."
]
[tool.poe.tasks]
# stop the build if there are Python syntax errors or undefined names
_flake8Errors = "flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics"
_flake8Warnings = "flake8 . --count --exit-zero --statistics"
_black = 'black .'
_isort = 'isort .'
# testing tasks
test = "pytest -v -m 'not advanced' --durations=0 --durations-min=60.0"
test_focused = "pytest -v -m 'focused'"
verifyPackage = "twine check dist/*"
# TODO: replace with `poetry publish`
uploadPackage = "twine upload dist/*"
# TODO: add formatting to build process
[tool.poe.tasks.format]
sequence = ['_black', '_isort']
ignore_fail = 'return_non_zero'
[tool.poe.tasks.lint]
sequence = ['_flake8Errors', '_flake8Warnings']
ignore_fail = 'return_non_zero'
[tool.poe.tasks.verify]
sequence = ['lint', 'test']
ignore_fail = "return_non_zero" | [tool.poetry]
name = "dowhy"
#
# 0.0.0 is standard placeholder for poetry-dynamic-versioning
# any changes to this should not be checked in
#
version = "0.0.0"
description = "DoWhy is a Python library for causal inference that supports explicit modeling and testing of causal assumptions"
authors = [
"PyWhy Community <[email protected]>"
]
maintainers = []
license = "MIT"
documentation = "https://py-why.github.io/dowhy"
repository = "https://github.com/pywhy/dowhy"
classifiers = [
'Development Status :: 4 - Beta',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
]
keywords = [
'causality',
'machine-learning',
'causal-inference',
'statistics',
'graphical-model'
]
include = ['docs', 'tests', 'CONTRIBUTING.md', 'LICENSE']
readme = 'README.rst'
[tool.poetry-dynamic-versioning]
enable = true
vcs = "git"
[tool.poetry-dynamic-versioning.substitution]
files = ["dowhy/__init__.py"]
#
# Dependency compatibility notes:
# * xgboost requires Python >=3.7,<3.11
# * pygraphviz requires Python >=3.8
# * networkx requires Python >= 3.8
# * llvmlite requires Python >=3.6,<3.11
#
[tool.poetry.dependencies]
python = ">=3.8,<3.10"
scipy = "^1.8.1"
statsmodels = "^0.13.2"
numpy = "^1.23.1"
pandas = "^1.4.3"
networkx = "^2.8.5"
sympy = "^1.10.1"
scikit-learn = "1.0.2"
pydot = "^1.4.2"
joblib = "^1.1.0"
pygraphviz = {version = "^1.9", optional = true}
matplotlib = {version = "^3.5.2", optional = true}
econml = {version = "*", optional = true}
tqdm = "^4.64.0"
#
# CausalML Extra (causalml is wired to use llvmlite 0.36)
#
causalml = {version = "^0.12.3", optional = true}
llvmlite = {version = "^0.36.0", optional = true}
[tool.poetry.extras]
plotting = ["matplotlib"]
causalml = ["causalml", "llvmlite"]
[tool.poetry.dev-dependencies]
poethepoet = "^0.16.0"
flake8 = "^4.0.1"
black = "^22.6.0"
isort = "^5.10.1"
pytest = "^7.1.2"
twine = "^4.0.1"
nbformat = "^5.4.0"
jupyter = "^1.0.0"
flaky = "^3.7.0"
tensorflow = "^2.9.1"
keras = "^2.9.0"
xgboost = "^1.6.1"
mypy = "^0.971"
econml = "*"
pygraphviz = "^1.9"
#
# Dependencies for Documentation Generation
#
rpy2 = "^3.5.2"
sphinx-multiversion = { git = "https://github.com/petergtz/sphinx-multiversion.git", branch = "override-version-with-refname"}
sphinxcontrib-googleanalytics = { git = "https://github.com/petergtz/googleanalytics.git", branch = "master"}
nbsphinx = "^0.8.9"
sphinx-rtd-theme = "^1.0.0"
pydata-sphinx-theme = "^0.9.0"
pandoc = "^2.2"
ipykernel = "^6.15.1"
#
# Versions defined for security reasons
#
# https://github.com/py-why/dowhy/security/dependabot/1 - CVE-2022-34749
nbconvert = {version = "7.0.0rc3", allow-prereleases = true}
[build-system]
requires = ["poetry-core>=1.0.0", "poetry-dynamic-versioning"]
build-backend = "poetry.core.masonry.api"
[tool.pytest.ini_options]
markers = [
"advanced: not be to run each time. only on package updates."
]
[tool.poe.tasks]
# stop the build if there are Python syntax errors or undefined names
_flake8Errors = "flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics"
_flake8Warnings = "flake8 . --count --exit-zero --statistics"
_black = 'black .'
_isort = 'isort .'
# testing tasks
test = "pytest -v -m 'not advanced' --durations=0 --durations-min=60.0"
test_focused = "pytest -v -m 'focused'"
verifyPackage = "twine check dist/*"
# TODO: replace with `poetry publish`
uploadPackage = "twine upload dist/*"
# TODO: add formatting to build process
[tool.poe.tasks.format]
sequence = ['_black', '_isort']
ignore_fail = 'return_non_zero'
[tool.poe.tasks.lint]
sequence = ['_flake8Errors', '_flake8Warnings']
ignore_fail = 'return_non_zero'
[tool.poe.tasks.verify]
sequence = ['lint', 'test']
ignore_fail = "return_non_zero" | darthtrevino | f947d98ffbc9d96b91be9bcfc43cf142fc2c0cd5 | eadb99cbe104d89dda62f73e331515df484e8d5a | Is there a reason these lines related to causalml have changed together with the update to the mistune version? | emrekiciman | 342 |
py-why/dowhy | 602 | Add dependency to resolve security alert | https://github.com/py-why/dowhy/security/dependabot/1
* Updates nbconvert to 7.0rc3 & mistune to a safe version
* Move documentation-generation dependencies into devDependencies area
* Run `poetry update` for dependency refresh | null | 2022-08-13 01:40:09+00:00 | 2022-08-19 21:01:53+00:00 | pyproject.toml | [tool.poetry]
name = "dowhy"
#
# 0.0.0 is standard placeholder for poetry-dynamic-versioning
# any changes to this should not be checked in
#
version = "0.0.0"
description = "DoWhy is a Python library for causal inference that supports explicit modeling and testing of causal assumptions"
authors = [
"PyWhy Community <[email protected]>"
]
maintainers = []
license = "MIT"
documentation = "https://py-why.github.io/dowhy"
repository = "https://github.com/pywhy/dowhy"
classifiers = [
'Development Status :: 4 - Beta',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
]
keywords = [
'causality',
'machine-learning',
'causal-inference',
'statistics',
'graphical-model'
]
include = ['docs', 'tests', 'CONTRIBUTING.md', 'LICENSE']
readme = 'README.rst'
[tool.poetry-dynamic-versioning]
enable = true
vcs = "git"
[tool.poetry-dynamic-versioning.substitution]
files = ["dowhy/__init__.py"]
#
# Dependency compatibility notes:
# * xgboost requires Python >=3.7,<3.11
# * pygraphviz requires Python >=3.8
# * networkx requires Python >= 3.8
# * llvmlite requires Python >=3.6,<3.11
#
[tool.poetry.dependencies]
python = ">=3.8,<3.10"
scipy = "^1.8.1"
statsmodels = "^0.13.2"
numpy = "^1.23.1"
pandas = "^1.4.3"
networkx = "^2.8.5"
sympy = "^1.10.1"
scikit-learn = "1.0.2"
pydot = "^1.4.2"
joblib = "^1.1.0"
# causalml is wired to use llvmlite 0.36
causalml = {version = "^0.12.3", optional = true}
llvmlite = {version = "^0.36.0", optional = true}
pygraphviz = {version = "^1.9", optional = true}
matplotlib = {version = "^3.5.2", optional = true}
econml = {version = "*", optional = true}
tqdm = "^4.64.0"
#
# Dependencies for Documentation Generation
#
rpy2 = {version = "^3.5.2", optional = true}
sphinx-multiversion = { git = "https://github.com/petergtz/sphinx-multiversion.git", branch = "override-version-with-refname", optional = true}
sphinxcontrib-googleanalytics = { git = "https://github.com/petergtz/googleanalytics.git", branch = "master", optional = true}
nbsphinx = {version = "^0.8.9", optional = true}
sphinx-rtd-theme = {version = "^1.0.0", optional = true}
pydata-sphinx-theme = {version = "^0.9.0", optional = true}
pandoc = {version = "^2.2", optional = true}
ipykernel = {version = "^6.15.1", optional = true}
[tool.poetry.extras]
plotting = ["matplotlib"]
causalml = ["causalml", "llvmlite"]
docs = [
"rpy2",
"sphinx-multiversion",
"sphinxcontrib-googleanalytics",
"nbsphinx",
"sphinx-rtd-theme",
"pydata-sphinx-theme",
"sympy",
"ipykernel"
]
[tool.poetry.dev-dependencies]
poethepoet = "^0.16.0"
flake8 = "^4.0.1"
black = "^22.6.0"
isort = "^5.10.1"
pytest = "^7.1.2"
twine = "^4.0.1"
nbformat = "^5.4.0"
jupyter = "^1.0.0"
flaky = "^3.7.0"
tensorflow = "^2.9.1"
keras = "^2.9.0"
xgboost = "^1.6.1"
mypy = "^0.971"
econml = "*"
pygraphviz = "^1.9"
[build-system]
requires = ["poetry-core>=1.0.0", "poetry-dynamic-versioning"]
build-backend = "poetry.core.masonry.api"
[tool.pytest.ini_options]
markers = [
"advanced: not be to run each time. only on package updates."
]
[tool.poe.tasks]
# stop the build if there are Python syntax errors or undefined names
_flake8Errors = "flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics"
_flake8Warnings = "flake8 . --count --exit-zero --statistics"
_black = 'black .'
_isort = 'isort .'
# testing tasks
test = "pytest -v -m 'not advanced' --durations=0 --durations-min=60.0"
test_focused = "pytest -v -m 'focused'"
verifyPackage = "twine check dist/*"
# TODO: replace with `poetry publish`
uploadPackage = "twine upload dist/*"
# TODO: add formatting to build process
[tool.poe.tasks.format]
sequence = ['_black', '_isort']
ignore_fail = 'return_non_zero'
[tool.poe.tasks.lint]
sequence = ['_flake8Errors', '_flake8Warnings']
ignore_fail = 'return_non_zero'
[tool.poe.tasks.verify]
sequence = ['lint', 'test']
ignore_fail = "return_non_zero" | [tool.poetry]
name = "dowhy"
#
# 0.0.0 is standard placeholder for poetry-dynamic-versioning
# any changes to this should not be checked in
#
version = "0.0.0"
description = "DoWhy is a Python library for causal inference that supports explicit modeling and testing of causal assumptions"
authors = [
"PyWhy Community <[email protected]>"
]
maintainers = []
license = "MIT"
documentation = "https://py-why.github.io/dowhy"
repository = "https://github.com/pywhy/dowhy"
classifiers = [
'Development Status :: 4 - Beta',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
]
keywords = [
'causality',
'machine-learning',
'causal-inference',
'statistics',
'graphical-model'
]
include = ['docs', 'tests', 'CONTRIBUTING.md', 'LICENSE']
readme = 'README.rst'
[tool.poetry-dynamic-versioning]
enable = true
vcs = "git"
[tool.poetry-dynamic-versioning.substitution]
files = ["dowhy/__init__.py"]
#
# Dependency compatibility notes:
# * xgboost requires Python >=3.7,<3.11
# * pygraphviz requires Python >=3.8
# * networkx requires Python >= 3.8
# * llvmlite requires Python >=3.6,<3.11
#
[tool.poetry.dependencies]
python = ">=3.8,<3.10"
scipy = "^1.8.1"
statsmodels = "^0.13.2"
numpy = "^1.23.1"
pandas = "^1.4.3"
networkx = "^2.8.5"
sympy = "^1.10.1"
scikit-learn = "1.0.2"
pydot = "^1.4.2"
joblib = "^1.1.0"
pygraphviz = {version = "^1.9", optional = true}
matplotlib = {version = "^3.5.2", optional = true}
econml = {version = "*", optional = true}
tqdm = "^4.64.0"
#
# CausalML Extra (causalml is wired to use llvmlite 0.36)
#
causalml = {version = "^0.12.3", optional = true}
llvmlite = {version = "^0.36.0", optional = true}
[tool.poetry.extras]
plotting = ["matplotlib"]
causalml = ["causalml", "llvmlite"]
[tool.poetry.dev-dependencies]
poethepoet = "^0.16.0"
flake8 = "^4.0.1"
black = "^22.6.0"
isort = "^5.10.1"
pytest = "^7.1.2"
twine = "^4.0.1"
nbformat = "^5.4.0"
jupyter = "^1.0.0"
flaky = "^3.7.0"
tensorflow = "^2.9.1"
keras = "^2.9.0"
xgboost = "^1.6.1"
mypy = "^0.971"
econml = "*"
pygraphviz = "^1.9"
#
# Dependencies for Documentation Generation
#
rpy2 = "^3.5.2"
sphinx-multiversion = { git = "https://github.com/petergtz/sphinx-multiversion.git", branch = "override-version-with-refname"}
sphinxcontrib-googleanalytics = { git = "https://github.com/petergtz/googleanalytics.git", branch = "master"}
nbsphinx = "^0.8.9"
sphinx-rtd-theme = "^1.0.0"
pydata-sphinx-theme = "^0.9.0"
pandoc = "^2.2"
ipykernel = "^6.15.1"
#
# Versions defined for security reasons
#
# https://github.com/py-why/dowhy/security/dependabot/1 - CVE-2022-34749
nbconvert = {version = "7.0.0rc3", allow-prereleases = true}
[build-system]
requires = ["poetry-core>=1.0.0", "poetry-dynamic-versioning"]
build-backend = "poetry.core.masonry.api"
[tool.pytest.ini_options]
markers = [
"advanced: not be to run each time. only on package updates."
]
[tool.poe.tasks]
# stop the build if there are Python syntax errors or undefined names
_flake8Errors = "flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics"
_flake8Warnings = "flake8 . --count --exit-zero --statistics"
_black = 'black .'
_isort = 'isort .'
# testing tasks
test = "pytest -v -m 'not advanced' --durations=0 --durations-min=60.0"
test_focused = "pytest -v -m 'focused'"
verifyPackage = "twine check dist/*"
# TODO: replace with `poetry publish`
uploadPackage = "twine upload dist/*"
# TODO: add formatting to build process
[tool.poe.tasks.format]
sequence = ['_black', '_isort']
ignore_fail = 'return_non_zero'
[tool.poe.tasks.lint]
sequence = ['_flake8Errors', '_flake8Warnings']
ignore_fail = 'return_non_zero'
[tool.poe.tasks.verify]
sequence = ['lint', 'test']
ignore_fail = "return_non_zero" | darthtrevino | f947d98ffbc9d96b91be9bcfc43cf142fc2c0cd5 | eadb99cbe104d89dda62f73e331515df484e8d5a | Those dependencies didn't change, but I grouped them together so it was clear that they were related. | darthtrevino | 343 |
py-why/dowhy | 584 | Use Poetry for Managing Dependencies, Tasks, and Package Builds | This PR includes the [Poetry](https://python-poetry.org/) tool for managing dependencies and packaging the library. This reduces a lot of boilerplate involved in defining and executing sequences of common tasks and with packaging & publishing the library.
* Use pytest-xdist; other test updates will be deferred for another PR
* Add poetry tasks for lint, build, verifyPackage, etc. The CI uses these tasks instead of libraries like flake8 and pytest individually. This is useful for local development, and will let us chain tools together easily.
* Remove pythonic build plumbing (setup.cfg, setup.py, manifest.in) - as Poetry takes over packaging.
* Remove all requirements.txt files, as Poetry takes over dependency management.
* Remove versioneer; poetry-dynamic-versioning will apply version substitution
Signed-off-by: Chris Trevino <[email protected]> | null | 2022-07-29 20:10:26+00:00 | 2022-08-13 01:12:53+00:00 | .github/workflows/python-package.yml | # This workflow will install Python dependencies, run tests and lint with a variety of Python versions
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions
name: DoWhy CI
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
build:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: [3.7, 3.8, 3.9]
steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Install graphviz
run: |
sudo apt install graphviz libgraphviz-dev graphviz-dev pkg-config
- name: Install dependencies
run: |
python -m pip install --upgrade pip
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
if [ -f requirements-plotting.txt ]; then pip install -r requirements-plotting.txt; fi
pip install 'scikit-learn==1.0.2' --force-reinstall
if [ -f requirements-test.txt ]; then pip install -r requirements-test.txt --use-deprecated=legacy-resolver; fi
pip install 'numpy<=1.21' --force-reinstall
- name: Lint with flake8
run: |
# stop the build if there are Python syntax errors or undefined names
flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
# exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
- name: Test with pytest
run: |
pytest -v -m "not advanced"
- name: Check package consistency with twine
run: |
python setup.py check sdist bdist_wheel
twine check dist/*
| # This workflow will install Python dependencies, run tests and lint with a variety of Python versions
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions
name: DoWhy CI
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
build:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: [3.8, 3.9]
poetry-version: [1.1.14]
steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Install Poetry ${{ matrix.poetry-version }}
uses: abatilo/[email protected]
with:
poetry-version: ${{ matrix.poetry-version }}
- name: Install Poetry Dynamic Versioning Plugin
run: pip install poetry-dynamic-versioning
- name: Install graphviz
run: |
sudo apt install graphviz libgraphviz-dev graphviz-dev pkg-config
- name: Install dependencies
run: |
python -m pip install --upgrade pip
echo "installing poetry dependencies"
poetry install -E plotting -E causalml
- name: Lint
run: poetry run poe lint
- name: Test
run: poetry run poe test
- name: Check package consistency
run: |
poetry build
poetry run poe verifyPackage
| darthtrevino | eba2c1cc8217495797f9d5ed34cffe7ed457fe40 | 2b4832a91e7ab54d31b116d19958fddecc2c8510 | Sorry, you might have already commented somewhere, but why is it necessary to remove 3.7? | petergtz | 344 |
py-why/dowhy | 584 | Use Poetry for Managing Dependencies, Tasks, and Package Builds | This PR includes the [Poetry](https://python-poetry.org/) tool for managing dependencies and packaging the library. This reduces a lot of boilerplate involved in defining and executing sequences of common tasks and with packaging & publishing the library.
* Use pytest-xdist; other test updates will be deferred for another PR
* Add poetry tasks for lint, build, verifyPackage, etc. The CI uses these tasks instead of libraries like flake8 and pytest individually. This is useful for local development, and will let us chain tools together easily.
* Remove pythonic build plumbing (setup.cfg, setup.py, manifest.in) - as Poetry takes over packaging.
* Remove all requirements.txt files, as Poetry takes over dependency management.
* Remove versioneer; poetry-dynamic-versioning will apply version substitution
Signed-off-by: Chris Trevino <[email protected]> | null | 2022-07-29 20:10:26+00:00 | 2022-08-13 01:12:53+00:00 | .github/workflows/python-package.yml | # This workflow will install Python dependencies, run tests and lint with a variety of Python versions
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions
name: DoWhy CI
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
build:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: [3.7, 3.8, 3.9]
steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Install graphviz
run: |
sudo apt install graphviz libgraphviz-dev graphviz-dev pkg-config
- name: Install dependencies
run: |
python -m pip install --upgrade pip
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
if [ -f requirements-plotting.txt ]; then pip install -r requirements-plotting.txt; fi
pip install 'scikit-learn==1.0.2' --force-reinstall
if [ -f requirements-test.txt ]; then pip install -r requirements-test.txt --use-deprecated=legacy-resolver; fi
pip install 'numpy<=1.21' --force-reinstall
- name: Lint with flake8
run: |
# stop the build if there are Python syntax errors or undefined names
flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
# exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
- name: Test with pytest
run: |
pytest -v -m "not advanced"
- name: Check package consistency with twine
run: |
python setup.py check sdist bdist_wheel
twine check dist/*
| # This workflow will install Python dependencies, run tests and lint with a variety of Python versions
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions
name: DoWhy CI
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
build:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: [3.8, 3.9]
poetry-version: [1.1.14]
steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Install Poetry ${{ matrix.poetry-version }}
uses: abatilo/[email protected]
with:
poetry-version: ${{ matrix.poetry-version }}
- name: Install Poetry Dynamic Versioning Plugin
run: pip install poetry-dynamic-versioning
- name: Install graphviz
run: |
sudo apt install graphviz libgraphviz-dev graphviz-dev pkg-config
- name: Install dependencies
run: |
python -m pip install --upgrade pip
echo "installing poetry dependencies"
poetry install -E plotting -E causalml
- name: Lint
run: poetry run poe lint
- name: Test
run: poetry run poe test
- name: Check package consistency
run: |
poetry build
poetry run poe verifyPackage
| darthtrevino | eba2c1cc8217495797f9d5ed34cffe7ed457fe40 | 2b4832a91e7ab54d31b116d19958fddecc2c8510 | I tracked my compatibility notes in pyproject.toml here: https://github.com/py-why/dowhy/blob/0bd42158d7aadcd2e4e5c5c9497b1df1c2e0967d/pyproject.toml#L40-L46
Basically, `pygraphviz` and `networkx` - at least recent versions of them - require >= 3.8. Since there's a proposal to extend networkx with causal graphs, it seemed to make sense to stay recent with them. | darthtrevino | 345 |
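The Python-version reasoning above ends up encoded in the `python = ">=3.8,<3.10"` constraint and the compatibility comments in the pyproject.toml shown later in this thread. A minimal sketch of exercising that constraint locally, assuming the relevant interpreters are already installed:

```bash
# Select an environment for a specific interpreter, then resolve and install;
# a Python outside the declared range (such as 3.7) fails Poetry's
# compatibility check instead of installing.
poetry env use 3.8
poetry install -E plotting

# List the virtual environments Poetry manages for this project.
poetry env list
```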
py-why/dowhy | 584 | Use Poetry for Managing Dependencies, Tasks, and Package Builds | This PR includes the [Poetry](https://python-poetry.org/) tool for managing dependencies and packaging the library. This reduces a lot of boilerplate involved in defining and executing sequences of common tasks and with packaging & publishing the library.
* Use pytest-xdist; other test updates will be deferred for another PR
* Add poetry tasks for lint, build, verifyPackage, etc. The CI uses these tasks instead of libraries like flake8 and pytest individually. This is useful for local development, and will let us chain tools together easily.
* Remove pythonic build plumbing (setup.cfg, setup.py, manifest.in) - as Poetry takes over packaging.
* Remove all requirements.txt files, as Poetry takes over dependency management.
* Remove versioneer; poetry-dynamic-versioning will apply version substitution
Signed-off-by: Chris Trevino <[email protected]> | null | 2022-07-29 20:10:26+00:00 | 2022-08-13 01:12:53+00:00 | pyproject.toml | [tool.pytest.ini_options]
markers = [
"advanced: not be to run each time. only on package updates."
]
| [tool.poetry]
name = "dowhy"
#
# 0.0.0 is standard placeholder for poetry-dynamic-versioning
# any changes to this should not be checked in
#
version = "0.0.0"
description = "DoWhy is a Python library for causal inference that supports explicit modeling and testing of causal assumptions"
authors = [
"PyWhy Community <[email protected]>"
]
maintainers = []
license = "MIT"
documentation = "https://py-why.github.io/dowhy"
repository = "https://github.com/pywhy/dowhy"
classifiers = [
'Development Status :: 4 - Beta',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
]
keywords = [
'causality',
'machine-learning',
'causal-inference',
'statistics',
'graphical-model'
]
include = ['docs', 'tests', 'CONTRIBUTING.md', 'LICENSE']
readme = 'README.rst'
[tool.poetry-dynamic-versioning]
enable = true
vcs = "git"
[tool.poetry-dynamic-versioning.substitution]
files = ["dowhy/__init__.py"]
#
# Dependency compatibility notes:
# * xgboost requires Python >=3.7,<3.11
# * pygraphviz requires Python >=3.8
# * networkx requires Python >= 3.8
# * llvmlite requires Python >=3.6,<3.11
#
[tool.poetry.dependencies]
python = ">=3.8,<3.10"
scipy = "^1.8.1"
statsmodels = "^0.13.2"
numpy = "^1.23.1"
pandas = "^1.4.3"
networkx = "^2.8.5"
sympy = "^1.10.1"
scikit-learn = "1.0.2"
pydot = "^1.4.2"
joblib = "^1.1.0"
# causalml is wired to use llvmlite 0.36
causalml = {version = "^0.12.3", optional = true}
llvmlite = {version = "^0.36.0", optional = true}
pygraphviz = {version = "^1.9", optional = true}
matplotlib = {version = "^3.5.2", optional = true}
econml = {version = "*", optional = true}
tqdm = "^4.64.0"
#
# Dependencies for Documentation Generation
#
rpy2 = {version = "^3.5.2", optional = true}
sphinx-multiversion = { git = "https://github.com/petergtz/sphinx-multiversion.git", branch = "override-version-with-refname", optional = true}
sphinxcontrib-googleanalytics = { git = "https://github.com/petergtz/googleanalytics.git", branch = "master", optional = true}
nbsphinx = {version = "^0.8.9", optional = true}
sphinx-rtd-theme = {version = "^1.0.0", optional = true}
pydata-sphinx-theme = {version = "^0.9.0", optional = true}
pandoc = {version = "^2.2", optional = true}
ipykernel = {version = "^6.15.1", optional = true}
[tool.poetry.extras]
plotting = ["matplotlib"]
causalml = ["causalml", "llvmlite"]
docs = [
"rpy2",
"sphinx-multiversion",
"sphinxcontrib-googleanalytics",
"nbsphinx",
"sphinx-rtd-theme",
"pydata-sphinx-theme",
"sympy",
"ipykernel"
]
[tool.poetry.dev-dependencies]
poethepoet = "^0.16.0"
flake8 = "^4.0.1"
black = "^22.6.0"
isort = "^5.10.1"
pytest = "^7.1.2"
twine = "^4.0.1"
nbformat = "^5.4.0"
jupyter = "^1.0.0"
flaky = "^3.7.0"
tensorflow = "^2.9.1"
keras = "^2.9.0"
xgboost = "^1.6.1"
mypy = "^0.971"
econml = "*"
pygraphviz = "^1.9"
[build-system]
requires = ["poetry-core>=1.0.0", "poetry-dynamic-versioning"]
build-backend = "poetry.core.masonry.api"
[tool.pytest.ini_options]
markers = [
"advanced: not be to run each time. only on package updates."
]
[tool.poe.tasks]
# stop the build if there are Python syntax errors or undefined names
_flake8Errors = "flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics"
_flake8Warnings = "flake8 . --count --exit-zero --statistics"
_black = 'black .'
_isort = 'isort .'
# testing tasks
test = "pytest -v -m 'not advanced' --durations=0 --durations-min=60.0"
test_focused = "pytest -v -m 'focused'"
verifyPackage = "twine check dist/*"
# TODO: replace with `poetry publish`
uploadPackage = "twine upload dist/*"
# TODO: add formatting to build process
[tool.poe.tasks.format]
sequence = ['_black', '_isort']
ignore_fail = 'return_non_zero'
[tool.poe.tasks.lint]
sequence = ['_flake8Errors', '_flake8Warnings']
ignore_fail = 'return_non_zero'
[tool.poe.tasks.verify]
sequence = ['lint', 'test']
ignore_fail = "return_non_zero" | darthtrevino | eba2c1cc8217495797f9d5ed34cffe7ed457fe40 | 2b4832a91e7ab54d31b116d19958fddecc2c8510 | should we include the LICENSE file here? | amit-sharma | 346 |
py-why/dowhy | 584 | Use Poetry for Managing Dependencies, Tasks, and Package Builds | This PR includes the [Poetry](https://python-poetry.org/) tool for managing dependencies and packaging the library. This reduces a lot of boilerplate involved in defining and executing sequences of common tasks and with packaging & publishing the library.
* Use pytest-xdist; other test updates will be deferred for another PR
* Add poetry tasks for lint, build, verifyPackage, etc. The CI uses these tasks instead of libraries like flake8 and pytest individually. This is useful for local development, and will let us chain tools together easily.
* Remove pythonic build plumbing (setup.cfg, setup.py, manifest.in) - as Poetry takes over packaging.
* Remove all requirements.txt files, as Poetry takes over dependency management.
* Remove versioneer; poetry-dynamic-versioning will apply version substitution
Signed-off-by: Chris Trevino <[email protected]> | null | 2022-07-29 20:10:26+00:00 | 2022-08-13 01:12:53+00:00 | pyproject.toml | [tool.pytest.ini_options]
markers = [
"advanced: not be to run each time. only on package updates."
]
| [tool.poetry]
name = "dowhy"
#
# 0.0.0 is standard placeholder for poetry-dynamic-versioning
# any changes to this should not be checked in
#
version = "0.0.0"
description = "DoWhy is a Python library for causal inference that supports explicit modeling and testing of causal assumptions"
authors = [
"PyWhy Community <[email protected]>"
]
maintainers = []
license = "MIT"
documentation = "https://py-why.github.io/dowhy"
repository = "https://github.com/pywhy/dowhy"
classifiers = [
'Development Status :: 4 - Beta',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
]
keywords = [
'causality',
'machine-learning',
'causal-inference',
'statistics',
'graphical-model'
]
include = ['docs', 'tests', 'CONTRIBUTING.md', 'LICENSE']
readme = 'README.rst'
[tool.poetry-dynamic-versioning]
enable = true
vcs = "git"
[tool.poetry-dynamic-versioning.substitution]
files = ["dowhy/__init__.py"]
#
# Dependency compatibility notes:
# * xgboost requires Python >=3.7,<3.11
# * pygraphviz requires Python >=3.8
# * networkx requires Python >= 3.8
# * llvmlite requires Python >=3.6,<3.11
#
[tool.poetry.dependencies]
python = ">=3.8,<3.10"
scipy = "^1.8.1"
statsmodels = "^0.13.2"
numpy = "^1.23.1"
pandas = "^1.4.3"
networkx = "^2.8.5"
sympy = "^1.10.1"
scikit-learn = "1.0.2"
pydot = "^1.4.2"
joblib = "^1.1.0"
# causalml is wired to use llvmlite 0.36
causalml = {version = "^0.12.3", optional = true}
llvmlite = {version = "^0.36.0", optional = true}
pygraphviz = {version = "^1.9", optional = true}
matplotlib = {version = "^3.5.2", optional = true}
econml = {version = "*", optional = true}
tqdm = "^4.64.0"
#
# Dependencies for Documentation Generation
#
rpy2 = {version = "^3.5.2", optional = true}
sphinx-multiversion = { git = "https://github.com/petergtz/sphinx-multiversion.git", branch = "override-version-with-refname", optional = true}
sphinxcontrib-googleanalytics = { git = "https://github.com/petergtz/googleanalytics.git", branch = "master", optional = true}
nbsphinx = {version = "^0.8.9", optional = true}
sphinx-rtd-theme = {version = "^1.0.0", optional = true}
pydata-sphinx-theme = {version = "^0.9.0", optional = true}
pandoc = {version = "^2.2", optional = true}
ipykernel = {version = "^6.15.1", optional = true}
[tool.poetry.extras]
plotting = ["matplotlib"]
causalml = ["causalml", "llvmlite"]
docs = [
"rpy2",
"sphinx-multiversion",
"sphinxcontrib-googleanalytics",
"nbsphinx",
"sphinx-rtd-theme",
"pydata-sphinx-theme",
"sympy",
"ipykernel"
]
[tool.poetry.dev-dependencies]
poethepoet = "^0.16.0"
flake8 = "^4.0.1"
black = "^22.6.0"
isort = "^5.10.1"
pytest = "^7.1.2"
twine = "^4.0.1"
nbformat = "^5.4.0"
jupyter = "^1.0.0"
flaky = "^3.7.0"
tensorflow = "^2.9.1"
keras = "^2.9.0"
xgboost = "^1.6.1"
mypy = "^0.971"
econml = "*"
pygraphviz = "^1.9"
[build-system]
requires = ["poetry-core>=1.0.0", "poetry-dynamic-versioning"]
build-backend = "poetry.core.masonry.api"
[tool.pytest.ini_options]
markers = [
"advanced: not be to run each time. only on package updates."
]
[tool.poe.tasks]
# stop the build if there are Python syntax errors or undefined names
_flake8Errors = "flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics"
_flake8Warnings = "flake8 . --count --exit-zero --statistics"
_black = 'black .'
_isort = 'isort .'
# testing tasks
test = "pytest -v -m 'not advanced' --durations=0 --durations-min=60.0"
test_focused = "pytest -v -m 'focused'"
verifyPackage = "twine check dist/*"
# TODO: replace with `poetry publish`
uploadPackage = "twine upload dist/*"
# TODO: add formatting to build process
[tool.poe.tasks.format]
sequence = ['_black', '_isort']
ignore_fail = 'return_non_zero'
[tool.poe.tasks.lint]
sequence = ['_flake8Errors', '_flake8Warnings']
ignore_fail = 'return_non_zero'
[tool.poe.tasks.verify]
sequence = ['lint', 'test']
ignore_fail = "return_non_zero" | darthtrevino | eba2c1cc8217495797f9d5ed34cffe7ed457fe40 | 2b4832a91e7ab54d31b116d19958fddecc2c8510 | curious: why do we need to convert this notebook to html? | amit-sharma | 347 |
py-why/dowhy | 584 | Use Poetry for Managing Dependencies, Tasks, and Package Builds | This PR includes the [Poetry](https://python-poetry.org/) tool for managing dependencies and packaging the library. This reduces a lot of boilerplate involved in defining and executing sequences of common tasks and with packaging & publishing the library.
* Use pytest-xdist; other test updates will be deferred for another PR
* Add poetry tasks for lint, build, verifyPackage, etc. The CI uses these tasks instead of invoking libraries like flake8 and pytest individually. This is useful for local development, and will let us chain tools together easily.
* Remove pythonic build plumbing (setup.cfg, setup.py, manifest.in) - as Poetry takes over packaging.
* Remove all requirements.txt files, as Poetry takes over dependency management.
* Remove versioneer; poetry-dynamic-versioning will apply version substitution
Signed-off-by: Chris Trevino <[email protected]> | null | 2022-07-29 20:10:26+00:00 | 2022-08-13 01:12:53+00:00 | pyproject.toml | [tool.pytest.ini_options]
markers = [
"advanced: not be to run each time. only on package updates."
]
| [tool.poetry]
name = "dowhy"
#
# 0.0.0 is standard placeholder for poetry-dynamic-versioning
# any changes to this should not be checked in
#
version = "0.0.0"
description = "DoWhy is a Python library for causal inference that supports explicit modeling and testing of causal assumptions"
authors = [
"PyWhy Community <[email protected]>"
]
maintainers = []
license = "MIT"
documentation = "https://py-why.github.io/dowhy"
repository = "https://github.com/pywhy/dowhy"
classifiers = [
'Development Status :: 4 - Beta',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
]
keywords = [
'causality',
'machine-learning',
'causal-inference',
'statistics',
'graphical-model'
]
include = ['docs', 'tests', 'CONTRIBUTING.md', 'LICENSE']
readme = 'README.rst'
[tool.poetry-dynamic-versioning]
enable = true
vcs = "git"
[tool.poetry-dynamic-versioning.substitution]
files = ["dowhy/__init__.py"]
#
# Dependency compatibility notes:
# * xgboost requires Python >=3.7,<3.11
# * pygraphviz requires Python >=3.8
# * networkx requires Python >= 3.8
# * llvmlite requires Python >=3.6,<3.11
#
[tool.poetry.dependencies]
python = ">=3.8,<3.10"
scipy = "^1.8.1"
statsmodels = "^0.13.2"
numpy = "^1.23.1"
pandas = "^1.4.3"
networkx = "^2.8.5"
sympy = "^1.10.1"
scikit-learn = "1.0.2"
pydot = "^1.4.2"
joblib = "^1.1.0"
# causalml is wired to use llvmlite 0.36
causalml = {version = "^0.12.3", optional = true}
llvmlite = {version = "^0.36.0", optional = true}
pygraphviz = {version = "^1.9", optional = true}
matplotlib = {version = "^3.5.2", optional = true}
econml = {version = "*", optional = true}
tqdm = "^4.64.0"
#
# Dependencies for Documentation Generation
#
rpy2 = {version = "^3.5.2", optional = true}
sphinx-multiversion = { git = "https://github.com/petergtz/sphinx-multiversion.git", branch = "override-version-with-refname", optional = true}
sphinxcontrib-googleanalytics = { git = "https://github.com/petergtz/googleanalytics.git", branch = "master", optional = true}
nbsphinx = {version = "^0.8.9", optional = true}
sphinx-rtd-theme = {version = "^1.0.0", optional = true}
pydata-sphinx-theme = {version = "^0.9.0", optional = true}
pandoc = {version = "^2.2", optional = true}
ipykernel = {version = "^6.15.1", optional = true}
[tool.poetry.extras]
plotting = ["matplotlib"]
causalml = ["causalml", "llvmlite"]
docs = [
"rpy2",
"sphinx-multiversion",
"sphinxcontrib-googleanalytics",
"nbsphinx",
"sphinx-rtd-theme",
"pydata-sphinx-theme",
"sympy",
"ipykernel"
]
[tool.poetry.dev-dependencies]
poethepoet = "^0.16.0"
flake8 = "^4.0.1"
black = "^22.6.0"
isort = "^5.10.1"
pytest = "^7.1.2"
twine = "^4.0.1"
nbformat = "^5.4.0"
jupyter = "^1.0.0"
flaky = "^3.7.0"
tensorflow = "^2.9.1"
keras = "^2.9.0"
xgboost = "^1.6.1"
mypy = "^0.971"
econml = "*"
pygraphviz = "^1.9"
[build-system]
requires = ["poetry-core>=1.0.0", "poetry-dynamic-versioning"]
build-backend = "poetry.core.masonry.api"
[tool.pytest.ini_options]
markers = [
"advanced: not be to run each time. only on package updates."
]
[tool.poe.tasks]
# stop the build if there are Python syntax errors or undefined names
_flake8Errors = "flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics"
_flake8Warnings = "flake8 . --count --exit-zero --statistics"
_black = 'black .'
_isort = 'isort .'
# testing tasks
test = "pytest -v -m 'not advanced' --durations=0 --durations-min=60.0"
test_focused = "pytest -v -m 'focused'"
verifyPackage = "twine check dist/*"
# TODO: replace with `poetry publish`
uploadPackage = "twine upload dist/*"
# TODO: add formatting to build process
[tool.poe.tasks.format]
sequence = ['_black', '_isort']
ignore_fail = 'return_non_zero'
[tool.poe.tasks.lint]
sequence = ['_flake8Errors', '_flake8Warnings']
ignore_fail = 'return_non_zero'
[tool.poe.tasks.verify]
sequence = ['lint', 'test']
ignore_fail = "return_non_zero" | darthtrevino | eba2c1cc8217495797f9d5ed34cffe7ed457fe40 | 2b4832a91e7ab54d31b116d19958fddecc2c8510 | should we put llmvlite as a dev-dependency? It is not used by dowhy, but only relevant if someone is using causalml.
Another option is to mark it as an optional dependency. | amit-sharma | 348 |
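Since this thread weighs making `causalml` (and its pinned `llvmlite`) an optional extra rather than a hard dependency, one common pattern — shown purely as a sketch, not as dowhy's actual implementation — is to guard the import and point users to the extras group declared in `pyproject.toml`:

```python
def require_causalml():
    """Return the causalml module, raising a helpful error when the extra is absent."""
    try:
        import causalml  # present only when the optional "causalml" extra is installed
    except ImportError as err:
        raise ImportError(
            "causalml is an optional dependency; install the 'causalml' extra, "
            'e.g. pip install "dowhy[causalml]" or poetry install -E causalml'
        ) from err
    return causalml
```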
py-why/dowhy | 584 | Use Poetry for Managing Dependencies, Tasks, and Package Builds | This PR includes the [Poetry](https://python-poetry.org/) tool for managing dependencies and packaging the library. This reduces a lot of boilerplate involved in defining and executing sequences of common tasks and with packaging & publishing the library.
* Use pytest-xdist; other test updates will be deferred for another PR
* Add poetry tasks for lint, build, verifyPackage, etc. The CI uses these tasks instead of invoking libraries like flake8 and pytest individually. This is useful for local development, and will let us chain tools together easily.
* Remove pythonic build plumbing (setup.cfg, setup.py, manifest.in) - as Poetry takes over packaging.
* Remove all requirements.txt files, as Poetry takes over dependency management.
* Remove versioneer; poetry-dynamic-versioning will apply version substitution
Signed-off-by: Chris Trevino <[email protected]> | null | 2022-07-29 20:10:26+00:00 | 2022-08-13 01:12:53+00:00 | pyproject.toml | [tool.pytest.ini_options]
markers = [
"advanced: not be to run each time. only on package updates."
]
| [tool.poetry]
name = "dowhy"
#
# 0.0.0 is standard placeholder for poetry-dynamic-versioning
# any changes to this should not be checked in
#
version = "0.0.0"
description = "DoWhy is a Python library for causal inference that supports explicit modeling and testing of causal assumptions"
authors = [
"PyWhy Community <[email protected]>"
]
maintainers = []
license = "MIT"
documentation = "https://py-why.github.io/dowhy"
repository = "https://github.com/pywhy/dowhy"
classifiers = [
'Development Status :: 4 - Beta',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
]
keywords = [
'causality',
'machine-learning',
'causal-inference',
'statistics',
'graphical-model'
]
include = ['docs', 'tests', 'CONTRIBUTING.md', 'LICENSE']
readme = 'README.rst'
[tool.poetry-dynamic-versioning]
enable = true
vcs = "git"
[tool.poetry-dynamic-versioning.substitution]
files = ["dowhy/__init__.py"]
#
# Dependency compatibility notes:
# * xgboost requires Python >=3.7,<3.11
# * pygraphviz requires Python >=3.8
# * networkx requires Python >= 3.8
# * llvmlite requires Python >=3.6,<3.11
#
[tool.poetry.dependencies]
python = ">=3.8,<3.10"
scipy = "^1.8.1"
statsmodels = "^0.13.2"
numpy = "^1.23.1"
pandas = "^1.4.3"
networkx = "^2.8.5"
sympy = "^1.10.1"
scikit-learn = "1.0.2"
pydot = "^1.4.2"
joblib = "^1.1.0"
# causalml is wired to use llvmlite 0.36
causalml = {version = "^0.12.3", optional = true}
llvmlite = {version = "^0.36.0", optional = true}
pygraphviz = {version = "^1.9", optional = true}
matplotlib = {version = "^3.5.2", optional = true}
econml = {version = "*", optional = true}
tqdm = "^4.64.0"
#
# Dependencies for Documentation Generation
#
rpy2 = {version = "^3.5.2", optional = true}
sphinx-multiversion = { git = "https://github.com/petergtz/sphinx-multiversion.git", branch = "override-version-with-refname", optional = true}
sphinxcontrib-googleanalytics = { git = "https://github.com/petergtz/googleanalytics.git", branch = "master", optional = true}
nbsphinx = {version = "^0.8.9", optional = true}
sphinx-rtd-theme = {version = "^1.0.0", optional = true}
pydata-sphinx-theme = {version = "^0.9.0", optional = true}
pandoc = {version = "^2.2", optional = true}
ipykernel = {version = "^6.15.1", optional = true}
[tool.poetry.extras]
plotting = ["matplotlib"]
causalml = ["causalml", "llvmlite"]
docs = [
"rpy2",
"sphinx-multiversion",
"sphinxcontrib-googleanalytics",
"nbsphinx",
"sphinx-rtd-theme",
"pydata-sphinx-theme",
"sympy",
"ipykernel"
]
[tool.poetry.dev-dependencies]
poethepoet = "^0.16.0"
flake8 = "^4.0.1"
black = "^22.6.0"
isort = "^5.10.1"
pytest = "^7.1.2"
twine = "^4.0.1"
nbformat = "^5.4.0"
jupyter = "^1.0.0"
flaky = "^3.7.0"
tensorflow = "^2.9.1"
keras = "^2.9.0"
xgboost = "^1.6.1"
mypy = "^0.971"
econml = "*"
pygraphviz = "^1.9"
[build-system]
requires = ["poetry-core>=1.0.0", "poetry-dynamic-versioning"]
build-backend = "poetry.core.masonry.api"
[tool.pytest.ini_options]
markers = [
"advanced: not be to run each time. only on package updates."
]
[tool.poe.tasks]
# stop the build if there are Python syntax errors or undefined names
_flake8Errors = "flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics"
_flake8Warnings = "flake8 . --count --exit-zero --statistics"
_black = 'black .'
_isort = 'isort .'
# testing tasks
test = "pytest -v -m 'not advanced' --durations=0 --durations-min=60.0"
test_focused = "pytest -v -m 'focused'"
verifyPackage = "twine check dist/*"
# TODO: replace with `poetry publish`
uploadPackage = "twine upload dist/*"
# TODO: add formatting to build process
[tool.poe.tasks.format]
sequence = ['_black', '_isort']
ignore_fail = 'return_non_zero'
[tool.poe.tasks.lint]
sequence = ['_flake8Errors', '_flake8Warnings']
ignore_fail = 'return_non_zero'
[tool.poe.tasks.verify]
sequence = ['lint', 'test']
ignore_fail = "return_non_zero" | darthtrevino | eba2c1cc8217495797f9d5ed34cffe7ed457fe40 | 2b4832a91e7ab54d31b116d19958fddecc2c8510 | Honestly I just moved this script over from the makefile. I'm not sure if it was part of your workflow for some reason | darthtrevino | 349 |
py-why/dowhy | 584 | Use Poetry for Managing Dependencies, Tasks, and Package Builds | This PR includes the [Poetry](https://python-poetry.org/) tool for managing dependencies and packaging the library. This reduces a lot of boilerplate involved in defining and executing sequences of common tasks and with packaging & publishing the library.
* Use pytest-xdist; other test updates will be deferred for another PR
* Add poetry tasks for lint, build, verifyPackage, etc. The CI uses these tasks instead of invoking libraries like flake8 and pytest individually. This is useful for local development, and will let us chain tools together easily.
* Remove pythonic build plumbing (setup.cfg, setup.py, manifest.in) - as Poetry takes over packaging.
* Remove all requirements.txt files, as Poetry takes over dependency management.
* Remove versioneer; poetry-dynamic-versioning will apply version substitution
Signed-off-by: Chris Trevino <[email protected]> | null | 2022-07-29 20:10:26+00:00 | 2022-08-13 01:12:53+00:00 | pyproject.toml | [tool.pytest.ini_options]
markers = [
"advanced: not be to run each time. only on package updates."
]
| [tool.poetry]
name = "dowhy"
#
# 0.0.0 is standard placeholder for poetry-dynamic-versioning
# any changes to this should not be checked in
#
version = "0.0.0"
description = "DoWhy is a Python library for causal inference that supports explicit modeling and testing of causal assumptions"
authors = [
"PyWhy Community <[email protected]>"
]
maintainers = []
license = "MIT"
documentation = "https://py-why.github.io/dowhy"
repository = "https://github.com/pywhy/dowhy"
classifiers = [
'Development Status :: 4 - Beta',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
]
keywords = [
'causality',
'machine-learning',
'causal-inference',
'statistics',
'graphical-model'
]
include = ['docs', 'tests', 'CONTRIBUTING.md', 'LICENSE']
readme = 'README.rst'
[tool.poetry-dynamic-versioning]
enable = true
vcs = "git"
[tool.poetry-dynamic-versioning.substitution]
files = ["dowhy/__init__.py"]
#
# Dependency compatibility notes:
# * xgboost requires Python >=3.7,<3.11
# * pygraphviz requires Python >=3.8
# * networkx requires Python >= 3.8
# * llvmlite requires Python >=3.6,<3.11
#
[tool.poetry.dependencies]
python = ">=3.8,<3.10"
scipy = "^1.8.1"
statsmodels = "^0.13.2"
numpy = "^1.23.1"
pandas = "^1.4.3"
networkx = "^2.8.5"
sympy = "^1.10.1"
scikit-learn = "1.0.2"
pydot = "^1.4.2"
joblib = "^1.1.0"
# causalml is wired to use llvmlite 0.36
causalml = {version = "^0.12.3", optional = true}
llvmlite = {version = "^0.36.0", optional = true}
pygraphviz = {version = "^1.9", optional = true}
matplotlib = {version = "^3.5.2", optional = true}
econml = {version = "*", optional = true}
tqdm = "^4.64.0"
#
# Dependencies for Documentation Generation
#
rpy2 = {version = "^3.5.2", optional = true}
sphinx-multiversion = { git = "https://github.com/petergtz/sphinx-multiversion.git", branch = "override-version-with-refname", optional = true}
sphinxcontrib-googleanalytics = { git = "https://github.com/petergtz/googleanalytics.git", branch = "master", optional = true}
nbsphinx = {version = "^0.8.9", optional = true}
sphinx-rtd-theme = {version = "^1.0.0", optional = true}
pydata-sphinx-theme = {version = "^0.9.0", optional = true}
pandoc = {version = "^2.2", optional = true}
ipykernel = {version = "^6.15.1", optional = true}
[tool.poetry.extras]
plotting = ["matplotlib"]
causalml = ["causalml", "llvmlite"]
docs = [
"rpy2",
"sphinx-multiversion",
"sphinxcontrib-googleanalytics",
"nbsphinx",
"sphinx-rtd-theme",
"pydata-sphinx-theme",
"sympy",
"ipykernel"
]
[tool.poetry.dev-dependencies]
poethepoet = "^0.16.0"
flake8 = "^4.0.1"
black = "^22.6.0"
isort = "^5.10.1"
pytest = "^7.1.2"
twine = "^4.0.1"
nbformat = "^5.4.0"
jupyter = "^1.0.0"
flaky = "^3.7.0"
tensorflow = "^2.9.1"
keras = "^2.9.0"
xgboost = "^1.6.1"
mypy = "^0.971"
econml = "*"
pygraphviz = "^1.9"
[build-system]
requires = ["poetry-core>=1.0.0", "poetry-dynamic-versioning"]
build-backend = "poetry.core.masonry.api"
[tool.pytest.ini_options]
markers = [
"advanced: not be to run each time. only on package updates."
]
[tool.poe.tasks]
# stop the build if there are Python syntax errors or undefined names
_flake8Errors = "flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics"
_flake8Warnings = "flake8 . --count --exit-zero --statistics"
_black = 'black .'
_isort = 'isort .'
# testing tasks
test = "pytest -v -m 'not advanced' --durations=0 --durations-min=60.0"
test_focused = "pytest -v -m 'focused'"
verifyPackage = "twine check dist/*"
# TODO: replace with `poetry publish`
uploadPackage = "twine upload dist/*"
# TODO: add formatting to build process
[tool.poe.tasks.format]
sequence = ['_black', '_isort']
ignore_fail = 'return_non_zero'
[tool.poe.tasks.lint]
sequence = ['_flake8Errors', '_flake8Warnings']
ignore_fail = 'return_non_zero'
[tool.poe.tasks.verify]
sequence = ['lint', 'test']
ignore_fail = "return_non_zero" | darthtrevino | eba2c1cc8217495797f9d5ed34cffe7ed457fe40 | 2b4832a91e7ab54d31b116d19958fddecc2c8510 | There's a license field above, which I think covers it | darthtrevino | 350 |
py-why/dowhy | 584 | Use Poetry for Managing Dependencies, Tasks, and Package Builds | This PR includes the [Poetry](https://python-poetry.org/) tool for managing dependencies and packaging the library. This reduces a lot of boilerplate involved in defining and executing sequences of common tasks and with packaging & publishing the library.
* Use pytest-xdist; other test updates will be deferred for another PR
* Add poetry tasks for lint, build, verifyPackage, etc. The CI uses these tasks instead of invoking libraries like flake8 and pytest individually. This is useful for local development, and will let us chain tools together easily.
* Remove pythonic build plumbing (setup.cfg, setup.py, manifest.in) - as Poetry takes over packaging.
* Remove all requirements.txt files, as Poetry takes over dependency management.
* Remove versioneer; poetry-dynamic-versioning will apply version substitution
Signed-off-by: Chris Trevino <[email protected]> | null | 2022-07-29 20:10:26+00:00 | 2022-08-13 01:12:53+00:00 | pyproject.toml | [tool.pytest.ini_options]
markers = [
"advanced: not be to run each time. only on package updates."
]
| [tool.poetry]
name = "dowhy"
#
# 0.0.0 is standard placeholder for poetry-dynamic-versioning
# any changes to this should not be checked in
#
version = "0.0.0"
description = "DoWhy is a Python library for causal inference that supports explicit modeling and testing of causal assumptions"
authors = [
"PyWhy Community <[email protected]>"
]
maintainers = []
license = "MIT"
documentation = "https://py-why.github.io/dowhy"
repository = "https://github.com/pywhy/dowhy"
classifiers = [
'Development Status :: 4 - Beta',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
]
keywords = [
'causality',
'machine-learning',
'causal-inference',
'statistics',
'graphical-model'
]
include = ['docs', 'tests', 'CONTRIBUTING.md', 'LICENSE']
readme = 'README.rst'
[tool.poetry-dynamic-versioning]
enable = true
vcs = "git"
[tool.poetry-dynamic-versioning.substitution]
files = ["dowhy/__init__.py"]
#
# Dependency compatibility notes:
# * xgboost requires Python >=3.7,<3.11
# * pygraphviz requires Python >=3.8
# * networkx requires Python >= 3.8
# * llvmlite requires Python >=3.6,<3.11
#
[tool.poetry.dependencies]
python = ">=3.8,<3.10"
scipy = "^1.8.1"
statsmodels = "^0.13.2"
numpy = "^1.23.1"
pandas = "^1.4.3"
networkx = "^2.8.5"
sympy = "^1.10.1"
scikit-learn = "1.0.2"
pydot = "^1.4.2"
joblib = "^1.1.0"
# causalml is wired to use llvmlite 0.36
causalml = {version = "^0.12.3", optional = true}
llvmlite = {version = "^0.36.0", optional = true}
pygraphviz = {version = "^1.9", optional = true}
matplotlib = {version = "^3.5.2", optional = true}
econml = {version = "*", optional = true}
tqdm = "^4.64.0"
#
# Dependencies for Documentation Generation
#
rpy2 = {version = "^3.5.2", optional = true}
sphinx-multiversion = { git = "https://github.com/petergtz/sphinx-multiversion.git", branch = "override-version-with-refname", optional = true}
sphinxcontrib-googleanalytics = { git = "https://github.com/petergtz/googleanalytics.git", branch = "master", optional = true}
nbsphinx = {version = "^0.8.9", optional = true}
sphinx-rtd-theme = {version = "^1.0.0", optional = true}
pydata-sphinx-theme = {version = "^0.9.0", optional = true}
pandoc = {version = "^2.2", optional = true}
ipykernel = {version = "^6.15.1", optional = true}
[tool.poetry.extras]
plotting = ["matplotlib"]
causalml = ["causalml", "llvmlite"]
docs = [
"rpy2",
"sphinx-multiversion",
"sphinxcontrib-googleanalytics",
"nbsphinx",
"sphinx-rtd-theme",
"pydata-sphinx-theme",
"sympy",
"ipykernel"
]
[tool.poetry.dev-dependencies]
poethepoet = "^0.16.0"
flake8 = "^4.0.1"
black = "^22.6.0"
isort = "^5.10.1"
pytest = "^7.1.2"
twine = "^4.0.1"
nbformat = "^5.4.0"
jupyter = "^1.0.0"
flaky = "^3.7.0"
tensorflow = "^2.9.1"
keras = "^2.9.0"
xgboost = "^1.6.1"
mypy = "^0.971"
econml = "*"
pygraphviz = "^1.9"
[build-system]
requires = ["poetry-core>=1.0.0", "poetry-dynamic-versioning"]
build-backend = "poetry.core.masonry.api"
[tool.pytest.ini_options]
markers = [
"advanced: not be to run each time. only on package updates."
]
[tool.poe.tasks]
# stop the build if there are Python syntax errors or undefined names
_flake8Errors = "flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics"
_flake8Warnings = "flake8 . --count --exit-zero --statistics"
_black = 'black .'
_isort = 'isort .'
# testing tasks
test = "pytest -v -m 'not advanced' --durations=0 --durations-min=60.0"
test_focused = "pytest -v -m 'focused'"
verifyPackage = "twine check dist/*"
# TODO: replace with `poetry publish`
uploadPackage = "twine upload dist/*"
# TODO: add formatting to build process
[tool.poe.tasks.format]
sequence = ['_black', '_isort']
ignore_fail = 'return_non_zero'
[tool.poe.tasks.lint]
sequence = ['_flake8Errors', '_flake8Warnings']
ignore_fail = 'return_non_zero'
[tool.poe.tasks.verify]
sequence = ['lint', 'test']
ignore_fail = "return_non_zero" | darthtrevino | eba2c1cc8217495797f9d5ed34cffe7ed457fe40 | 2b4832a91e7ab54d31b116d19958fddecc2c8510 | I can do that; if it's optional we would probably add it as a devDep and mark it as optional in the main deps. Should causalml be optional as well? | darthtrevino | 351 |
py-why/dowhy | 584 | Use Poetry for Managing Dependencies, Tasks, and Package Builds | This PR includes the [Poetry](https://python-poetry.org/) tool for managing dependencies and packaging the library. This reduces a lot of boilerplate involved in defining and executing sequences of common tasks and with packaging & publishing the library.
* Use pytest-xdist; other test updates will be deferred for another PR
* Add poetry tasks for lint, build, verifyPackage, etc. The CI uses these tasks instead of invoking libraries like flake8 and pytest individually. This is useful for local development, and will let us chain tools together easily.
* Remove pythonic build plumbing (setup.cfg, setup.py, manifest.in) - as Poetry takes over packaging.
* Remove all requirements.txt files, as Poetry takes over dependency management.
* Remove versioneer; poetry-dynamic-versioning will apply version substitution
Signed-off-by: Chris Trevino <[email protected]> | null | 2022-07-29 20:10:26+00:00 | 2022-08-13 01:12:53+00:00 | pyproject.toml | [tool.pytest.ini_options]
markers = [
"advanced: not be to run each time. only on package updates."
]
| [tool.poetry]
name = "dowhy"
#
# 0.0.0 is standard placeholder for poetry-dynamic-versioning
# any changes to this should not be checked in
#
version = "0.0.0"
description = "DoWhy is a Python library for causal inference that supports explicit modeling and testing of causal assumptions"
authors = [
"PyWhy Community <[email protected]>"
]
maintainers = []
license = "MIT"
documentation = "https://py-why.github.io/dowhy"
repository = "https://github.com/pywhy/dowhy"
classifiers = [
'Development Status :: 4 - Beta',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
]
keywords = [
'causality',
'machine-learning',
'causal-inference',
'statistics',
'graphical-model'
]
include = ['docs', 'tests', 'CONTRIBUTING.md', 'LICENSE']
readme = 'README.rst'
[tool.poetry-dynamic-versioning]
enable = true
vcs = "git"
[tool.poetry-dynamic-versioning.substitution]
files = ["dowhy/__init__.py"]
#
# Dependency compatibility notes:
# * xgboost requires Python >=3.7,<3.11
# * pygraphviz requires Python >=3.8
# * networkx requires Python >= 3.8
# * llvmlite requires Python >=3.6,<3.11
#
[tool.poetry.dependencies]
python = ">=3.8,<3.10"
scipy = "^1.8.1"
statsmodels = "^0.13.2"
numpy = "^1.23.1"
pandas = "^1.4.3"
networkx = "^2.8.5"
sympy = "^1.10.1"
scikit-learn = "1.0.2"
pydot = "^1.4.2"
joblib = "^1.1.0"
# causalml is wired to use llvmlite 0.36
causalml = {version = "^0.12.3", optional = true}
llvmlite = {version = "^0.36.0", optional = true}
pygraphviz = {version = "^1.9", optional = true}
matplotlib = {version = "^3.5.2", optional = true}
econml = {version = "*", optional = true}
tqdm = "^4.64.0"
#
# Dependencies for Documentation Generation
#
rpy2 = {version = "^3.5.2", optional = true}
sphinx-multiversion = { git = "https://github.com/petergtz/sphinx-multiversion.git", branch = "override-version-with-refname", optional = true}
sphinxcontrib-googleanalytics = { git = "https://github.com/petergtz/googleanalytics.git", branch = "master", optional = true}
nbsphinx = {version = "^0.8.9", optional = true}
sphinx-rtd-theme = {version = "^1.0.0", optional = true}
pydata-sphinx-theme = {version = "^0.9.0", optional = true}
pandoc = {version = "^2.2", optional = true}
ipykernel = {version = "^6.15.1", optional = true}
[tool.poetry.extras]
plotting = ["matplotlib"]
causalml = ["causalml", "llvmlite"]
docs = [
"rpy2",
"sphinx-multiversion",
"sphinxcontrib-googleanalytics",
"nbsphinx",
"sphinx-rtd-theme",
"pydata-sphinx-theme",
"sympy",
"ipykernel"
]
[tool.poetry.dev-dependencies]
poethepoet = "^0.16.0"
flake8 = "^4.0.1"
black = "^22.6.0"
isort = "^5.10.1"
pytest = "^7.1.2"
twine = "^4.0.1"
nbformat = "^5.4.0"
jupyter = "^1.0.0"
flaky = "^3.7.0"
tensorflow = "^2.9.1"
keras = "^2.9.0"
xgboost = "^1.6.1"
mypy = "^0.971"
econml = "*"
pygraphviz = "^1.9"
[build-system]
requires = ["poetry-core>=1.0.0", "poetry-dynamic-versioning"]
build-backend = "poetry.core.masonry.api"
[tool.pytest.ini_options]
markers = [
"advanced: not be to run each time. only on package updates."
]
[tool.poe.tasks]
# stop the build if there are Python syntax errors or undefined names
_flake8Errors = "flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics"
_flake8Warnings = "flake8 . --count --exit-zero --statistics"
_black = 'black .'
_isort = 'isort .'
# testing tasks
test = "pytest -v -m 'not advanced' --durations=0 --durations-min=60.0"
test_focused = "pytest -v -m 'focused'"
verifyPackage = "twine check dist/*"
# TODO: replace with `poetry publish`
uploadPackage = "twine upload dist/*"
# TODO: add formatting to build process
[tool.poe.tasks.format]
sequence = ['_black', '_isort']
ignore_fail = 'return_non_zero'
[tool.poe.tasks.lint]
sequence = ['_flake8Errors', '_flake8Warnings']
ignore_fail = 'return_non_zero'
[tool.poe.tasks.verify]
sequence = ['lint', 'test']
ignore_fail = "return_non_zero" | darthtrevino | eba2c1cc8217495797f9d5ed34cffe7ed457fe40 | 2b4832a91e7ab54d31b116d19958fddecc2c8510 | yes, causalml should be optional too. | amit-sharma | 352 |
py-why/dowhy | 584 | Use Poetry for Managing Dependencies, Tasks, and Package Builds | This PR includes the [Poetry](https://python-poetry.org/) tool for managing dependencies and packaging the library. This reduces a lot of boilerplate involved in defining and executing sequences of common tasks and with packaging & publishing the library.
* Use pytest-xdist; other test updates will be deferred for another PR
* Add poetry tasks for lint, build, verifyPackage, etc. The CI uses these tasks instead of invoking libraries like flake8 and pytest individually. This is useful for local development, and will let us chain tools together easily.
* Remove pythonic build plumbing (setup.cfg, setup.py, manifest.in) - as Poetry takes over packaging.
* Remove all requirements.txt files, as Poetry takes over dependency management.
* Remove versioneer; poetry-dynamic-versioning will apply version substitution
Signed-off-by: Chris Trevino <[email protected]> | null | 2022-07-29 20:10:26+00:00 | 2022-08-13 01:12:53+00:00 | pyproject.toml | [tool.pytest.ini_options]
markers = [
"advanced: not be to run each time. only on package updates."
]
| [tool.poetry]
name = "dowhy"
#
# 0.0.0 is standard placeholder for poetry-dynamic-versioning
# any changes to this should not be checked in
#
version = "0.0.0"
description = "DoWhy is a Python library for causal inference that supports explicit modeling and testing of causal assumptions"
authors = [
"PyWhy Community <[email protected]>"
]
maintainers = []
license = "MIT"
documentation = "https://py-why.github.io/dowhy"
repository = "https://github.com/pywhy/dowhy"
classifiers = [
'Development Status :: 4 - Beta',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
]
keywords = [
'causality',
'machine-learning',
'causal-inference',
'statistics',
'graphical-model'
]
include = ['docs', 'tests', 'CONTRIBUTING.md', 'LICENSE']
readme = 'README.rst'
[tool.poetry-dynamic-versioning]
enable = true
vcs = "git"
[tool.poetry-dynamic-versioning.substitution]
files = ["dowhy/__init__.py"]
#
# Dependency compatibility notes:
# * xgboost requires Python >=3.7,<3.11
# * pygraphviz requires Python >=3.8
# * networkx requires Python >= 3.8
# * llvmlite requires Python >=3.6,<3.11
#
[tool.poetry.dependencies]
python = ">=3.8,<3.10"
scipy = "^1.8.1"
statsmodels = "^0.13.2"
numpy = "^1.23.1"
pandas = "^1.4.3"
networkx = "^2.8.5"
sympy = "^1.10.1"
scikit-learn = "1.0.2"
pydot = "^1.4.2"
joblib = "^1.1.0"
# causalml is wired to use llvmlite 0.36
causalml = {version = "^0.12.3", optional = true}
llvmlite = {version = "^0.36.0", optional = true}
pygraphviz = {version = "^1.9", optional = true}
matplotlib = {version = "^3.5.2", optional = true}
econml = {version = "*", optional = true}
tqdm = "^4.64.0"
#
# Dependencies for Documentation Generation
#
rpy2 = {version = "^3.5.2", optional = true}
sphinx-multiversion = { git = "https://github.com/petergtz/sphinx-multiversion.git", branch = "override-version-with-refname", optional = true}
sphinxcontrib-googleanalytics = { git = "https://github.com/petergtz/googleanalytics.git", branch = "master", optional = true}
nbsphinx = {version = "^0.8.9", optional = true}
sphinx-rtd-theme = {version = "^1.0.0", optional = true}
pydata-sphinx-theme = {version = "^0.9.0", optional = true}
pandoc = {version = "^2.2", optional = true}
ipykernel = {version = "^6.15.1", optional = true}
[tool.poetry.extras]
plotting = ["matplotlib"]
causalml = ["causalml", "llvmlite"]
docs = [
"rpy2",
"sphinx-multiversion",
"sphinxcontrib-googleanalytics",
"nbsphinx",
"sphinx-rtd-theme",
"pydata-sphinx-theme",
"sympy",
"ipykernel"
]
[tool.poetry.dev-dependencies]
poethepoet = "^0.16.0"
flake8 = "^4.0.1"
black = "^22.6.0"
isort = "^5.10.1"
pytest = "^7.1.2"
twine = "^4.0.1"
nbformat = "^5.4.0"
jupyter = "^1.0.0"
flaky = "^3.7.0"
tensorflow = "^2.9.1"
keras = "^2.9.0"
xgboost = "^1.6.1"
mypy = "^0.971"
econml = "*"
pygraphviz = "^1.9"
[build-system]
requires = ["poetry-core>=1.0.0", "poetry-dynamic-versioning"]
build-backend = "poetry.core.masonry.api"
[tool.pytest.ini_options]
markers = [
"advanced: not be to run each time. only on package updates."
]
[tool.poe.tasks]
# stop the build if there are Python syntax errors or undefined names
_flake8Errors = "flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics"
_flake8Warnings = "flake8 . --count --exit-zero --statistics"
_black = 'black .'
_isort = 'isort .'
# testing tasks
test = "pytest -v -m 'not advanced' --durations=0 --durations-min=60.0"
test_focused = "pytest -v -m 'focused'"
verifyPackage = "twine check dist/*"
# TODO: replace with `poetry publish`
uploadPackage = "twine upload dist/*"
# TODO: add formatting to build process
[tool.poe.tasks.format]
sequence = ['_black', '_isort']
ignore_fail = 'return_non_zero'
[tool.poe.tasks.lint]
sequence = ['_flake8Errors', '_flake8Warnings']
ignore_fail = 'return_non_zero'
[tool.poe.tasks.verify]
sequence = ['lint', 'test']
ignore_fail = "return_non_zero" | darthtrevino | eba2c1cc8217495797f9d5ed34cffe7ed457fe40 | 2b4832a91e7ab54d31b116d19958fddecc2c8510 | haha, I also don't remember now why it was there. I guess we can safely remove it. Does not appear to do anything critical. | amit-sharma | 353 |
py-why/dowhy | 582 | CI: Update GH Action Trigger | The current trigger fired on 'master', which has been deprecated. | null | 2022-07-28 20:18:53+00:00 | 2022-07-29 09:18:25+00:00 | .github/workflows/python-package.yml | # This workflow will install Python dependencies, run tests and lint with a variety of Python versions
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions
name: Python package
on:
push:
branches: [ master ]
pull_request:
branches: [ master ]
jobs:
build:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: [3.7, 3.8, 3.9]
steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Install graphviz
run: |
sudo apt install graphviz libgraphviz-dev graphviz-dev pkg-config
- name: Install dependencies
run: |
python -m pip install --upgrade pip
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
if [ -f requirements-plotting.txt ]; then pip install -r requirements-plotting.txt; fi
pip install 'scikit-learn==1.0.2' --force-reinstall
if [ -f requirements-test.txt ]; then pip install -r requirements-test.txt --use-deprecated=legacy-resolver; fi
pip install 'numpy<=1.21' --force-reinstall
- name: Lint with flake8
run: |
# stop the build if there are Python syntax errors or undefined names
flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
# exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
- name: Test with pytest
run: |
pytest -v -m "not advanced"
- name: Check package consistency with twine
run: |
python setup.py check sdist bdist_wheel
twine check dist/*
| # This workflow will install Python dependencies, run tests and lint with a variety of Python versions
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions
name: DoWhy CI
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
build:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: [3.7, 3.8, 3.9]
steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Install graphviz
run: |
sudo apt install graphviz libgraphviz-dev graphviz-dev pkg-config
- name: Install dependencies
run: |
python -m pip install --upgrade pip
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
if [ -f requirements-plotting.txt ]; then pip install -r requirements-plotting.txt; fi
pip install 'scikit-learn==1.0.2' --force-reinstall
if [ -f requirements-test.txt ]; then pip install -r requirements-test.txt --use-deprecated=legacy-resolver; fi
pip install 'numpy<=1.21' --force-reinstall
- name: Lint with flake8
run: |
# stop the build if there are Python syntax errors or undefined names
flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
# exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
- name: Test with pytest
run: |
pytest -v -m "not advanced"
- name: Check package consistency with twine
run: |
python setup.py check sdist bdist_wheel
twine check dist/*
| darthtrevino | 96fa21848c4edcaa5e8c57a5d4c5efcbe39c408d | 0464798b03b6d19e81b8db71d4451ab30d20eac9 | You've removed the restriction to a specific branch (now `main`). What's the idea behind it? Do you think that's more appropriate? Asking, because I'm not sure we'd want to run this on every push to an arbitrary branch. | petergtz | 354 |
py-why/dowhy | 582 | CI: Update GH Action Trigger | The current trigger fired on 'master', which has been deprecated. | null | 2022-07-28 20:18:53+00:00 | 2022-07-29 09:18:25+00:00 | .github/workflows/python-package.yml | # This workflow will install Python dependencies, run tests and lint with a variety of Python versions
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions
name: Python package
on:
push:
branches: [ master ]
pull_request:
branches: [ master ]
jobs:
build:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: [3.7, 3.8, 3.9]
steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Install graphviz
run: |
sudo apt install graphviz libgraphviz-dev graphviz-dev pkg-config
- name: Install dependencies
run: |
python -m pip install --upgrade pip
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
if [ -f requirements-plotting.txt ]; then pip install -r requirements-plotting.txt; fi
pip install 'scikit-learn==1.0.2' --force-reinstall
if [ -f requirements-test.txt ]; then pip install -r requirements-test.txt --use-deprecated=legacy-resolver; fi
pip install 'numpy<=1.21' --force-reinstall
- name: Lint with flake8
run: |
# stop the build if there are Python syntax errors or undefined names
flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
# exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
- name: Test with pytest
run: |
pytest -v -m "not advanced"
- name: Check package consistency with twine
run: |
python setup.py check sdist bdist_wheel
twine check dist/*
| # This workflow will install Python dependencies, run tests and lint with a variety of Python versions
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions
name: DoWhy CI
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
build:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: [3.7, 3.8, 3.9]
steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Install graphviz
run: |
sudo apt install graphviz libgraphviz-dev graphviz-dev pkg-config
- name: Install dependencies
run: |
python -m pip install --upgrade pip
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
if [ -f requirements-plotting.txt ]; then pip install -r requirements-plotting.txt; fi
pip install 'scikit-learn==1.0.2' --force-reinstall
if [ -f requirements-test.txt ]; then pip install -r requirements-test.txt --use-deprecated=legacy-resolver; fi
pip install 'numpy<=1.21' --force-reinstall
- name: Lint with flake8
run: |
# stop the build if there are Python syntax errors or undefined names
flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
# exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
- name: Test with pytest
run: |
pytest -v -m "not advanced"
- name: Check package consistency with twine
run: |
python setup.py check sdist bdist_wheel
twine check dist/*
| darthtrevino | 96fa21848c4edcaa5e8c57a5d4c5efcbe39c408d | 0464798b03b6d19e81b8db71d4451ab30d20eac9 | That's fair - we use this terser form on some other open-source projects but I don't mind restoring the branch selector. | darthtrevino | 355 |
py-why/dowhy | 568 | Introduce PyData Sphinx theme and restructure docs accordingly | The PyData theme is being used by NumPy, Pandas, NetworkX, and others. I've re-arranged some sections to make it play nicely with the theme. The result can be seen here:
https://petergtz.github.io/dowhy/master
There is one open question, where I'd be curious for input: older versions of the documentation will now get messed up, e.g. see https://petergtz.github.io/dowhy/v0.7.1/. This is because they were structured for the RTD theme and it doesn't play well. Ideally, old documentation would keep the old theme, but there's no out-of-the-box solution using sphinx-multiversion. Curious if someone has ideas. | null | 2022-07-21 07:48:55+00:00 | 2022-08-03 12:56:58+00:00 | docs/source/conf.py | # -*- coding: utf-8 -*-
#
# Configuration file for the Sphinx documentation builder.
#
# This file does only contain a selection of the most common options. For a
# full list see the documentation:
# http://www.sphinx-doc.org/en/stable/config
# -- Path setup --------------------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
import sys
sys.path.insert(0, os.path.abspath("../../"))
# -- Project information -----------------------------------------------------
project = 'DoWhy'
copyright = '2022, PyWhy contributors'
author = 'PyWhy community'
# -- General configuration ---------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.viewcode',
'sphinx.ext.todo',
'nbsphinx',
'sphinx_rtd_theme',
"sphinx_multiversion",
'sphinxcontrib.googleanalytics',
]
googleanalytics_id = 'G-B139P18WHM'
autodoc_mock_imports = ['matplotlib', 'causalml', 'pymc3', 'econml']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = 'en'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path .
exclude_patterns = [
'_build', 'Thumbs.db', '.DS_Store', '.ipynb_checkpoints',
'example_notebooks/dowhy_ranking_methods.ipynb',
'example_notebooks/dowhy_optimize_backdoor_example.ipynb', # need to check why str_to_dot fails here
'example_notebooks/dowhy_causal_discovery_example.ipynb', # need to check why str_to_dot fails here
'example_notebooks/dowhy_twins_example.ipynb',
]
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = "pydata_sphinx_theme"
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#
html_theme_options = {
"navbar_end": ["navbar-icon-links", "versions"]
}
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Custom sidebar templates, must be a dictionary that maps document names
# to template names.
#
# The default sidebars (for documents that don't match any pattern) are
# defined by theme itself. Builtin themes are using these templates by
# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
# 'searchbox.html']``.
#
# html_sidebars = {}
# -- Options for HTMLHelp output ---------------------------------------------
# Output file base name for HTML help builder.
htmlhelp_basename = 'dowhydoc'
# -- Options for LaTeX output ------------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#
# 'preamble': '',
# Latex figure (float) alignment
#
# 'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'dowhy.tex', 'dowhy Documentation',
'Author', 'manual'),
]
# -- Options for manual page output ------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'dowhy', 'dowhy Documentation',
[author], 1)
]
# -- Options for Texinfo output ----------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'dowhy', 'dowhy Documentation',
author, 'dowhy', 'One line description of project.',
'Miscellaneous'),
]
# -- Options for Epub output -------------------------------------------------
# Bibliographic Dublin Core info.
epub_title = project
epub_author = author
epub_publisher = author
epub_copyright = copyright
# The unique identifier of the text. This can be a ISBN number
# or the project homepage.
#
# epub_identifier = ''
# A unique identification for the text.
#
# epub_uid = ''
# A list of files that should not be packed into the epub file.
epub_exclude_files = ['search.html']
# -- Extension configuration -------------------------------------------------
# -- Options for todo extension ----------------------------------------------
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = True
# init docstrings should also be included in class
autoclass_content = "both"
smv_tag_whitelist = r'^v\d*\.(9|\d{2,})(\..*)?$'
smv_branch_whitelist = "main"
smv_released_pattern = r"refs/tags/v.*"
| # -*- coding: utf-8 -*-
#
# Configuration file for the Sphinx documentation builder.
#
# This file does only contain a selection of the most common options. For a
# full list see the documentation:
# http://www.sphinx-doc.org/en/stable/config
# -- Path setup --------------------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
import sys
sys.path.insert(0, os.path.abspath("../../"))
# -- Project information -----------------------------------------------------
project = 'DoWhy'
copyright = '2022, PyWhy contributors'
author = 'PyWhy community'
# -- General configuration ---------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.viewcode',
'sphinx.ext.todo',
'nbsphinx',
'sphinx_rtd_theme',
"sphinx_multiversion",
'sphinxcontrib.googleanalytics',
]
googleanalytics_id = 'G-B139P18WHM'
autodoc_mock_imports = ['matplotlib', 'causalml', 'pymc3', 'econml']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = 'en'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path .
exclude_patterns = [
'_build', 'Thumbs.db', '.DS_Store', '.ipynb_checkpoints',
'example_notebooks/dowhy_ranking_methods.ipynb',
'example_notebooks/dowhy_optimize_backdoor_example.ipynb', # need to check why str_to_dot fails here
'example_notebooks/dowhy_causal_discovery_example.ipynb', # need to check why str_to_dot fails here
'example_notebooks/dowhy_twins_example.ipynb',
]
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = "pydata_sphinx_theme"
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#
html_theme_options = {
"navbar_end": ["navbar-icon-links", "versions"]
}
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Custom sidebar templates, must be a dictionary that maps document names
# to template names.
#
# The default sidebars (for documents that don't match any pattern) are
# defined by theme itself. Builtin themes are using these templates by
# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
# 'searchbox.html']``.
#
# html_sidebars = {}
# -- Options for HTMLHelp output ---------------------------------------------
# Output file base name for HTML help builder.
htmlhelp_basename = 'dowhydoc'
# -- Options for LaTeX output ------------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#
# 'preamble': '',
# Latex figure (float) alignment
#
# 'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'dowhy.tex', 'dowhy Documentation',
'Author', 'manual'),
]
# -- Options for manual page output ------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'dowhy', 'dowhy Documentation',
[author], 1)
]
# -- Options for Texinfo output ----------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'dowhy', 'dowhy Documentation',
author, 'dowhy', 'One line description of project.',
'Miscellaneous'),
]
# -- Options for Epub output -------------------------------------------------
# Bibliographic Dublin Core info.
epub_title = project
epub_author = author
epub_publisher = author
epub_copyright = copyright
# The unique identifier of the text. This can be a ISBN number
# or the project homepage.
#
# epub_identifier = ''
# A unique identification for the text.
#
# epub_uid = ''
# A list of files that should not be packed into the epub file.
epub_exclude_files = ['search.html']
# -- Extension configuration -------------------------------------------------
# -- Options for todo extension ----------------------------------------------
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = True
# init docstrings should also be included in class
autoclass_content = "both"
smv_tag_whitelist = r'^v\d*\.(9|\d{2,})(\..*)?$'
smv_branch_whitelist = "main"
smv_released_pattern = r"refs/tags/v.*"
| petergtz | 3db32b638414aea85ee3518afbc92b655db54a95 | 77f7064c85b3de740b5beefb0ad7067ffdb024fd | will be good to remove/comment this out. | amit-sharma | 356 |
py-why/dowhy | 568 | Introduce PyData Sphinx theme and restructure docs accordingly | The PyData theme is being used by NumPy, Pandas, NetworkX, and others. I've re-arranged some sections to make it play nicely with the theme. The result can be seen here:
https://petergtz.github.io/dowhy/master
There is one open question, where I'd be curious for input: older versions of the documentation will now get messed up, e.g. see https://petergtz.github.io/dowhy/v0.7.1/. This is because they were structured for the RTD theme and it doesn't play well. Ideally, old documentation would keep the old theme, but there's no out-of-the-box solution using sphinx-multiversion. Curious if someone has ideas. | null | 2022-07-21 07:48:55+00:00 | 2022-08-03 12:56:58+00:00 | docs/source/conf.py | # -*- coding: utf-8 -*-
#
# Configuration file for the Sphinx documentation builder.
#
# This file does only contain a selection of the most common options. For a
# full list see the documentation:
# http://www.sphinx-doc.org/en/stable/config
# -- Path setup --------------------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
import sys
sys.path.insert(0, os.path.abspath("../../"))
# -- Project information -----------------------------------------------------
project = 'DoWhy'
copyright = '2022, PyWhy contributors'
author = 'PyWhy community'
# -- General configuration ---------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.viewcode',
'sphinx.ext.todo',
'nbsphinx',
'sphinx_rtd_theme',
"sphinx_multiversion",
'sphinxcontrib.googleanalytics',
]
googleanalytics_id = 'G-B139P18WHM'
autodoc_mock_imports = ['matplotlib', 'causalml', 'pymc3', 'econml']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = 'en'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path .
exclude_patterns = [
'_build', 'Thumbs.db', '.DS_Store', '.ipynb_checkpoints',
'example_notebooks/dowhy_ranking_methods.ipynb',
'example_notebooks/dowhy_optimize_backdoor_example.ipynb', # need to check why str_to_dot fails here
'example_notebooks/dowhy_causal_discovery_example.ipynb', # need to check why str_to_dot fails here
'example_notebooks/dowhy_twins_example.ipynb',
]
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = "pydata_sphinx_theme"
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#
html_theme_options = {
"navbar_end": ["navbar-icon-links", "versions"]
}
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Custom sidebar templates, must be a dictionary that maps document names
# to template names.
#
# The default sidebars (for documents that don't match any pattern) are
# defined by theme itself. Builtin themes are using these templates by
# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
# 'searchbox.html']``.
#
# html_sidebars = {}
# -- Options for HTMLHelp output ---------------------------------------------
# Output file base name for HTML help builder.
htmlhelp_basename = 'dowhydoc'
# -- Options for LaTeX output ------------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#
# 'preamble': '',
# Latex figure (float) alignment
#
# 'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'dowhy.tex', 'dowhy Documentation',
'Author', 'manual'),
]
# -- Options for manual page output ------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'dowhy', 'dowhy Documentation',
[author], 1)
]
# -- Options for Texinfo output ----------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'dowhy', 'dowhy Documentation',
author, 'dowhy', 'One line description of project.',
'Miscellaneous'),
]
# -- Options for Epub output -------------------------------------------------
# Bibliographic Dublin Core info.
epub_title = project
epub_author = author
epub_publisher = author
epub_copyright = copyright
# The unique identifier of the text. This can be a ISBN number
# or the project homepage.
#
# epub_identifier = ''
# A unique identification for the text.
#
# epub_uid = ''
# A list of files that should not be packed into the epub file.
epub_exclude_files = ['search.html']
# -- Extension configuration -------------------------------------------------
# -- Options for todo extension ----------------------------------------------
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = True
# init docstrings should also be included in class
autoclass_content = "both"
smv_tag_whitelist = r'^v\d*\.(9|\d{2,})(\..*)?$'
smv_branch_whitelist = "main"
smv_released_pattern = r"refs/tags/v.*"
| # -*- coding: utf-8 -*-
#
# Configuration file for the Sphinx documentation builder.
#
# This file does only contain a selection of the most common options. For a
# full list see the documentation:
# http://www.sphinx-doc.org/en/stable/config
# -- Path setup --------------------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
import sys
sys.path.insert(0, os.path.abspath("../../"))
# -- Project information -----------------------------------------------------
project = 'DoWhy'
copyright = '2022, PyWhy contributors'
author = 'PyWhy community'
# -- General configuration ---------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.viewcode',
'sphinx.ext.todo',
'nbsphinx',
'sphinx_rtd_theme',
"sphinx_multiversion",
'sphinxcontrib.googleanalytics',
]
googleanalytics_id = 'G-B139P18WHM'
autodoc_mock_imports = ['matplotlib', 'causalml', 'pymc3', 'econml']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = 'en'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path .
exclude_patterns = [
'_build', 'Thumbs.db', '.DS_Store', '.ipynb_checkpoints',
'example_notebooks/dowhy_ranking_methods.ipynb',
'example_notebooks/dowhy_optimize_backdoor_example.ipynb', # need to check why str_to_dot fails here
'example_notebooks/dowhy_causal_discovery_example.ipynb', # need to check why str_to_dot fails here
'example_notebooks/dowhy_twins_example.ipynb',
]
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = "pydata_sphinx_theme"
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#
html_theme_options = {
"navbar_end": ["navbar-icon-links", "versions"]
}
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Custom sidebar templates, must be a dictionary that maps document names
# to template names.
#
# The default sidebars (for documents that don't match any pattern) are
# defined by theme itself. Builtin themes are using these templates by
# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
# 'searchbox.html']``.
#
# html_sidebars = {}
# -- Options for HTMLHelp output ---------------------------------------------
# Output file base name for HTML help builder.
htmlhelp_basename = 'dowhydoc'
# -- Options for LaTeX output ------------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#
# 'preamble': '',
# Latex figure (float) alignment
#
# 'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'dowhy.tex', 'dowhy Documentation',
'Author', 'manual'),
]
# -- Options for manual page output ------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'dowhy', 'dowhy Documentation',
[author], 1)
]
# -- Options for Texinfo output ----------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'dowhy', 'dowhy Documentation',
author, 'dowhy', 'One line description of project.',
'Miscellaneous'),
]
# -- Options for Epub output -------------------------------------------------
# Bibliographic Dublin Core info.
epub_title = project
epub_author = author
epub_publisher = author
epub_copyright = copyright
# The unique identifier of the text. This can be a ISBN number
# or the project homepage.
#
# epub_identifier = ''
# A unique identification for the text.
#
# epub_uid = ''
# A list of files that should not be packed into the epub file.
epub_exclude_files = ['search.html']
# -- Extension configuration -------------------------------------------------
# -- Options for todo extension ----------------------------------------------
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = True
# init docstrings should also be included in class
autoclass_content = "both"
smv_tag_whitelist = r'^v\d*\.(9|\d{2,})(\..*)?$'
smv_branch_whitelist = "main"
smv_released_pattern = r"refs/tags/v.*"
| petergtz | 3db32b638414aea85ee3518afbc92b655db54a95 | 77f7064c85b3de740b5beefb0ad7067ffdb024fd | 🤦 Removed | petergtz | 357 |
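One possible direction for the open question above (keeping already-released versions of the documentation on the theme they were structured for) is to make the theme selection in conf.py conditional. The fragment below is only a sketch under stated assumptions: the DOCS_LEGACY_THEME environment variable is a hypothetical name that whatever script drives sphinx-multiversion would have to export when building old tags; it is not part of this PR, and whether already-tagged versions could pick it up depends on how their conf.py is sourced at build time.

# Hypothetical conf.py fragment, not part of the PR: pick the theme per build.
import os

if os.environ.get("DOCS_LEGACY_THEME") == "1":
    # Already-released versions keep the Read the Docs layout they were written for.
    html_theme = "sphinx_rtd_theme"
else:
    # Current builds use the PyData theme introduced by this PR.
    html_theme = "pydata_sphinx_theme"
    html_theme_options = {"navbar_end": ["navbar-icon-links", "versions"]}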
py-why/dowhy | 568 | Introduce PyData Sphinx theme and restructure docs accordingly | The PyData theme is being used by NumPy, Pandas, NetworkX, and others. I've re-arranged some sections to make it play nicely with the theme. The result can be seen here:
https://petergtz.github.io/dowhy/master
There is one open question on which I'd be curious for input: older versions of the documentation will now get messed up, e.g. see https://petergtz.github.io/dowhy/v0.7.1/. This is because they were structured for the RTD theme, and that structure doesn't play well with the new theme. Ideally, old documentation would keep the old theme, but there's no out-of-the-box solution for this using sphinx-multiversion. I'd be curious if someone has ideas. | null | 2022-07-21 07:48:55+00:00 | 2022-08-03 12:56:58+00:00 | docs/source/index.rst | DoWhy documentation
===================
.. toctree::
:maxdepth: 3
:hidden:
:glob:
getting_started/index
User Guide <user_guide/index>
Examples <example_notebooks/nb_index>
dowhy
Contributing <contributing>
code_repo
.. image:: https://raw.githubusercontent.com/py-why/dowhy/master/docs/images/dowhy-schematic.png
Much like machine learning libraries have done for prediction, **"DoWhy" is a Python library that aims to spark
causal thinking and analysis**. DoWhy provides a principled four-step interface for causal inference that focuses on
explicitly modeling causal assumptions and validating them as much as possible. The key feature of DoWhy is its
state-of-the-art refutation API that can automatically test causal assumptions for any estimation method, thus making
inference more robust and accessible to non-experts. DoWhy supports estimation of the average causal effect for
backdoor, frontdoor, instrumental variable and other identification methods, and estimation of the conditional effect
(CATE) through an integration with the EconML library.
Getting started
---------------
New to DoWhy? Our :doc:`getting_started/index` guide will get you up to speed in minutes. Once completed, you'll be
ready to check out our :doc:`example_notebooks/nb_index`, :doc:`user_guide/index`, and other sections.
User Guide
----------
Complete newbie when it comes to causal inference and DoWhy? Then you probably want to read our
comprehensive :doc:`user_guide/index`. It guides you through everything you need to know, including the concepts and
science you need to know when trying to solve non-trivial problems.
Examples
--------
If you prefer to learn by example, we recommend our :doc:`example_notebooks/nb_index`.
API Reference
-------------
The :doc:`dowhy` guide contains a detailed description of the functions, modules, and objects included in DoWhy.
The reference describes how the methods work and which parameters can be used. It assumes that you have an
understanding of the key concepts.
Contributing
------------
Want to add to the codebase or documentation? Check out our :doc:`contributing` guide.
.. include:: cite.rst
| DoWhy documentation
===================
.. toctree::
:maxdepth: 3
:hidden:
:glob:
getting_started/index
User Guide <user_guide/index>
Examples <example_notebooks/nb_index>
dowhy
Contributing <contributing>
code_repo
.. image:: https://raw.githubusercontent.com/py-why/dowhy/master/docs/images/dowhy-schematic.png
Much like machine learning libraries have done for prediction, **"DoWhy" is a Python library that aims to spark
causal thinking and analysis**. DoWhy provides a principled four-step interface for causal inference that focuses on
explicitly modeling causal assumptions and validating them as much as possible. The key feature of DoWhy is its
state-of-the-art refutation API that can automatically test causal assumptions for any estimation method, thus making
inference more robust and accessible to non-experts. DoWhy supports estimation of the average causal effect for
backdoor, frontdoor, instrumental variable and other identification methods, and estimation of the conditional effect
(CATE) through an integration with the EconML library.
Getting started
---------------
New to DoWhy? Our :doc:`getting_started/index` guide will get you up to speed in minutes. Once completed, you'll be
ready to check out our :doc:`example_notebooks/nb_index`, :doc:`user_guide/index`, and other sections.
User Guide
----------
Complete newbie when it comes to causal inference and DoWhy? Then you probably want to read our
comprehensive :doc:`user_guide/index`. It guides you through everything you need to know, including the concepts and
science you need to know when trying to solve non-trivial problems.
Examples
--------
If you prefer to learn by example, we recommend our :doc:`example_notebooks/nb_index`.
API Reference
-------------
The :doc:`dowhy` guide contains a detailed description of the functions, modules, and objects included in DoWhy.
The reference describes how the methods work and which parameters can be used. It assumes that you have an
understanding of the key concepts.
Contributing
------------
Want to add to the codebase or documentation? Check out our :doc:`contributing` guide.
.. include:: cite.rst
| petergtz | 3db32b638414aea85ee3518afbc92b655db54a95 | 77f7064c85b3de740b5beefb0ad7067ffdb024fd | TBD text can be removed? | amit-sharma | 358 |
py-why/dowhy | 568 | Introduce PyData Sphinx theme and restructure docs accordingly | The PyData theme is being used by NumPy, Pandas, NetworkX, and others. I've re-arranged some sections to make it play nicely with the theme. The result can be seen here:
https://petergtz.github.io/dowhy/master
There is one open question on which I'd be curious for input: older versions of the documentation will now get messed up, e.g. see https://petergtz.github.io/dowhy/v0.7.1/. This is because they were structured for the RTD theme, and that structure doesn't play well with the new theme. Ideally, old documentation would keep the old theme, but there's no out-of-the-box solution for this using sphinx-multiversion. I'd be curious if someone has ideas. | null | 2022-07-21 07:48:55+00:00 | 2022-08-03 12:56:58+00:00 | docs/source/index.rst | DoWhy documentation
===================
.. toctree::
:maxdepth: 3
:hidden:
:glob:
getting_started/index
User Guide <user_guide/index>
Examples <example_notebooks/nb_index>
dowhy
Contributing <contributing>
code_repo
.. image:: https://raw.githubusercontent.com/py-why/dowhy/master/docs/images/dowhy-schematic.png
Much like machine learning libraries have done for prediction, **"DoWhy" is a Python library that aims to spark
causal thinking and analysis**. DoWhy provides a principled four-step interface for causal inference that focuses on
explicitly modeling causal assumptions and validating them as much as possible. The key feature of DoWhy is its
state-of-the-art refutation API that can automatically test causal assumptions for any estimation method, thus making
inference more robust and accessible to non-experts. DoWhy supports estimation of the average causal effect for
backdoor, frontdoor, instrumental variable and other identification methods, and estimation of the conditional effect
(CATE) through an integration with the EconML library.
Getting started
---------------
New to DoWhy? Our :doc:`getting_started/index` guide will get you up to speed in minutes. Once completed, you'll be
ready to check out our :doc:`example_notebooks/nb_index`, :doc:`user_guide/index`, and other sections.
User Guide
----------
Complete newbie when it comes to causal inference and DoWhy? Then you probably want to read our
comprehensive :doc:`user_guide/index`. It guides you through everything you need to know, including the concepts and
science you need to know when trying to solve non-trivial problems.
Examples
--------
If you prefer to learn by example, we recommend our :doc:`example_notebooks/nb_index`.
API Reference
-------------
The :doc:`dowhy` guide contains a detailed description of the functions, modules, and objects included in DoWhy.
The reference describes how the methods work and which parameters can be used. It assumes that you have an
understanding of the key concepts.
Contributing
------------
Want to add to the codebase or documentation? Check out our :doc:`contributing` guide.
.. include:: cite.rst
| DoWhy documentation
===================
.. toctree::
:maxdepth: 3
:hidden:
:glob:
getting_started/index
User Guide <user_guide/index>
Examples <example_notebooks/nb_index>
dowhy
Contributing <contributing>
code_repo
.. image:: https://raw.githubusercontent.com/py-why/dowhy/master/docs/images/dowhy-schematic.png
Much like machine learning libraries have done for prediction, **"DoWhy" is a Python library that aims to spark
causal thinking and analysis**. DoWhy provides a principled four-step interface for causal inference that focuses on
explicitly modeling causal assumptions and validating them as much as possible. The key feature of DoWhy is its
state-of-the-art refutation API that can automatically test causal assumptions for any estimation method, thus making
inference more robust and accessible to non-experts. DoWhy supports estimation of the average causal effect for
backdoor, frontdoor, instrumental variable and other identification methods, and estimation of the conditional effect
(CATE) through an integration with the EconML library.
Getting started
---------------
New to DoWhy? Our :doc:`getting_started/index` guide will get you up to speed in minutes. Once completed, you'll be
ready to check out our :doc:`example_notebooks/nb_index`, :doc:`user_guide/index`, and other sections.
User Guide
----------
Complete newbie when it comes to causal inference and DoWhy? Then you probably want to read our
comprehensive :doc:`user_guide/index`. It guides you through everything you need to know, including the concepts and
science you need to know when trying to solve non-trivial problems.
Examples
--------
If you prefer to learn by example, we recommend our :doc:`example_notebooks/nb_index`.
API Reference
-------------
The :doc:`dowhy` guide contains a detailed description of the functions, modules, and objects included in DoWhy.
The reference describes how the methods work and which parameters can be used. It assumes that you have an
understanding of the key concepts.
Contributing
------------
Want to add to the codebase or documentation? Check out our :doc:`contributing` guide.
.. include:: cite.rst
| petergtz | 3db32b638414aea85ee3518afbc92b655db54a95 | 77f7064c85b3de740b5beefb0ad7067ffdb024fd | Well, I planned to put some text around it as in the other sections. | petergtz | 359 |
py-why/dowhy | 568 | Introduce PyData Sphinx theme and restructure docs accordingly | The PyData theme is being used by NumPy, Pandas, NetworkX, and others. I've re-arranged some sections to make it play nicely with the theme. The result can be seen here:
https://petergtz.github.io/dowhy/master
There is one open question on which I'd be curious for input: older versions of the documentation will now get messed up, e.g. see https://petergtz.github.io/dowhy/v0.7.1/. This is because they were structured for the RTD theme, and that structure doesn't play well with the new theme. Ideally, old documentation would keep the old theme, but there's no out-of-the-box solution for this using sphinx-multiversion. I'd be curious if someone has ideas. | null | 2022-07-21 07:48:55+00:00 | 2022-08-03 12:56:58+00:00 | docs/source/index.rst | DoWhy documentation
===================
.. toctree::
:maxdepth: 3
:hidden:
:glob:
getting_started/index
User Guide <user_guide/index>
Examples <example_notebooks/nb_index>
dowhy
Contributing <contributing>
code_repo
.. image:: https://raw.githubusercontent.com/py-why/dowhy/master/docs/images/dowhy-schematic.png
Much like machine learning libraries have done for prediction, **"DoWhy" is a Python library that aims to spark
causal thinking and analysis**. DoWhy provides a principled four-step interface for causal inference that focuses on
explicitly modeling causal assumptions and validating them as much as possible. The key feature of DoWhy is its
state-of-the-art refutation API that can automatically test causal assumptions for any estimation method, thus making
inference more robust and accessible to non-experts. DoWhy supports estimation of the average causal effect for
backdoor, frontdoor, instrumental variable and other identification methods, and estimation of the conditional effect
(CATE) through an integration with the EconML library.
Getting started
---------------
New to DoWhy? Our :doc:`getting_started/index` guide will get you up to speed in minutes. Once completed, you'll be
ready to check out our :doc:`example_notebooks/nb_index`, :doc:`user_guide/index`, and other sections.
User Guide
----------
Complete newbie when it comes to causal inference and DoWhy? Then you probably want to read our
comprehensive :doc:`user_guide/index`. It guides you through everything you need to know, including the concepts and
science you need to know when trying to solve non-trivial problems.
Examples
--------
If you prefer to learn by example, we recommend our :doc:`example_notebooks/nb_index`.
API Reference
-------------
The :doc:`dowhy` guide contains a detailed description of the functions, modules, and objects included in DoWhy.
The reference describes how the methods work and which parameters can be used. It assumes that you have an
understanding of the key concepts.
Contributing
------------
Want to add to the codebase or documentation? Check out our :doc:`contributing` guide.
.. include:: cite.rst
| DoWhy documentation
===================
.. toctree::
:maxdepth: 3
:hidden:
:glob:
getting_started/index
User Guide <user_guide/index>
Examples <example_notebooks/nb_index>
dowhy
Contributing <contributing>
code_repo
.. image:: https://raw.githubusercontent.com/py-why/dowhy/master/docs/images/dowhy-schematic.png
Much like machine learning libraries have done for prediction, **"DoWhy" is a Python library that aims to spark
causal thinking and analysis**. DoWhy provides a principled four-step interface for causal inference that focuses on
explicitly modeling causal assumptions and validating them as much as possible. The key feature of DoWhy is its
state-of-the-art refutation API that can automatically test causal assumptions for any estimation method, thus making
inference more robust and accessible to non-experts. DoWhy supports estimation of the average causal effect for
backdoor, frontdoor, instrumental variable and other identification methods, and estimation of the conditional effect
(CATE) through an integration with the EconML library.
Getting started
---------------
New to DoWhy? Our :doc:`getting_started/index` guide will get you up to speed in minutes. Once completed, you'll be
ready to check out our :doc:`example_notebooks/nb_index`, :doc:`user_guide/index`, and other sections.
User Guide
----------
Complete newbie when it comes to causal inference and DoWhy? Then you probably want to read our
comprehensive :doc:`user_guide/index`. It guides you through everything you need to know, including the concepts and
science you need to know when trying to solve non-trivial problems.
Examples
--------
If you prefer to learn by example, we recommend our :doc:`example_notebooks/nb_index`.
API Reference
-------------
The :doc:`dowhy` guide contains a detailed description of the functions, modules, and objects included in DoWhy.
The reference describes how the methods work and which parameters can be used. It assumes that you have an
understanding of the key concepts.
Contributing
------------
Want to add to the codebase or documentation? Check out our :doc:`contributing` guide.
.. include:: cite.rst
| petergtz | 3db32b638414aea85ee3518afbc92b655db54a95 | 77f7064c85b3de740b5beefb0ad7067ffdb024fd | Added a little text around it. | petergtz | 360 |
py-why/dowhy | 567 | fixed warnings and progress bar improvements | Fixed the warnings arising in propensity score estimators and added optional progress bars for refuters
Signed-off-by: Amey Varhade <[email protected]> | null | 2022-07-20 18:03:07+00:00 | 2022-08-19 03:36:42+00:00 | docs/source/example_notebooks/dowhy_simple_example.ipynb | {
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Getting started with DoWhy: A simple example\n",
"This is a quick introduction to the DoWhy causal inference library.\n",
"We will load in a sample dataset and estimate the causal effect of a (pre-specified) treatment variable on a (pre-specified) outcome variable.\n",
"\n",
"First, let us load all required packages."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"import pandas as pd\n",
"\n",
"from dowhy import CausalModel\n",
"import dowhy.datasets \n",
"\n",
"# Avoid printing dataconversion warnings from sklearn and numpy\n",
"import warnings\n",
"from sklearn.exceptions import DataConversionWarning\n",
"warnings.filterwarnings(action='ignore', category=DataConversionWarning)\n",
"warnings.filterwarnings(action='ignore', category=FutureWarning)\n",
"\n",
"# Config dict to set the logging level\n",
"import logging\n",
"import logging.config\n",
"DEFAULT_LOGGING = {\n",
" 'version': 1,\n",
" 'disable_existing_loggers': False,\n",
" 'loggers': {\n",
" '': {\n",
" 'level': 'WARN',\n",
" },\n",
" }\n",
"}\n",
"\n",
"logging.config.dictConfig(DEFAULT_LOGGING)\n",
"logging.info(\"Getting started with DoWhy. Running notebook...\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, let us load a dataset. For simplicity, we simulate a dataset with linear relationships between common causes and treatment, and common causes and outcome. \n",
"\n",
"Beta is the true causal effect. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"data = dowhy.datasets.linear_dataset(beta=10,\n",
" num_common_causes=5,\n",
" num_instruments = 2,\n",
" num_effect_modifiers=1,\n",
" num_samples=5000, \n",
" treatment_is_binary=True,\n",
" stddev_treatment_noise=10,\n",
" num_discrete_common_causes=1)\n",
"df = data[\"df\"]\n",
"print(df.head())\n",
"print(data[\"dot_graph\"])\n",
"print(\"\\n\")\n",
"print(data[\"gml_graph\"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that we are using a pandas dataframe to load the data. At present, DoWhy only supports pandas dataframe as input."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Interface 1 (recommended): Input causal graph"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We now input a causal graph in the GML graph format (recommended). You can also use the DOT format.\n",
"\n",
"To create the causal graph for your dataset, you can use a tool like [DAGitty](http://dagitty.net/dags.html#) that provides a GUI to construct the graph. You can export the graph string that it generates. The graph string is very close to the DOT format: just rename `dag` to `digraph`, remove newlines and add a semicolon after every line, to convert it to the DOT format and input to DoWhy. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# With graph\n",
"model=CausalModel(\n",
" data = df,\n",
" treatment=data[\"treatment_name\"],\n",
" outcome=data[\"outcome_name\"],\n",
" graph=data[\"gml_graph\"]\n",
" )"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model.view_model()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"from IPython.display import Image, display\n",
"display(Image(filename=\"causal_model.png\"))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The above causal graph shows the assumptions encoded in the causal model. We can now use this graph to first identify \n",
"the causal effect (go from a causal estimand to a probability expression), and then estimate the causal effect."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### DoWhy philosophy: Keep identification and estimation separate\n",
"\n",
"Identification can be achieved without access to the data, acccesing only the graph. This results in an expression to be computed. This expression can then be evaluated using the available data in the estimation step.\n",
"It is important to understand that these are orthogonal steps.\n",
"\n",
"#### Identification"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"identified_estimand = model.identify_effect(proceed_when_unidentifiable=True)\n",
"print(identified_estimand)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note the parameter flag *proceed\\_when\\_unidentifiable*. It needs to be set to *True* to convey the assumption that we are ignoring any unobserved confounding. The default behavior is to prompt the user to double-check that the unobserved confounders can be ignored. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Estimation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"causal_estimate = model.estimate_effect(identified_estimand,\n",
" method_name=\"backdoor.propensity_score_stratification\")\n",
"print(causal_estimate)\n",
"print(\"Causal Estimate is \" + str(causal_estimate.value))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can input additional parameters to the estimate_effect method. For instance, to estimate the effect on any subset of the units, you can specify the \"target_units\" parameter which can be a string (\"ate\", \"att\", or \"atc\"), lambda function that filters rows of the data frame, or a new dataframe on which to compute the effect. You can also specify \"effect modifiers\" to estimate heterogeneous effects across these variables. See `help(CausalModel.estimate_effect)`. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Causal effect on the control group (ATC)\n",
"causal_estimate_att = model.estimate_effect(identified_estimand,\n",
" method_name=\"backdoor.propensity_score_stratification\",\n",
" target_units = \"atc\")\n",
"print(causal_estimate_att)\n",
"print(\"Causal Estimate is \" + str(causal_estimate_att.value))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Interface 2: Specify common causes and instruments"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"# Without graph \n",
"model= CausalModel( \n",
" data=df, \n",
" treatment=data[\"treatment_name\"], \n",
" outcome=data[\"outcome_name\"], \n",
" common_causes=data[\"common_causes_names\"],\n",
" effect_modifiers=data[\"effect_modifier_names\"]) "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model.view_model()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from IPython.display import Image, display\n",
"display(Image(filename=\"causal_model.png\"))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We get the same causal graph. Now identification and estimation is done as before.\n",
"\n",
"#### Identification"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Estimation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"estimate = model.estimate_effect(identified_estimand,\n",
" method_name=\"backdoor.propensity_score_stratification\") \n",
"print(estimate)\n",
"print(\"Causal Estimate is \" + str(estimate.value))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Refuting the estimate\n",
"\n",
"Let us now look at ways of refuting the estimate obtained. Refutation methods provide tests that every correct estimator should pass. So if an estimator fails the refutation test (p-value is <0.05), then it means that there is some problem with the estimator. \n",
"\n",
"Note that we cannot verify that the estimate is correct, but we can reject it if it violates certain expected behavior (this is analogous to scientific theories that can be falsified but not proven true). The below refutation tests are based on either \n",
" 1) **Invariant transformations**: changes in the data that should not change the estimate. Any estimator whose result varies significantly between the original data and the modified data fails the test; \n",
" \n",
" a) Random Common Cause\n",
" \n",
" b) Data Subset\n",
" \n",
" \n",
" 2) **Nullifying transformations**: after the data change, the causal true estimate is zero. Any estimator whose result varies significantly from zero on the new data fails the test.\n",
" \n",
" a) Placebo Treatment"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Adding a random common cause variable"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res_random=model.refute_estimate(identified_estimand, estimate, method_name=\"random_common_cause\")\n",
"print(res_random)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Replacing treatment with a random (placebo) variable"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res_placebo=model.refute_estimate(identified_estimand, estimate,\n",
" method_name=\"placebo_treatment_refuter\", placebo_type=\"permute\")\n",
"print(res_placebo)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Removing a random subset of the data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res_subset=model.refute_estimate(identified_estimand, estimate,\n",
" method_name=\"data_subset_refuter\", subset_fraction=0.9)\n",
"print(res_subset)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As you can see, the propensity score stratification estimator is reasonably robust to refutations.\n",
"\n",
"**Reproducability**: For reproducibility, you can add a parameter \"random_seed\" to any refutation method, as shown below.\n",
"\n",
"**Parallelization**: You can also use built-in parallelization to speed up the refutation process. Simply set `n_jobs` to a value greater than 1 to spread the workload to multiple CPUs, or set `n_jobs=-1` to use all CPUs. Currently, this is available only for `random_common_cause`, `placebo_treatment_refuter`, and `data_subset_refuter`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res_subset=model.refute_estimate(identified_estimand, estimate,\n",
" method_name=\"data_subset_refuter\", subset_fraction=0.9, random_seed = 1, n_jobs=-1, verbose=10)\n",
"print(res_subset)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Adding an unobserved common cause variable\n",
"\n",
"This refutation does not return a p-value. Instead, it provides a _sensitivity_ test on how quickly the estimate changes if the identifying assumptions (used in `identify_effect`) are not valid. Specifically, it checks sensitivity to violation of the backdoor assumption: that all common causes are observed. \n",
"\n",
"To do so, it creates a new dataset with an additional common cause between treatment and outcome. To capture the effect of the common cause, the method takes as input the strength of common cause's effect on treatment and outcome. Based on these inputs on the common cause's effects, it changes the treatment and outcome values and then reruns the estimator. The hope is that the new estimate does not change drastically with a small effect of the unobserved common cause, indicating a robustness to any unobserved confounding.\n",
"\n",
"Another equivalent way of interpreting this procedure is to assume that there was already unobserved confounding present in the input data. The change in treatment and outcome values _removes_ the effect of whatever unobserved common cause was present in the original data. Then rerunning the estimator on this modified data provides the correct identified estimate and we hope that the difference between the new estimate and the original estimate is not too high, for some bounded value of the unobserved common cause's effect.\n",
"\n",
"**Importance of domain knowledge**: This test requires _domain knowledge_ to set plausible input values of the effect of unobserved confounding. We first show the result for a single value of confounder's effect on treatment and outcome."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res_unobserved=model.refute_estimate(identified_estimand, estimate, method_name=\"add_unobserved_common_cause\",\n",
" confounders_effect_on_treatment=\"binary_flip\", confounders_effect_on_outcome=\"linear\",\n",
" effect_strength_on_treatment=0.01, effect_strength_on_outcome=0.02)\n",
"print(res_unobserved)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It is often more useful to inspect the trend as the effect of unobserved confounding is increased. For that, we can provide an array of hypothesized confounders' effects. The output is the *(min, max)* range of the estimated effects under different unobserved confounding."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res_unobserved_range=model.refute_estimate(identified_estimand, estimate, method_name=\"add_unobserved_common_cause\",\n",
" confounders_effect_on_treatment=\"binary_flip\", confounders_effect_on_outcome=\"linear\",\n",
" effect_strength_on_treatment=np.array([0.001, 0.005, 0.01, 0.02]), effect_strength_on_outcome=0.01)\n",
"print(res_unobserved_range)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The above plot shows how the estimate decreases as the hypothesized confounding on treatment increases. By domain knowledge, we may know the maximum plausible confounding effect on treatment. Since we see that the effect does not go beyond zero, we can safely conclude that the causal effect of treatment `v0` is positive.\n",
"\n",
"We can also vary the confounding effect on both treatment and outcome. We obtain a heatmap."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res_unobserved_range=model.refute_estimate(identified_estimand, estimate, method_name=\"add_unobserved_common_cause\",\n",
" confounders_effect_on_treatment=\"binary_flip\", confounders_effect_on_outcome=\"linear\",\n",
" effect_strength_on_treatment=[0.001, 0.005, 0.01, 0.02], \n",
" effect_strength_on_outcome=[0.001, 0.005, 0.01,0.02])\n",
"print(res_unobserved_range)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Automatically inferring effect strength parameters.** Finally, DoWhy supports automatic selection of the effect strength parameters. This is based on an assumption that the effect of the unobserved confounder on treatment or outcome cannot be stronger than that of any observed confounder. That is, we have collected data at least for the most relevant confounder. If that is the case, then we can bound the range of `effect_strength_on_treatment` and `effect_strength_on_outcome` by the effect strength of observed confounders. There is an additional optional parameter signifying whether the effect strength of unobserved confounder should be as high as the highest observed, or a fraction of it. You can set it using the optional `effect_fraction_on_treatment` and `effect_fraction_on_outcome` parameters. By default, these two parameters are 1."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res_unobserved_auto = model.refute_estimate(identified_estimand, estimate, method_name=\"add_unobserved_common_cause\",\n",
" confounders_effect_on_treatment=\"binary_flip\", confounders_effect_on_outcome=\"linear\")\n",
"print(res_unobserved_auto)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Conclusion**: Assuming that the unobserved confounder does not affect the treatment or outcome more strongly than any observed confounder, the causal effect can be concluded to be positive."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.7.10 ('venvrl')",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.10"
},
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": false,
"sideBar": true,
"skip_h1_title": true,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": false,
"toc_position": {},
"toc_section_display": true,
"toc_window_display": false
}
},
"nbformat": 4,
"nbformat_minor": 4
}
| {
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Getting started with DoWhy: A simple example\n",
"This is a quick introduction to the DoWhy causal inference library.\n",
"We will load in a sample dataset and estimate the causal effect of a (pre-specified) treatment variable on a (pre-specified) outcome variable.\n",
"\n",
"First, let us load all required packages."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"import pandas as pd\n",
"\n",
"from dowhy import CausalModel\n",
"import dowhy.datasets \n",
"\n",
"# Avoid printing dataconversion warnings from sklearn and numpy\n",
"import warnings\n",
"from sklearn.exceptions import DataConversionWarning\n",
"warnings.filterwarnings(action='ignore', category=DataConversionWarning)\n",
"warnings.filterwarnings(action='ignore', category=FutureWarning)\n",
"\n",
"# Config dict to set the logging level\n",
"import logging\n",
"import logging.config\n",
"DEFAULT_LOGGING = {\n",
" 'version': 1,\n",
" 'disable_existing_loggers': False,\n",
" 'loggers': {\n",
" '': {\n",
" 'level': 'WARN',\n",
" },\n",
" }\n",
"}\n",
"\n",
"logging.config.dictConfig(DEFAULT_LOGGING)\n",
"logging.info(\"Getting started with DoWhy. Running notebook...\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, let us load a dataset. For simplicity, we simulate a dataset with linear relationships between common causes and treatment, and common causes and outcome. \n",
"\n",
"Beta is the true causal effect. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"data = dowhy.datasets.linear_dataset(beta=10,\n",
" num_common_causes=5,\n",
" num_instruments = 2,\n",
" num_effect_modifiers=1,\n",
" num_samples=5000, \n",
" treatment_is_binary=True,\n",
" stddev_treatment_noise=10,\n",
" num_discrete_common_causes=1)\n",
"df = data[\"df\"]\n",
"print(df.head())\n",
"print(data[\"dot_graph\"])\n",
"print(\"\\n\")\n",
"print(data[\"gml_graph\"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that we are using a pandas dataframe to load the data. At present, DoWhy only supports pandas dataframe as input."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Interface 1 (recommended): Input causal graph"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We now input a causal graph in the GML graph format (recommended). You can also use the DOT format.\n",
"\n",
"To create the causal graph for your dataset, you can use a tool like [DAGitty](http://dagitty.net/dags.html#) that provides a GUI to construct the graph. You can export the graph string that it generates. The graph string is very close to the DOT format: just rename `dag` to `digraph`, remove newlines and add a semicolon after every line, to convert it to the DOT format and input to DoWhy. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# With graph\n",
"model=CausalModel(\n",
" data = df,\n",
" treatment=data[\"treatment_name\"],\n",
" outcome=data[\"outcome_name\"],\n",
" graph=data[\"gml_graph\"]\n",
" )"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model.view_model()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"from IPython.display import Image, display\n",
"display(Image(filename=\"causal_model.png\"))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The above causal graph shows the assumptions encoded in the causal model. We can now use this graph to first identify \n",
"the causal effect (go from a causal estimand to a probability expression), and then estimate the causal effect."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### DoWhy philosophy: Keep identification and estimation separate\n",
"\n",
"Identification can be achieved without access to the data, acccesing only the graph. This results in an expression to be computed. This expression can then be evaluated using the available data in the estimation step.\n",
"It is important to understand that these are orthogonal steps.\n",
"\n",
"#### Identification"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"identified_estimand = model.identify_effect(proceed_when_unidentifiable=True)\n",
"print(identified_estimand)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note the parameter flag *proceed\\_when\\_unidentifiable*. It needs to be set to *True* to convey the assumption that we are ignoring any unobserved confounding. The default behavior is to prompt the user to double-check that the unobserved confounders can be ignored. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Estimation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"causal_estimate = model.estimate_effect(identified_estimand,\n",
" method_name=\"backdoor.propensity_score_stratification\")\n",
"print(causal_estimate)\n",
"print(\"Causal Estimate is \" + str(causal_estimate.value))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can input additional parameters to the estimate_effect method. For instance, to estimate the effect on any subset of the units, you can specify the \"target_units\" parameter which can be a string (\"ate\", \"att\", or \"atc\"), lambda function that filters rows of the data frame, or a new dataframe on which to compute the effect. You can also specify \"effect modifiers\" to estimate heterogeneous effects across these variables. See `help(CausalModel.estimate_effect)`. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Causal effect on the control group (ATC)\n",
"causal_estimate_att = model.estimate_effect(identified_estimand,\n",
" method_name=\"backdoor.propensity_score_stratification\",\n",
" target_units = \"atc\")\n",
"print(causal_estimate_att)\n",
"print(\"Causal Estimate is \" + str(causal_estimate_att.value))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Interface 2: Specify common causes and instruments"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"# Without graph \n",
"model= CausalModel( \n",
" data=df, \n",
" treatment=data[\"treatment_name\"], \n",
" outcome=data[\"outcome_name\"], \n",
" common_causes=data[\"common_causes_names\"],\n",
" effect_modifiers=data[\"effect_modifier_names\"]) "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model.view_model()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from IPython.display import Image, display\n",
"display(Image(filename=\"causal_model.png\"))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We get the same causal graph. Now identification and estimation is done as before.\n",
"\n",
"#### Identification"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Estimation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"estimate = model.estimate_effect(identified_estimand,\n",
" method_name=\"backdoor.propensity_score_stratification\") \n",
"print(estimate)\n",
"print(\"Causal Estimate is \" + str(estimate.value))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Refuting the estimate\n",
"\n",
"Let us now look at ways of refuting the estimate obtained. Refutation methods provide tests that every correct estimator should pass. So if an estimator fails the refutation test (p-value is <0.05), then it means that there is some problem with the estimator. \n",
"\n",
"Note that we cannot verify that the estimate is correct, but we can reject it if it violates certain expected behavior (this is analogous to scientific theories that can be falsified but not proven true). The below refutation tests are based on either \n",
" 1) **Invariant transformations**: changes in the data that should not change the estimate. Any estimator whose result varies significantly between the original data and the modified data fails the test; \n",
" \n",
" a) Random Common Cause\n",
" \n",
" b) Data Subset\n",
" \n",
" \n",
" 2) **Nullifying transformations**: after the data change, the causal true estimate is zero. Any estimator whose result varies significantly from zero on the new data fails the test.\n",
" \n",
" a) Placebo Treatment"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Adding a random common cause variable"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res_random=model.refute_estimate(identified_estimand, estimate, method_name=\"random_common_cause\", show_progress_bar=True)\n",
"print(res_random)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Replacing treatment with a random (placebo) variable"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res_placebo=model.refute_estimate(identified_estimand, estimate,\n",
" method_name=\"placebo_treatment_refuter\", show_progress_bar=True, placebo_type=\"permute\")\n",
"print(res_placebo)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Removing a random subset of the data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res_subset=model.refute_estimate(identified_estimand, estimate,\n",
" method_name=\"data_subset_refuter\", show_progress_bar=True, subset_fraction=0.9)\n",
"print(res_subset)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As you can see, the propensity score stratification estimator is reasonably robust to refutations.\n",
"\n",
"**Reproducability**: For reproducibility, you can add a parameter \"random_seed\" to any refutation method, as shown below.\n",
"\n",
"**Parallelization**: You can also use built-in parallelization to speed up the refutation process. Simply set `n_jobs` to a value greater than 1 to spread the workload to multiple CPUs, or set `n_jobs=-1` to use all CPUs. Currently, this is available only for `random_common_cause`, `placebo_treatment_refuter`, and `data_subset_refuter`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res_subset=model.refute_estimate(identified_estimand, estimate,\n",
" method_name=\"data_subset_refuter\", show_progress_bar=True, subset_fraction=0.9, random_seed = 1, n_jobs=-1, verbose=10)\n",
"print(res_subset)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Adding an unobserved common cause variable\n",
"\n",
"This refutation does not return a p-value. Instead, it provides a _sensitivity_ test on how quickly the estimate changes if the identifying assumptions (used in `identify_effect`) are not valid. Specifically, it checks sensitivity to violation of the backdoor assumption: that all common causes are observed. \n",
"\n",
"To do so, it creates a new dataset with an additional common cause between treatment and outcome. To capture the effect of the common cause, the method takes as input the strength of common cause's effect on treatment and outcome. Based on these inputs on the common cause's effects, it changes the treatment and outcome values and then reruns the estimator. The hope is that the new estimate does not change drastically with a small effect of the unobserved common cause, indicating a robustness to any unobserved confounding.\n",
"\n",
"Another equivalent way of interpreting this procedure is to assume that there was already unobserved confounding present in the input data. The change in treatment and outcome values _removes_ the effect of whatever unobserved common cause was present in the original data. Then rerunning the estimator on this modified data provides the correct identified estimate and we hope that the difference between the new estimate and the original estimate is not too high, for some bounded value of the unobserved common cause's effect.\n",
"\n",
"**Importance of domain knowledge**: This test requires _domain knowledge_ to set plausible input values of the effect of unobserved confounding. We first show the result for a single value of confounder's effect on treatment and outcome."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res_unobserved=model.refute_estimate(identified_estimand, estimate, method_name=\"add_unobserved_common_cause\",\n",
" confounders_effect_on_treatment=\"binary_flip\", confounders_effect_on_outcome=\"linear\",\n",
" effect_strength_on_treatment=0.01, effect_strength_on_outcome=0.02)\n",
"print(res_unobserved)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It is often more useful to inspect the trend as the effect of unobserved confounding is increased. For that, we can provide an array of hypothesized confounders' effects. The output is the *(min, max)* range of the estimated effects under different unobserved confounding."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res_unobserved_range=model.refute_estimate(identified_estimand, estimate, method_name=\"add_unobserved_common_cause\",\n",
" confounders_effect_on_treatment=\"binary_flip\", confounders_effect_on_outcome=\"linear\",\n",
" effect_strength_on_treatment=np.array([0.001, 0.005, 0.01, 0.02]), effect_strength_on_outcome=0.01)\n",
"print(res_unobserved_range)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The above plot shows how the estimate decreases as the hypothesized confounding on treatment increases. By domain knowledge, we may know the maximum plausible confounding effect on treatment. Since we see that the effect does not go beyond zero, we can safely conclude that the causal effect of treatment `v0` is positive.\n",
"\n",
"We can also vary the confounding effect on both treatment and outcome. We obtain a heatmap."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res_unobserved_range=model.refute_estimate(identified_estimand, estimate, method_name=\"add_unobserved_common_cause\",\n",
" confounders_effect_on_treatment=\"binary_flip\", confounders_effect_on_outcome=\"linear\",\n",
" effect_strength_on_treatment=[0.001, 0.005, 0.01, 0.02], \n",
" effect_strength_on_outcome=[0.001, 0.005, 0.01,0.02])\n",
"print(res_unobserved_range)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Automatically inferring effect strength parameters.** Finally, DoWhy supports automatic selection of the effect strength parameters. This is based on an assumption that the effect of the unobserved confounder on treatment or outcome cannot be stronger than that of any observed confounder. That is, we have collected data at least for the most relevant confounder. If that is the case, then we can bound the range of `effect_strength_on_treatment` and `effect_strength_on_outcome` by the effect strength of observed confounders. There is an additional optional parameter signifying whether the effect strength of unobserved confounder should be as high as the highest observed, or a fraction of it. You can set it using the optional `effect_fraction_on_treatment` and `effect_fraction_on_outcome` parameters. By default, these two parameters are 1."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res_unobserved_auto = model.refute_estimate(identified_estimand, estimate, method_name=\"add_unobserved_common_cause\",\n",
" confounders_effect_on_treatment=\"binary_flip\", confounders_effect_on_outcome=\"linear\")\n",
"print(res_unobserved_auto)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Conclusion**: Assuming that the unobserved confounder does not affect the treatment or outcome more strongly than any observed confounder, the causal effect can be concluded to be positive."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.13"
},
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": false,
"sideBar": true,
"skip_h1_title": true,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": false,
"toc_position": {},
"toc_section_display": true,
"toc_window_display": false
}
},
"nbformat": 4,
"nbformat_minor": 4
}
| yemaedahrav | a38a03f8b2daa0627833821e92c1b9117925e331 | f94638633ef23064c69c261c324cc4739d036ac0 | good to remove the vscode metadata. (you can just delete it) | amit-sharma | 361 |
py-why/dowhy | 567 | fixed warnings and progrss bar improvements | Fixed the warnings arising in propensity score estimators and added optional progress bars for refuters
Signed-off-by: Amey Varhade <[email protected]> | null | 2022-07-20 18:03:07+00:00 | 2022-08-19 03:36:42+00:00 | docs/source/example_notebooks/dowhy_simple_example.ipynb | {
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Getting started with DoWhy: A simple example\n",
"This is a quick introduction to the DoWhy causal inference library.\n",
"We will load in a sample dataset and estimate the causal effect of a (pre-specified) treatment variable on a (pre-specified) outcome variable.\n",
"\n",
"First, let us load all required packages."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"import pandas as pd\n",
"\n",
"from dowhy import CausalModel\n",
"import dowhy.datasets \n",
"\n",
"# Avoid printing dataconversion warnings from sklearn and numpy\n",
"import warnings\n",
"from sklearn.exceptions import DataConversionWarning\n",
"warnings.filterwarnings(action='ignore', category=DataConversionWarning)\n",
"warnings.filterwarnings(action='ignore', category=FutureWarning)\n",
"\n",
"# Config dict to set the logging level\n",
"import logging\n",
"import logging.config\n",
"DEFAULT_LOGGING = {\n",
" 'version': 1,\n",
" 'disable_existing_loggers': False,\n",
" 'loggers': {\n",
" '': {\n",
" 'level': 'WARN',\n",
" },\n",
" }\n",
"}\n",
"\n",
"logging.config.dictConfig(DEFAULT_LOGGING)\n",
"logging.info(\"Getting started with DoWhy. Running notebook...\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, let us load a dataset. For simplicity, we simulate a dataset with linear relationships between common causes and treatment, and common causes and outcome. \n",
"\n",
"Beta is the true causal effect. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"data = dowhy.datasets.linear_dataset(beta=10,\n",
" num_common_causes=5,\n",
" num_instruments = 2,\n",
" num_effect_modifiers=1,\n",
" num_samples=5000, \n",
" treatment_is_binary=True,\n",
" stddev_treatment_noise=10,\n",
" num_discrete_common_causes=1)\n",
"df = data[\"df\"]\n",
"print(df.head())\n",
"print(data[\"dot_graph\"])\n",
"print(\"\\n\")\n",
"print(data[\"gml_graph\"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that we are using a pandas dataframe to load the data. At present, DoWhy only supports pandas dataframe as input."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Interface 1 (recommended): Input causal graph"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We now input a causal graph in the GML graph format (recommended). You can also use the DOT format.\n",
"\n",
"To create the causal graph for your dataset, you can use a tool like [DAGitty](http://dagitty.net/dags.html#) that provides a GUI to construct the graph. You can export the graph string that it generates. The graph string is very close to the DOT format: just rename `dag` to `digraph`, remove newlines and add a semicolon after every line, to convert it to the DOT format and input to DoWhy. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# With graph\n",
"model=CausalModel(\n",
" data = df,\n",
" treatment=data[\"treatment_name\"],\n",
" outcome=data[\"outcome_name\"],\n",
" graph=data[\"gml_graph\"]\n",
" )"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model.view_model()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"from IPython.display import Image, display\n",
"display(Image(filename=\"causal_model.png\"))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The above causal graph shows the assumptions encoded in the causal model. We can now use this graph to first identify \n",
"the causal effect (go from a causal estimand to a probability expression), and then estimate the causal effect."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### DoWhy philosophy: Keep identification and estimation separate\n",
"\n",
"Identification can be achieved without access to the data, acccesing only the graph. This results in an expression to be computed. This expression can then be evaluated using the available data in the estimation step.\n",
"It is important to understand that these are orthogonal steps.\n",
"\n",
"#### Identification"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"identified_estimand = model.identify_effect(proceed_when_unidentifiable=True)\n",
"print(identified_estimand)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note the parameter flag *proceed\\_when\\_unidentifiable*. It needs to be set to *True* to convey the assumption that we are ignoring any unobserved confounding. The default behavior is to prompt the user to double-check that the unobserved confounders can be ignored. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Estimation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"causal_estimate = model.estimate_effect(identified_estimand,\n",
" method_name=\"backdoor.propensity_score_stratification\")\n",
"print(causal_estimate)\n",
"print(\"Causal Estimate is \" + str(causal_estimate.value))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can input additional parameters to the estimate_effect method. For instance, to estimate the effect on any subset of the units, you can specify the \"target_units\" parameter which can be a string (\"ate\", \"att\", or \"atc\"), lambda function that filters rows of the data frame, or a new dataframe on which to compute the effect. You can also specify \"effect modifiers\" to estimate heterogeneous effects across these variables. See `help(CausalModel.estimate_effect)`. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Causal effect on the control group (ATC)\n",
"causal_estimate_att = model.estimate_effect(identified_estimand,\n",
" method_name=\"backdoor.propensity_score_stratification\",\n",
" target_units = \"atc\")\n",
"print(causal_estimate_att)\n",
"print(\"Causal Estimate is \" + str(causal_estimate_att.value))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Interface 2: Specify common causes and instruments"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"# Without graph \n",
"model= CausalModel( \n",
" data=df, \n",
" treatment=data[\"treatment_name\"], \n",
" outcome=data[\"outcome_name\"], \n",
" common_causes=data[\"common_causes_names\"],\n",
" effect_modifiers=data[\"effect_modifier_names\"]) "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model.view_model()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from IPython.display import Image, display\n",
"display(Image(filename=\"causal_model.png\"))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We get the same causal graph. Now identification and estimation is done as before.\n",
"\n",
"#### Identification"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Estimation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"estimate = model.estimate_effect(identified_estimand,\n",
" method_name=\"backdoor.propensity_score_stratification\") \n",
"print(estimate)\n",
"print(\"Causal Estimate is \" + str(estimate.value))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Refuting the estimate\n",
"\n",
"Let us now look at ways of refuting the estimate obtained. Refutation methods provide tests that every correct estimator should pass. So if an estimator fails the refutation test (p-value is <0.05), then it means that there is some problem with the estimator. \n",
"\n",
"Note that we cannot verify that the estimate is correct, but we can reject it if it violates certain expected behavior (this is analogous to scientific theories that can be falsified but not proven true). The below refutation tests are based on either \n",
" 1) **Invariant transformations**: changes in the data that should not change the estimate. Any estimator whose result varies significantly between the original data and the modified data fails the test; \n",
" \n",
" a) Random Common Cause\n",
" \n",
" b) Data Subset\n",
" \n",
" \n",
" 2) **Nullifying transformations**: after the data change, the causal true estimate is zero. Any estimator whose result varies significantly from zero on the new data fails the test.\n",
" \n",
" a) Placebo Treatment"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Adding a random common cause variable"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res_random=model.refute_estimate(identified_estimand, estimate, method_name=\"random_common_cause\")\n",
"print(res_random)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Replacing treatment with a random (placebo) variable"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res_placebo=model.refute_estimate(identified_estimand, estimate,\n",
" method_name=\"placebo_treatment_refuter\", placebo_type=\"permute\")\n",
"print(res_placebo)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Removing a random subset of the data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res_subset=model.refute_estimate(identified_estimand, estimate,\n",
" method_name=\"data_subset_refuter\", subset_fraction=0.9)\n",
"print(res_subset)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As you can see, the propensity score stratification estimator is reasonably robust to refutations.\n",
"\n",
"**Reproducability**: For reproducibility, you can add a parameter \"random_seed\" to any refutation method, as shown below.\n",
"\n",
"**Parallelization**: You can also use built-in parallelization to speed up the refutation process. Simply set `n_jobs` to a value greater than 1 to spread the workload to multiple CPUs, or set `n_jobs=-1` to use all CPUs. Currently, this is available only for `random_common_cause`, `placebo_treatment_refuter`, and `data_subset_refuter`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res_subset=model.refute_estimate(identified_estimand, estimate,\n",
" method_name=\"data_subset_refuter\", subset_fraction=0.9, random_seed = 1, n_jobs=-1, verbose=10)\n",
"print(res_subset)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Adding an unobserved common cause variable\n",
"\n",
"This refutation does not return a p-value. Instead, it provides a _sensitivity_ test on how quickly the estimate changes if the identifying assumptions (used in `identify_effect`) are not valid. Specifically, it checks sensitivity to violation of the backdoor assumption: that all common causes are observed. \n",
"\n",
"To do so, it creates a new dataset with an additional common cause between treatment and outcome. To capture the effect of the common cause, the method takes as input the strength of common cause's effect on treatment and outcome. Based on these inputs on the common cause's effects, it changes the treatment and outcome values and then reruns the estimator. The hope is that the new estimate does not change drastically with a small effect of the unobserved common cause, indicating a robustness to any unobserved confounding.\n",
"\n",
"Another equivalent way of interpreting this procedure is to assume that there was already unobserved confounding present in the input data. The change in treatment and outcome values _removes_ the effect of whatever unobserved common cause was present in the original data. Then rerunning the estimator on this modified data provides the correct identified estimate and we hope that the difference between the new estimate and the original estimate is not too high, for some bounded value of the unobserved common cause's effect.\n",
"\n",
"**Importance of domain knowledge**: This test requires _domain knowledge_ to set plausible input values of the effect of unobserved confounding. We first show the result for a single value of confounder's effect on treatment and outcome."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res_unobserved=model.refute_estimate(identified_estimand, estimate, method_name=\"add_unobserved_common_cause\",\n",
" confounders_effect_on_treatment=\"binary_flip\", confounders_effect_on_outcome=\"linear\",\n",
" effect_strength_on_treatment=0.01, effect_strength_on_outcome=0.02)\n",
"print(res_unobserved)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It is often more useful to inspect the trend as the effect of unobserved confounding is increased. For that, we can provide an array of hypothesized confounders' effects. The output is the *(min, max)* range of the estimated effects under different unobserved confounding."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res_unobserved_range=model.refute_estimate(identified_estimand, estimate, method_name=\"add_unobserved_common_cause\",\n",
" confounders_effect_on_treatment=\"binary_flip\", confounders_effect_on_outcome=\"linear\",\n",
" effect_strength_on_treatment=np.array([0.001, 0.005, 0.01, 0.02]), effect_strength_on_outcome=0.01)\n",
"print(res_unobserved_range)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The above plot shows how the estimate decreases as the hypothesized confounding on treatment increases. By domain knowledge, we may know the maximum plausible confounding effect on treatment. Since we see that the effect does not go beyond zero, we can safely conclude that the causal effect of treatment `v0` is positive.\n",
"\n",
"We can also vary the confounding effect on both treatment and outcome. We obtain a heatmap."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res_unobserved_range=model.refute_estimate(identified_estimand, estimate, method_name=\"add_unobserved_common_cause\",\n",
" confounders_effect_on_treatment=\"binary_flip\", confounders_effect_on_outcome=\"linear\",\n",
" effect_strength_on_treatment=[0.001, 0.005, 0.01, 0.02], \n",
" effect_strength_on_outcome=[0.001, 0.005, 0.01,0.02])\n",
"print(res_unobserved_range)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Automatically inferring effect strength parameters.** Finally, DoWhy supports automatic selection of the effect strength parameters. This is based on an assumption that the effect of the unobserved confounder on treatment or outcome cannot be stronger than that of any observed confounder. That is, we have collected data at least for the most relevant confounder. If that is the case, then we can bound the range of `effect_strength_on_treatment` and `effect_strength_on_outcome` by the effect strength of observed confounders. There is an additional optional parameter signifying whether the effect strength of unobserved confounder should be as high as the highest observed, or a fraction of it. You can set it using the optional `effect_fraction_on_treatment` and `effect_fraction_on_outcome` parameters. By default, these two parameters are 1."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res_unobserved_auto = model.refute_estimate(identified_estimand, estimate, method_name=\"add_unobserved_common_cause\",\n",
" confounders_effect_on_treatment=\"binary_flip\", confounders_effect_on_outcome=\"linear\")\n",
"print(res_unobserved_auto)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Conclusion**: Assuming that the unobserved confounder does not affect the treatment or outcome more strongly than any observed confounder, the causal effect can be concluded to be positive."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.7.10 ('venvrl')",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.10"
},
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": false,
"sideBar": true,
"skip_h1_title": true,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": false,
"toc_position": {},
"toc_section_display": true,
"toc_window_display": false
}
},
"nbformat": 4,
"nbformat_minor": 4
}
| {
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Getting started with DoWhy: A simple example\n",
"This is a quick introduction to the DoWhy causal inference library.\n",
"We will load in a sample dataset and estimate the causal effect of a (pre-specified) treatment variable on a (pre-specified) outcome variable.\n",
"\n",
"First, let us load all required packages."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"import pandas as pd\n",
"\n",
"from dowhy import CausalModel\n",
"import dowhy.datasets \n",
"\n",
"# Avoid printing dataconversion warnings from sklearn and numpy\n",
"import warnings\n",
"from sklearn.exceptions import DataConversionWarning\n",
"warnings.filterwarnings(action='ignore', category=DataConversionWarning)\n",
"warnings.filterwarnings(action='ignore', category=FutureWarning)\n",
"\n",
"# Config dict to set the logging level\n",
"import logging\n",
"import logging.config\n",
"DEFAULT_LOGGING = {\n",
" 'version': 1,\n",
" 'disable_existing_loggers': False,\n",
" 'loggers': {\n",
" '': {\n",
" 'level': 'WARN',\n",
" },\n",
" }\n",
"}\n",
"\n",
"logging.config.dictConfig(DEFAULT_LOGGING)\n",
"logging.info(\"Getting started with DoWhy. Running notebook...\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, let us load a dataset. For simplicity, we simulate a dataset with linear relationships between common causes and treatment, and common causes and outcome. \n",
"\n",
"Beta is the true causal effect. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"data = dowhy.datasets.linear_dataset(beta=10,\n",
" num_common_causes=5,\n",
" num_instruments = 2,\n",
" num_effect_modifiers=1,\n",
" num_samples=5000, \n",
" treatment_is_binary=True,\n",
" stddev_treatment_noise=10,\n",
" num_discrete_common_causes=1)\n",
"df = data[\"df\"]\n",
"print(df.head())\n",
"print(data[\"dot_graph\"])\n",
"print(\"\\n\")\n",
"print(data[\"gml_graph\"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that we are using a pandas dataframe to load the data. At present, DoWhy only supports pandas dataframe as input."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Interface 1 (recommended): Input causal graph"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We now input a causal graph in the GML graph format (recommended). You can also use the DOT format.\n",
"\n",
"To create the causal graph for your dataset, you can use a tool like [DAGitty](http://dagitty.net/dags.html#) that provides a GUI to construct the graph. You can export the graph string that it generates. The graph string is very close to the DOT format: just rename `dag` to `digraph`, remove newlines and add a semicolon after every line, to convert it to the DOT format and input to DoWhy. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# With graph\n",
"model=CausalModel(\n",
" data = df,\n",
" treatment=data[\"treatment_name\"],\n",
" outcome=data[\"outcome_name\"],\n",
" graph=data[\"gml_graph\"]\n",
" )"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model.view_model()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"from IPython.display import Image, display\n",
"display(Image(filename=\"causal_model.png\"))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The above causal graph shows the assumptions encoded in the causal model. We can now use this graph to first identify \n",
"the causal effect (go from a causal estimand to a probability expression), and then estimate the causal effect."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### DoWhy philosophy: Keep identification and estimation separate\n",
"\n",
"Identification can be achieved without access to the data, acccesing only the graph. This results in an expression to be computed. This expression can then be evaluated using the available data in the estimation step.\n",
"It is important to understand that these are orthogonal steps.\n",
"\n",
"#### Identification"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"identified_estimand = model.identify_effect(proceed_when_unidentifiable=True)\n",
"print(identified_estimand)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note the parameter flag *proceed\\_when\\_unidentifiable*. It needs to be set to *True* to convey the assumption that we are ignoring any unobserved confounding. The default behavior is to prompt the user to double-check that the unobserved confounders can be ignored. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Estimation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"causal_estimate = model.estimate_effect(identified_estimand,\n",
" method_name=\"backdoor.propensity_score_stratification\")\n",
"print(causal_estimate)\n",
"print(\"Causal Estimate is \" + str(causal_estimate.value))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can input additional parameters to the estimate_effect method. For instance, to estimate the effect on any subset of the units, you can specify the \"target_units\" parameter which can be a string (\"ate\", \"att\", or \"atc\"), lambda function that filters rows of the data frame, or a new dataframe on which to compute the effect. You can also specify \"effect modifiers\" to estimate heterogeneous effects across these variables. See `help(CausalModel.estimate_effect)`. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Causal effect on the control group (ATC)\n",
"causal_estimate_att = model.estimate_effect(identified_estimand,\n",
" method_name=\"backdoor.propensity_score_stratification\",\n",
" target_units = \"atc\")\n",
"print(causal_estimate_att)\n",
"print(\"Causal Estimate is \" + str(causal_estimate_att.value))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Interface 2: Specify common causes and instruments"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"# Without graph \n",
"model= CausalModel( \n",
" data=df, \n",
" treatment=data[\"treatment_name\"], \n",
" outcome=data[\"outcome_name\"], \n",
" common_causes=data[\"common_causes_names\"],\n",
" effect_modifiers=data[\"effect_modifier_names\"]) "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model.view_model()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from IPython.display import Image, display\n",
"display(Image(filename=\"causal_model.png\"))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We get the same causal graph. Now identification and estimation is done as before.\n",
"\n",
"#### Identification"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Estimation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"estimate = model.estimate_effect(identified_estimand,\n",
" method_name=\"backdoor.propensity_score_stratification\") \n",
"print(estimate)\n",
"print(\"Causal Estimate is \" + str(estimate.value))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Refuting the estimate\n",
"\n",
"Let us now look at ways of refuting the estimate obtained. Refutation methods provide tests that every correct estimator should pass. So if an estimator fails the refutation test (p-value is <0.05), then it means that there is some problem with the estimator. \n",
"\n",
"Note that we cannot verify that the estimate is correct, but we can reject it if it violates certain expected behavior (this is analogous to scientific theories that can be falsified but not proven true). The below refutation tests are based on either \n",
" 1) **Invariant transformations**: changes in the data that should not change the estimate. Any estimator whose result varies significantly between the original data and the modified data fails the test; \n",
" \n",
" a) Random Common Cause\n",
" \n",
" b) Data Subset\n",
" \n",
" \n",
" 2) **Nullifying transformations**: after the data change, the causal true estimate is zero. Any estimator whose result varies significantly from zero on the new data fails the test.\n",
" \n",
" a) Placebo Treatment"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Adding a random common cause variable"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res_random=model.refute_estimate(identified_estimand, estimate, method_name=\"random_common_cause\", show_progress_bar=True)\n",
"print(res_random)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Replacing treatment with a random (placebo) variable"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res_placebo=model.refute_estimate(identified_estimand, estimate,\n",
" method_name=\"placebo_treatment_refuter\", show_progress_bar=True, placebo_type=\"permute\")\n",
"print(res_placebo)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Removing a random subset of the data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res_subset=model.refute_estimate(identified_estimand, estimate,\n",
" method_name=\"data_subset_refuter\", show_progress_bar=True, subset_fraction=0.9)\n",
"print(res_subset)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As you can see, the propensity score stratification estimator is reasonably robust to refutations.\n",
"\n",
"**Reproducability**: For reproducibility, you can add a parameter \"random_seed\" to any refutation method, as shown below.\n",
"\n",
"**Parallelization**: You can also use built-in parallelization to speed up the refutation process. Simply set `n_jobs` to a value greater than 1 to spread the workload to multiple CPUs, or set `n_jobs=-1` to use all CPUs. Currently, this is available only for `random_common_cause`, `placebo_treatment_refuter`, and `data_subset_refuter`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res_subset=model.refute_estimate(identified_estimand, estimate,\n",
" method_name=\"data_subset_refuter\", show_progress_bar=True, subset_fraction=0.9, random_seed = 1, n_jobs=-1, verbose=10)\n",
"print(res_subset)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Adding an unobserved common cause variable\n",
"\n",
"This refutation does not return a p-value. Instead, it provides a _sensitivity_ test on how quickly the estimate changes if the identifying assumptions (used in `identify_effect`) are not valid. Specifically, it checks sensitivity to violation of the backdoor assumption: that all common causes are observed. \n",
"\n",
"To do so, it creates a new dataset with an additional common cause between treatment and outcome. To capture the effect of the common cause, the method takes as input the strength of common cause's effect on treatment and outcome. Based on these inputs on the common cause's effects, it changes the treatment and outcome values and then reruns the estimator. The hope is that the new estimate does not change drastically with a small effect of the unobserved common cause, indicating a robustness to any unobserved confounding.\n",
"\n",
"Another equivalent way of interpreting this procedure is to assume that there was already unobserved confounding present in the input data. The change in treatment and outcome values _removes_ the effect of whatever unobserved common cause was present in the original data. Then rerunning the estimator on this modified data provides the correct identified estimate and we hope that the difference between the new estimate and the original estimate is not too high, for some bounded value of the unobserved common cause's effect.\n",
"\n",
"**Importance of domain knowledge**: This test requires _domain knowledge_ to set plausible input values of the effect of unobserved confounding. We first show the result for a single value of confounder's effect on treatment and outcome."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res_unobserved=model.refute_estimate(identified_estimand, estimate, method_name=\"add_unobserved_common_cause\",\n",
" confounders_effect_on_treatment=\"binary_flip\", confounders_effect_on_outcome=\"linear\",\n",
" effect_strength_on_treatment=0.01, effect_strength_on_outcome=0.02)\n",
"print(res_unobserved)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It is often more useful to inspect the trend as the effect of unobserved confounding is increased. For that, we can provide an array of hypothesized confounders' effects. The output is the *(min, max)* range of the estimated effects under different unobserved confounding."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res_unobserved_range=model.refute_estimate(identified_estimand, estimate, method_name=\"add_unobserved_common_cause\",\n",
" confounders_effect_on_treatment=\"binary_flip\", confounders_effect_on_outcome=\"linear\",\n",
" effect_strength_on_treatment=np.array([0.001, 0.005, 0.01, 0.02]), effect_strength_on_outcome=0.01)\n",
"print(res_unobserved_range)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The above plot shows how the estimate decreases as the hypothesized confounding on treatment increases. By domain knowledge, we may know the maximum plausible confounding effect on treatment. Since we see that the effect does not go beyond zero, we can safely conclude that the causal effect of treatment `v0` is positive.\n",
"\n",
"We can also vary the confounding effect on both treatment and outcome. We obtain a heatmap."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res_unobserved_range=model.refute_estimate(identified_estimand, estimate, method_name=\"add_unobserved_common_cause\",\n",
" confounders_effect_on_treatment=\"binary_flip\", confounders_effect_on_outcome=\"linear\",\n",
" effect_strength_on_treatment=[0.001, 0.005, 0.01, 0.02], \n",
" effect_strength_on_outcome=[0.001, 0.005, 0.01,0.02])\n",
"print(res_unobserved_range)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Automatically inferring effect strength parameters.** Finally, DoWhy supports automatic selection of the effect strength parameters. This is based on an assumption that the effect of the unobserved confounder on treatment or outcome cannot be stronger than that of any observed confounder. That is, we have collected data at least for the most relevant confounder. If that is the case, then we can bound the range of `effect_strength_on_treatment` and `effect_strength_on_outcome` by the effect strength of observed confounders. There is an additional optional parameter signifying whether the effect strength of unobserved confounder should be as high as the highest observed, or a fraction of it. You can set it using the optional `effect_fraction_on_treatment` and `effect_fraction_on_outcome` parameters. By default, these two parameters are 1."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res_unobserved_auto = model.refute_estimate(identified_estimand, estimate, method_name=\"add_unobserved_common_cause\",\n",
" confounders_effect_on_treatment=\"binary_flip\", confounders_effect_on_outcome=\"linear\")\n",
"print(res_unobserved_auto)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Conclusion**: Assuming that the unobserved confounder does not affect the treatment or outcome more strongly than any observed confounder, the causal effect can be concluded to be positive."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.13"
},
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": false,
"sideBar": true,
"skip_h1_title": true,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": false,
"toc_position": {},
"toc_section_display": true,
"toc_window_display": false
}
},
"nbformat": 4,
"nbformat_minor": 4
}
| yemaedahrav | a38a03f8b2daa0627833821e92c1b9117925e331 | f94638633ef23064c69c261c324cc4739d036ac0 | can you add the show_progress_bar=True to random common cause and placebo treatment refuter too? | amit-sharma | 362 |
py-why/dowhy | 567 | fixed warnings and progrss bar improvements | Fixed the warnings arising in propensity score estimators and added optional progress bars for refuters
Signed-off-by: Amey Varhade <[email protected]> | null | 2022-07-20 18:03:07+00:00 | 2022-08-19 03:36:42+00:00 | dowhy/causal_refuters/add_unobserved_common_cause.py | import copy
import logging
import numpy as np
import pandas as pd
import scipy.stats
import math
import statsmodels.api as sm
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from dowhy.causal_refuter import CausalRefutation
from dowhy.causal_refuter import CausalRefuter
from dowhy.causal_estimator import CausalEstimator
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
class AddUnobservedCommonCause(CausalRefuter):
"""Add an unobserved confounder for refutation.
Supports additional parameters that can be specified in the refute_estimate() method.
- 'confounders_effect_on_treatment': how the simulated confounder affects the value of treatment. This can be linear (for continuous treatment) or binary_flip (for binary treatment)
- 'confounders_effect_on_outcome': how the simulated confounder affects the value of outcome. This can be linear (for continuous outcome) or binary_flip (for binary outcome)
    - 'effect_strength_on_treatment': parameter for the strength of the effect of simulated confounder on treatment. For linear effect, it is the regression coefficient. For binary_flip, it is the probability that the simulated confounder's effect flips the value of treatment from 0 to 1 (or vice-versa).
    - 'effect_strength_on_outcome': parameter for the strength of the effect of simulated confounder on outcome. For linear effect, it is the regression coefficient. For binary_flip, it is the probability that the simulated confounder's effect flips the value of outcome from 0 to 1 (or vice-versa).
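
    A minimal usage sketch showing how these parameters are passed to ``refute_estimate``
    (it assumes a ``CausalModel`` named ``model`` and the ``identified_estimand``/``estimate``
    objects returned by ``identify_effect()`` and ``estimate_effect()``)::

        refute = model.refute_estimate(identified_estimand, estimate,
                     method_name="add_unobserved_common_cause",
                     confounders_effect_on_treatment="binary_flip",
                     confounders_effect_on_outcome="linear",
                     effect_strength_on_treatment=0.01,
                     effect_strength_on_outcome=0.02)
        print(refute)
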
TODO: Needs an interpretation module
"""
def __init__(self, *args, **kwargs):
"""
Initialize the parameters required for the refuter.
If effect_strength_on_treatment or effect_strength_on_outcome is not
given, it is calculated automatically as a range between the
minimum and maximum effect strength of observed confounders on treatment
and outcome respectively.
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
        :param effect_strength_on_treatment: float, numpy.ndarray: This refers to the strength of the confounder on treatment. For a linear effect, it behaves like the regression coefficient. For a binary flip, it is the probability with which it can invert the value of the treatment.
:param effect_strength_on_outcome: float, numpy.ndarray: This refers to the strength of the confounder on outcome. For a linear effect, it behaves like the regression coefficient. For a binary flip, it is the probability with which it can invert the value of the outcome.
:param effect_fraction_on_treatment: float: If effect_strength_on_treatment is not provided, this parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param effect_fraction_on_outcome: float: If effect_strength_on_outcome is not provided, this parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
        :param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:param simulated_method_name: method type to add unobserved common cause. "linear-partial-R2" for linear sensitivity analysis
        :param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
            If percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome needed to reduce the estimate by 100%, i.e., bring it down to 0.
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = False)
:param benchmark_common_causes: names of variables for bounding strength of confounders
:param significance_level: confidence interval for statistical inference(default = 0.05)
:param null_hypothesis_effect: assumed effect under the null hypothesis
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
To override the setting, set plot_estimate = False.
"""
super().__init__(*args, **kwargs)
self.effect_on_t = kwargs["confounders_effect_on_treatment"] if "confounders_effect_on_treatment" in kwargs else "binary_flip"
self.effect_on_y = kwargs["confounders_effect_on_outcome"] if "confounders_effect_on_outcome" in kwargs else "linear"
self.kappa_t = kwargs["effect_strength_on_treatment"] if "effect_strength_on_treatment" in kwargs else None
self.kappa_y = kwargs["effect_strength_on_outcome"] if "effect_strength_on_outcome" in kwargs else None
self.frac_strength_treatment = kwargs["effect_fraction_on_treatment"] if "effect_fraction_on_treatment" in kwargs else 1
self.frac_strength_outcome = kwargs["effect_fraction_on_outcome"] if "effect_fraction_on_outcome" in kwargs else 1
self.simulated_method_name = kwargs["simulated_method_name"] if "simulated_method_name" in kwargs else "linear_based"
self.plotmethod = kwargs['plotmethod'] if "plotmethod" in kwargs else "colormesh"
self.percent_change_estimate = kwargs["percent_change_estimate"] if 'percent_change_estimate' in kwargs else 1.0
self.significance_level = kwargs["significance_level"] if "significance_level" in kwargs else 0.05
self.confounder_increases_estimate = kwargs["confounder_increases_estimate"] if "confounder_increases_estimate" in kwargs else False
self.benchmark_common_causes = kwargs["benchmark_common_causes"] if "benchmark_common_causes" in kwargs else None
self.null_hypothesis_effect = kwargs["null_hypothesis_effect"] if "null_hypothesis_effect" in kwargs else 0
self.plot_estimate = kwargs["plot_estimate"] if "plot_estimate" in kwargs else True
self.logger = logging.getLogger(__name__)
def infer_default_kappa_t(self, len_kappa_t = 10):
""" Infer default effect strength of simulated confounder on treatment.
"""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names)>0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError("There needs to be at least one common cause to" +
"automatically compute the default value of kappa_t."+
" Provide a value for kappa_t")
t = self._data[self._treatment_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_t == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
tmodel = LogisticRegression().fit(observed_common_causes, t)
tpred = tmodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:,i] = 0
tcap = tmodel.predict(observed_common_causes).astype(int)
observed_common_causes[:,i] = oldval
flips.append(np.sum(abs(tcap-tpred))/tpred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_t == "linear":
# Estimating the regression coefficient from standardized features to t
corrcoef_var_t = np.corrcoef(observed_common_causes, t, rowvar=False)[-1, :-1]
std_dev_t = np.std(t)[0]
max_coeff = max(corrcoef_var_t) * std_dev_t
min_coeff = min(corrcoef_var_t) * std_dev_t
else:
raise NotImplementedError("'" + self.effect_on_t +
"' method not supported for confounders' effect on treatment")
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff,
self.frac_strength_treatment)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff)/len_kappa_t
self.logger.info("(Min, Max) kappa_t for observed common causes, ({0}, {1})".format(
min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _compute_min_max_coeff(self,
min_coeff, max_coeff, effect_strength_fraction):
max_coeff = effect_strength_fraction * max_coeff
min_coeff = effect_strength_fraction * min_coeff
return min_coeff, max_coeff
def infer_default_kappa_y(self, len_kappa_y = 10):
""" Infer default effect strength of simulated confounder on treatment.
"""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names)>0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError("There needs to be at least one common cause to" +
"automatically compute the default value of kappa_y."+
" Provide a value for kappa_y")
y = self._data[self._outcome_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_y == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
ymodel = LogisticRegression().fit(observed_common_causes, y)
ypred = ymodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:,i] = 0
ycap = ymodel.predict(observed_common_causes).astype(int)
observed_common_causes[:,i] = oldval
flips.append(np.sum(abs(ycap-ypred))/ypred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_y == "linear":
corrcoef_var_y = np.corrcoef(observed_common_causes, y, rowvar=False)[-1, :-1]
std_dev_y = np.std(y)[0]
max_coeff = max(corrcoef_var_y) * std_dev_y
min_coeff = min(corrcoef_var_y) * std_dev_y
else:
raise NotImplementedError("'" + self.effect_on_y +
"' method not supported for confounders' effect on outcome")
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff,
self.frac_strength_outcome)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff)/len_kappa_y
self.logger.info("(Min, Max) kappa_y for observed common causes, ({0}, {1})".format(
min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def refute_estimate(self):
"""
        This function attempts to add an unobserved common cause to the outcome and the treatment. At present, this is implemented for one-dimensional continuous
        and binary variables. The function accepts either single values or a range of values for the effect strengths, inspects the type of the input, and then decides on the course of
        action.
        :return: CausalRefutation: An object that contains the estimated effect, the new effect after adding the unobserved common cause, and the name of the refutation used.
"""
if self.simulated_method_name == "linear-partial-R2":
if not(isinstance(self._estimate.estimator, LinearRegressionEstimator)):
raise NotImplementedError("Currently only LinearRegressionEstimator is supported for Sensitivity Analysis")
if(len(self._estimate.estimator._effect_modifier_names) > 0):
raise NotImplementedError("The current implementation does not support effect modifiers")
if(self.frac_strength_outcome == 1):
self.frac_strength_outcome = self.frac_strength_treatment
analyzer = LinearSensitivityAnalyzer(estimator = self._estimate.estimator,
data = self._data, treatment_name = self._treatment_name,
percent_change_estimate = self.percent_change_estimate, significance_level = self.significance_level, benchmark_common_causes= self.benchmark_common_causes, null_hypothesis_effect = self.null_hypothesis_effect,
frac_strength_treatment = self.frac_strength_treatment, frac_strength_outcome = self.frac_strength_outcome, common_causes_order = self._estimate.estimator._observed_common_causes.columns)
analyzer.check_sensitivity(plot = self.plot_estimate)
return analyzer
if self.kappa_t is None:
self.kappa_t = self.infer_default_kappa_t()
if self.kappa_y is None:
self.kappa_y = self.infer_default_kappa_y()
if not isinstance(self.kappa_t, (list, np.ndarray)) and not isinstance(self.kappa_y, (list,np.ndarray)): # Deal with single value inputs
new_data = copy.deepcopy(self._data)
new_data = self.include_confounders_effect(new_data, self.kappa_t, self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(new_data, self._target_estimand, self._estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(self._estimate.value, new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause")
refute.new_effect_array = np.array(new_effect.value)
refute.new_effect = new_effect.value
refute.add_refuter(self)
return refute
else: # Deal with multiple value inputs
if isinstance(self.kappa_t, (list, np.ndarray)) and isinstance(self.kappa_y, (list, np.ndarray)): # Deal with range inputs
# Get a 2D matrix of values
#x,y = np.meshgrid(self.kappa_t, self.kappa_y) # x,y are both MxN
results_matrix = np.random.rand(len(self.kappa_t),len(self.kappa_y)) # Matrix to hold all the results of NxM
orig_data = copy.deepcopy(self._data)
for i in range(len(self.kappa_t)):
for j in range(len(self.kappa_y)):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y[j])
new_estimator = CausalEstimator.get_estimator_object(new_data, self._target_estimand, self._estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(self._estimate.value, new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause")
results_matrix[i][j] = refute.new_effect # Populate the results
refute.new_effect_array = results_matrix
refute.new_effect = (np.min(results_matrix), np.max(results_matrix))
# Store the values into the refute object
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6,5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
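                # Contour levels are set at fractions of the original estimate (plus zero and the
                # extremes of the simulated estimates) so the original estimate is visible on the plot.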
oe = self._estimate.value
contour_levels = [oe/4.0, oe/2.0, (3.0/4)*oe, oe]
contour_levels.extend([0, np.min(results_matrix), np.max(results_matrix)])
if self.plotmethod=="contour":
cp = plt.contourf(self.kappa_y, self.kappa_t, results_matrix,
levels=sorted(contour_levels))
# Adding a label on the contour line for the original estimate
fmt = {}
trueeffect_index = np.where(cp.levels==oe)[0][0]
fmt[cp.levels[trueeffect_index]] = "Estimated Effect"
# Label every other level using strings
plt.clabel(cp,[cp.levels[trueeffect_index]],inline=True, fmt=fmt)
plt.colorbar(cp)
elif self.plotmethod=="colormesh":
cp = plt.pcolormesh(self.kappa_y, self.kappa_t, results_matrix,
shading="nearest")
plt.colorbar(cp, ticks=contour_levels)
ax.yaxis.set_ticks(self.kappa_t)
ax.xaxis.set_ticks(self.kappa_y)
plt.xticks(rotation=45)
ax.set_title('Effect of Unobserved Common Cause')
ax.set_ylabel('Value of Linear Constant on Treatment')
ax.set_xlabel('Value of Linear Constant on Outcome')
plt.show()
return refute
elif isinstance(self.kappa_t, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_t))
orig_data = copy.deepcopy(self._data)
for i in range(0,len(self.kappa_t)):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(new_data, self._target_estimand, self._estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(self._estimate.value, new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause")
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6,5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_t, outcomes)
plt.axhline(self._estimate.value, linestyle='--',color="gray")
ax.set_title('Effect of Unobserved Common Cause')
ax.set_xlabel('Value of Linear Constant on Treatment')
ax.set_ylabel('Estimated Effect after adding the common cause')
plt.show()
return refute
elif isinstance(self.kappa_y, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_y))
orig_data = copy.deepcopy(self._data)
for i in range(0, len(self.kappa_y)):
new_data = self.include_confounders_effect(orig_data, self.kappa_t, self.kappa_y[i])
new_estimator = CausalEstimator.get_estimator_object(new_data, self._target_estimand, self._estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(self._estimate.value, new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause")
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6,5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_y, outcomes)
plt.axhline(self._estimate.value, linestyle='--',color="gray")
ax.set_title('Effect of Unobserved Common Cause')
ax.set_xlabel('Value of Linear Constant on Outcome')
ax.set_ylabel('Estimated Effect after adding the common cause')
plt.show()
return refute
def include_confounders_effect(self, new_data, kappa_t, kappa_y):
"""
This function deals with the change in the value of the data due to the effect of the unobserved confounder.
In the case of a binary flip, we flip only if the random number is greater than the threshold set.
In the case of a linear effect, we use the variable as the linear regression constant.
:param new_data: pandas.DataFrame: The data to be changed due to the effects of the unobserved confounder.
:param kappa_t: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:param kappa_y: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:return: pandas.DataFrame: The DataFrame that includes the effects of the unobserved confounder.
"""
num_rows = self._data.shape[0]
stdnorm = scipy.stats.norm()
w_random = stdnorm.rvs(num_rows)
if self.effect_on_t == "binary_flip":
alpha = 2*kappa_t-1 if kappa_t >=0.5 else 1-2*kappa_t
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_t >=0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._treatment_name ] = 1- new_data.loc[rel_interval <= w_random, self._treatment_name]
for tname in self._treatment_name:
if pd.api.types.is_bool_dtype(self._data[tname]):
new_data = new_data.astype({tname: 'bool'}, copy=False)
elif self.effect_on_t == "linear":
confounder_t_effect = kappa_t * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._treatment_name] = new_data[self._treatment_name].values + np.ndarray(shape=(num_rows,1), buffer=confounder_t_effect)
else:
raise NotImplementedError("'" + self.effect_on_t + "' method not supported for confounders' effect on treatment")
if self.effect_on_y == "binary_flip":
alpha = 2*kappa_y-1 if kappa_y >=0.5 else 1-2*kappa_y
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_y >=0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._outcome_name ] = 1- new_data.loc[rel_interval <= w_random, self._outcome_name]
for yname in self._outcome_name:
if pd.api.types.is_bool_dtype(self._data[yname]):
new_data = new_data.astype({yname: 'bool'}, copy=False)
elif self.effect_on_y == "linear":
confounder_y_effect = (-1) * kappa_y * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._outcome_name] = new_data[self._outcome_name].values + np.ndarray(shape=(num_rows,1), buffer=confounder_y_effect)
else:
raise NotImplementedError("'" + self.effect_on_y+ "' method not supported for confounders' effect on outcome")
return new_data
def include_simulated_confounder(self, convergence_threshold = 0.1, c_star_max = 1000):
'''
This function simulates an unobserved confounder based on the data using the following steps:
1. It calculates the "residuals" from the treatment and outcome model
i.) The outcome model has outcome as the dependent variable and all the observed variables including treatment as independent variables
ii.) The treatment model has treatment as the dependent variable and all the observed variables as independent variables.
2. U is an intermediate random variable drawn from the normal distribution with the weighted average of residuals as mean and a unit variance
U ~ N(c1*d_y + c2*d_t, 1)
where
*d_y and d_t are residuals from the treatment and outcome model
*c1 and c2 are coefficients to the residuals
3. The final U, which is the simulated unobserved confounder is obtained by debiasing the intermediate variable U by residualising it with X
Choosing the coefficients c1 and c2:
The coefficients are chosen based on these basic assumptions:
1. There is a hyperbolic relationship satisfying c1*c2 = c_star
2. c_star is chosen from a range of possible values based on the correlation of the obtained simulated variable with outcome and treatment.
3. The product of correlations with treatment and outcome should be at a minimum distance to the maximum correlations with treatment and outcome in any of the observed confounders
4. The ratio of the weights should be such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
:param c_star_max: The maximum possible value for the hyperbolic curve on which the coefficients to the residuals lie. It defaults to 1000 in the code if not specified by the user.
:type int
:param convergence_threshold: The threshold to check the plateauing of the correlation while selecting a c_star. It defaults to 0.1 in the code if not specified by the user
:type float
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
'''
#Obtaining the list of observed variables
required_variables = True
observed_variables = self.choose_variables(required_variables)
observed_variables_with_treatment_and_outcome = observed_variables + self._treatment_name + self._outcome_name
#Taking a subset of the dataframe that has only observed variables
self._data = self._data[observed_variables_with_treatment_and_outcome]
#Residuals from the outcome model obtained by fitting a linear model
y = self._data[self._outcome_name[0]]
observed_variables_with_treatment = observed_variables + self._treatment_name
X = self._data[observed_variables_with_treatment]
model = sm.OLS(y,X.astype('float'))
results = model.fit()
residuals_y = y - results.fittedvalues
d_y = list(pd.Series(residuals_y))
#Residuals from the treatment model obtained by fitting a linear model
t = self._data[self._treatment_name[0]].astype('int64')
X = self._data[observed_variables]
model = sm.OLS(t,X)
results = model.fit()
residuals_t = t - results.fittedvalues
d_t = list(pd.Series(residuals_t))
#Initialising product_cor_metric_observed with a really low value as finding maximum
product_cor_metric_observed = -10000000000
for i in observed_variables:
current_obs_confounder = self._data[i]
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_obs_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_obs_confounder.corr(treatment_values)
product_cor_metric_current = correlation_y*correlation_t
if product_cor_metric_current>=product_cor_metric_observed:
product_cor_metric_observed = product_cor_metric_current
correlation_t_observed = correlation_t
correlation_y_observed = correlation_y
#The user has an option to give the effect_strength_on_y and effect_strength_on_t which can then be used instead of the maximum correlation with treatment and outcome in the observed variables, as it specifies the desired effect.
if self.kappa_t is not None:
correlation_t_observed = self.kappa_t
if self.kappa_y is not None:
correlation_y_observed = self.kappa_y
#Choosing a c_star based on the data.
#The correlations stop increasing upon increasing c_star after a certain value, that is it plateaus and we choose the value of c_star to be the value it plateaus.
correlation_y_list = []
correlation_t_list = []
product_cor_metric_simulated_list = []
x_list = []
step = int(c_star_max/10)
for i in range(0, int(c_star_max), step):
c1 = math.sqrt(i)
c2 = c1
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
correlation_y_list.append(correlation_y)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
correlation_t_list.append(correlation_t)
product_cor_metric_simulated = correlation_y*correlation_t
product_cor_metric_simulated_list.append(product_cor_metric_simulated)
x_list.append(i)
index = 1
while index<len(correlation_y_list):
if (correlation_y_list[index]-correlation_y_list[index-1])<=convergence_threshold:
c_star = x_list[index]
break
index = index+1
#Choosing c1 and c2 based on the hyperbolic relationship once c_star is chosen by going over various combinations of c1 and c2 values and choosing the combination
#which maintains the minimum distance between the product of correlations of the simulated variable and the product of maximum correlations of one of the observed variables
# and additionally checks if the ratio of the weights are such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
#c1_final and c2_final are initialised to the values on the hyperbolic curve such that c1_final = c2_final and c1_final*c2_final = c_star
c1_final = math.sqrt(c_star)
c2_final = math.sqrt(c_star)
#initialising min_distance_between_product_cor_metrics to be a value greater than 1
min_distance_between_product_cor_metrics = 1.5
i = 0.05
threshold = c_star/0.05
while i<=threshold:
c2 = i
c1 = c_star/c2
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
product_cor_metric_simulated = correlation_y*correlation_t
if min_distance_between_product_cor_metrics>=abs(product_cor_metric_simulated - product_cor_metric_observed):
min_distance_between_product_cor_metrics = abs(product_cor_metric_simulated - product_cor_metric_observed)
additional_condition = (correlation_y_observed/correlation_t_observed)
if ((c1/c2) <= (additional_condition + 0.3*additional_condition)) and ((c1/c2) >= (additional_condition - 0.3*additional_condition)): #choose minimum positive value
c1_final = c1
c2_final = c2
i = i*1.5
'''#closed form solution
print("c_star_max before closed form", c_star_max)
if max_correlation_with_t == -1000:
c2 = 0
c1 = c_star_max
else:
additional_condition = abs(max_correlation_with_y/max_correlation_with_t)
print("additional_condition", additional_condition)
c2 = math.sqrt(c_star_max/additional_condition)
c1 = c_star_max/c2'''
final_U = self.generate_confounder_from_residuals(c1_final, c2_final, d_y, d_t, X)
return final_U
def generate_confounder_from_residuals(self, c1, c2, d_y, d_t, X):
'''
This function takes the residuals from the treatment and outcome model and their coefficients and simulates the intermediate random variable U by taking
the row wise normal distribution corresponding to each residual value and then debiasing the intermediate variable to get the final variable.
:param c1: coefficient to the residual from the outcome model
:type float
:param c2: coefficient to the residual from the treatment model
:type float
:param d_y: residuals from the outcome model
:type list
:param d_t: residuals from the treatment model
:type list
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
'''
U = []
for j in range(len(d_t)):
simulated_variable_mean = c1*d_y[j]+c2*d_t[j]
simulated_variable_stddev = 1
U.append(np.random.normal(simulated_variable_mean, simulated_variable_stddev, 1))
U = np.array(U)
model = sm.OLS(U,X)
results = model.fit()
U = U.reshape(-1, )
final_U = U - results.fittedvalues.values
final_U = pd.Series(final_U) # return the debiased values rather than the raw intermediate U
return final_U
| import copy
import logging
import numpy as np
import pandas as pd
import scipy.stats
from tqdm.auto import tqdm
import math
import statsmodels.api as sm
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from dowhy.causal_refuter import CausalRefutation
from dowhy.causal_refuter import CausalRefuter
from dowhy.causal_estimator import CausalEstimator
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
class AddUnobservedCommonCause(CausalRefuter):
"""Add an unobserved confounder for refutation.
Supports additional parameters that can be specified in the refute_estimate() method.
- 'confounders_effect_on_treatment': how the simulated confounder affects the value of treatment. This can be linear (for continuous treatment) or binary_flip (for binary treatment)
- 'confounders_effect_on_outcome': how the simulated confounder affects the value of outcome. This can be linear (for continuous outcome) or binary_flip (for binary outcome)
- 'effect_strength_on_treatment': parameter for the strength of the effect of simulated confounder on treatment. For linear effect, it is the regression coefficient. For binary_flip, it is the probability that simulated confounder's effect flips the value of treatment from 0 to 1 (or vice-versa).
- 'effect_strength_on_outcome': parameter for the strength of the effect of simulated confounder on outcome. For linear effect, it is the regression coefficient. For binary_flip, it is the probability that simulated confounder's effect flips the value of outcome from 0 to 1 (or vice-versa).
TODO: Needs an interpretation module
"""
def __init__(self, *args, **kwargs):
"""
Initialize the parameters required for the refuter.
If effect_strength_on_treatment or effect_strength_on_outcome is not
given, it is calculated automatically as a range between the
minimum and maximum effect strength of observed confounders on treatment
and outcome respectively.
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param effect_strength_on_treatment: float, numpy.ndarray: This refers to the strength of the confounder on treatment. For a linear effect, it behaves like the regression coefficient. For a binary flip, it is the probability with which it can invert the value of the treatment.
:param effect_strength_on_outcome: float, numpy.ndarray: This refers to the strength of the confounder on outcome. For a linear effect, it behaves like the regression coefficient. For a binary flip, it is the probability with which it can invert the value of the outcome.
:param effect_fraction_on_treatment: float: If effect_strength_on_treatment is not provided, this parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param effect_fraction_on_outcome: float: If effect_strength_on_outcome is not provided, this parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:param simulated_method_name: method type to add unobserved common cause. "linear-partial-R2" for linear sensitivity analysis
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1)
if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome needed to reduce the estimate by 100%, i.e., bring it down to 0.
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = False)
:param benchmark_common_causes: names of variables for bounding strength of confounders
:param significance_level: confidence interval for statistical inference (default = 0.05)
:param null_hypothesis_effect: assumed effect under the null hypothesis
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
To override the setting, set plot_estimate = False.
"""
super().__init__(*args, **kwargs)
self.effect_on_t = kwargs["confounders_effect_on_treatment"] if "confounders_effect_on_treatment" in kwargs else "binary_flip"
self.effect_on_y = kwargs["confounders_effect_on_outcome"] if "confounders_effect_on_outcome" in kwargs else "linear"
self.kappa_t = kwargs["effect_strength_on_treatment"] if "effect_strength_on_treatment" in kwargs else None
self.kappa_y = kwargs["effect_strength_on_outcome"] if "effect_strength_on_outcome" in kwargs else None
self.frac_strength_treatment = kwargs["effect_fraction_on_treatment"] if "effect_fraction_on_treatment" in kwargs else 1
self.frac_strength_outcome = kwargs["effect_fraction_on_outcome"] if "effect_fraction_on_outcome" in kwargs else 1
self.simulated_method_name = kwargs["simulated_method_name"] if "simulated_method_name" in kwargs else "linear_based"
self.plotmethod = kwargs['plotmethod'] if "plotmethod" in kwargs else "colormesh"
self.percent_change_estimate = kwargs["percent_change_estimate"] if 'percent_change_estimate' in kwargs else 1.0
self.significance_level = kwargs["significance_level"] if "significance_level" in kwargs else 0.05
self.confounder_increases_estimate = kwargs["confounder_increases_estimate"] if "confounder_increases_estimate" in kwargs else False
self.benchmark_common_causes = kwargs["benchmark_common_causes"] if "benchmark_common_causes" in kwargs else None
self.null_hypothesis_effect = kwargs["null_hypothesis_effect"] if "null_hypothesis_effect" in kwargs else 0
self.plot_estimate = kwargs["plot_estimate"] if "plot_estimate" in kwargs else True
self.logger = logging.getLogger(__name__)
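# --- Illustrative usage sketch (added for clarity; not part of the original class) ---
# This refuter is typically reached through the public refute_estimate() API. Assuming an
# existing CausalModel `model`, an `identified_estimand` and an `estimate` already exist,
# with the parameter values below chosen purely for illustration:
# refutation = model.refute_estimate(
# identified_estimand,
# estimate,
# method_name="add_unobserved_common_cause",
# confounders_effect_on_treatment="binary_flip",
# confounders_effect_on_outcome="linear",
# effect_strength_on_treatment=0.05,
# effect_strength_on_outcome=0.02,
# )
# print(refutation)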
def infer_default_kappa_t(self, len_kappa_t = 10):
""" Infer default effect strength of simulated confounder on treatment.
"""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names)>0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError("There needs to be at least one common cause to" +
"automatically compute the default value of kappa_t."+
" Provide a value for kappa_t")
t = self._data[self._treatment_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_t == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
tmodel = LogisticRegression().fit(observed_common_causes, t)
tpred = tmodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:,i] = 0
tcap = tmodel.predict(observed_common_causes).astype(int)
observed_common_causes[:,i] = oldval
flips.append(np.sum(abs(tcap-tpred))/tpred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_t == "linear":
# Estimating the regression coefficient from standardized features to t
corrcoef_var_t = np.corrcoef(observed_common_causes, t, rowvar=False)[-1, :-1]
std_dev_t = np.std(t)[0]
max_coeff = max(corrcoef_var_t) * std_dev_t
min_coeff = min(corrcoef_var_t) * std_dev_t
else:
raise NotImplementedError("'" + self.effect_on_t +
"' method not supported for confounders' effect on treatment")
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff,
self.frac_strength_treatment)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff)/len_kappa_t
self.logger.info("(Min, Max) kappa_t for observed common causes, ({0}, {1})".format(
min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _compute_min_max_coeff(self,
min_coeff, max_coeff, effect_strength_fraction):
max_coeff = effect_strength_fraction * max_coeff
min_coeff = effect_strength_fraction * min_coeff
return min_coeff, max_coeff
def infer_default_kappa_y(self, len_kappa_y = 10):
""" Infer default effect strength of simulated confounder on treatment.
"""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names)>0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError("There needs to be at least one common cause to" +
"automatically compute the default value of kappa_y."+
" Provide a value for kappa_y")
y = self._data[self._outcome_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_y == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
ymodel = LogisticRegression().fit(observed_common_causes, y)
ypred = ymodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:,i] = 0
ycap = ymodel.predict(observed_common_causes).astype(int)
observed_common_causes[:,i] = oldval
flips.append(np.sum(abs(ycap-ypred))/ypred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_y == "linear":
corrcoef_var_y = np.corrcoef(observed_common_causes, y, rowvar=False)[-1, :-1]
std_dev_y = np.std(y)[0]
max_coeff = max(corrcoef_var_y) * std_dev_y
min_coeff = min(corrcoef_var_y) * std_dev_y
else:
raise NotImplementedError("'" + self.effect_on_y +
"' method not supported for confounders' effect on outcome")
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff,
self.frac_strength_outcome)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff)/len_kappa_y
self.logger.info("(Min, Max) kappa_y for observed common causes, ({0}, {1})".format(
min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def refute_estimate(self, show_progress_bar=False):
"""
This function attempts to add an unobserved common cause to the outcome and the treatment. At present, we have implemented the behavior for one-dimensional continuous
and binary variables. This function can either take single valued inputs or a range of inputs. The function then looks at the data type of the input and then decides on the course of
action.
:return: CausalRefuter: An object that contains the estimated effect and a new effect and the name of the refutation used.
"""
if self.simulated_method_name == "linear-partial-R2":
if not(isinstance(self._estimate.estimator, LinearRegressionEstimator)):
raise NotImplementedError("Currently only LinearRegressionEstimator is supported for Sensitivity Analysis")
if(len(self._estimate.estimator._effect_modifier_names) > 0):
raise NotImplementedError("The current implementation does not support effect modifiers")
if(self.frac_strength_outcome == 1):
self.frac_strength_outcome = self.frac_strength_treatment
analyzer = LinearSensitivityAnalyzer(estimator = self._estimate.estimator,
data = self._data, treatment_name = self._treatment_name,
percent_change_estimate = self.percent_change_estimate, significance_level = self.significance_level, benchmark_common_causes= self.benchmark_common_causes, null_hypothesis_effect = self.null_hypothesis_effect,
frac_strength_treatment = self.frac_strength_treatment, frac_strength_outcome = self.frac_strength_outcome, common_causes_order = self._estimate.estimator._observed_common_causes.columns)
analyzer.check_sensitivity(plot = self.plot_estimate)
return analyzer
if self.kappa_t is None:
self.kappa_t = self.infer_default_kappa_t()
if self.kappa_y is None:
self.kappa_y = self.infer_default_kappa_y()
if not isinstance(self.kappa_t, (list, np.ndarray)) and not isinstance(self.kappa_y, (list,np.ndarray)): # Deal with single value inputs
new_data = copy.deepcopy(self._data)
new_data = self.include_confounders_effect(new_data, self.kappa_t, self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(new_data, self._target_estimand, self._estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(self._estimate.value, new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause")
refute.new_effect_array = np.array(new_effect.value)
refute.new_effect = new_effect.value
refute.add_refuter(self)
return refute
else: # Deal with multiple value inputs
if isinstance(self.kappa_t, (list, np.ndarray)) and isinstance(self.kappa_y, (list, np.ndarray)): # Deal with range inputs
# Get a 2D matrix of values
#x,y = np.meshgrid(self.kappa_t, self.kappa_y) # x,y are both MxN
results_matrix = np.random.rand(len(self.kappa_t),len(self.kappa_y)) # Matrix to hold all the results of NxM
orig_data = copy.deepcopy(self._data)
for i in tqdm(range(len(self.kappa_t)), colour=CausalRefuter.PROGRESS_BAR_COLOR, disable = not show_progress_bar, desc="Refuting Estimates: "):
for j in range(len(self.kappa_y)):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y[j])
new_estimator = CausalEstimator.get_estimator_object(new_data, self._target_estimand, self._estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(self._estimate.value, new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause")
results_matrix[i][j] = refute.new_effect # Populate the results
refute.new_effect_array = results_matrix
refute.new_effect = (np.min(results_matrix), np.max(results_matrix))
# Store the values into the refute object
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6,5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
oe = self._estimate.value
contour_levels = [oe/4.0, oe/2.0, (3.0/4)*oe, oe]
contour_levels.extend([0, np.min(results_matrix), np.max(results_matrix)])
if self.plotmethod=="contour":
cp = plt.contourf(self.kappa_y, self.kappa_t, results_matrix,
levels=sorted(contour_levels))
# Adding a label on the contour line for the original estimate
fmt = {}
trueeffect_index = np.where(cp.levels==oe)[0][0]
fmt[cp.levels[trueeffect_index]] = "Estimated Effect"
# Label every other level using strings
plt.clabel(cp,[cp.levels[trueeffect_index]],inline=True, fmt=fmt)
plt.colorbar(cp)
elif self.plotmethod=="colormesh":
cp = plt.pcolormesh(self.kappa_y, self.kappa_t, results_matrix,
shading="nearest")
plt.colorbar(cp, ticks=contour_levels)
ax.yaxis.set_ticks(self.kappa_t)
ax.xaxis.set_ticks(self.kappa_y)
plt.xticks(rotation=45)
ax.set_title('Effect of Unobserved Common Cause')
ax.set_ylabel('Value of Linear Constant on Treatment')
ax.set_xlabel('Value of Linear Constant on Outcome')
plt.show()
return refute
elif isinstance(self.kappa_t, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_t))
orig_data = copy.deepcopy(self._data)
for i in tqdm(range(0,len(self.kappa_t)), colour=CausalRefuter.PROGRESS_BAR_COLOR, disable = not show_progress_bar, desc="Refuting Estimates: "):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(new_data, self._target_estimand, self._estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(self._estimate.value, new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause")
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6,5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_t, outcomes)
plt.axhline(self._estimate.value, linestyle='--',color="gray")
ax.set_title('Effect of Unobserved Common Cause')
ax.set_xlabel('Value of Linear Constant on Treatment')
ax.set_ylabel('Estimated Effect after adding the common cause')
plt.show()
return refute
elif isinstance(self.kappa_y, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_y))
orig_data = copy.deepcopy(self._data)
for i in tqdm(range(0,len(self.kappa_y)), colour=CausalRefuter.PROGRESS_BAR_COLOR, disable = not show_progress_bar, desc="Refuting Estimates: "):
new_data = self.include_confounders_effect(orig_data, self.kappa_t, self.kappa_y[i])
new_estimator = CausalEstimator.get_estimator_object(new_data, self._target_estimand, self._estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(self._estimate.value, new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause")
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6,5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_y, outcomes)
plt.axhline(self._estimate.value, linestyle='--',color="gray")
ax.set_title('Effect of Unobserved Common Cause')
ax.set_xlabel('Value of Linear Constant on Outcome')
ax.set_ylabel('Estimated Effect after adding the common cause')
plt.show()
return refute
def include_confounders_effect(self, new_data, kappa_t, kappa_y):
"""
This function deals with the change in the value of the data due to the effect of the unobserved confounder.
In the case of a binary flip, we flip only if the random number is greater than the threshold set.
In the case of a linear effect, we use the variable as the linear regression constant.
:param new_data: pandas.DataFrame: The data to be changed due to the effects of the unobserved confounder.
:param kappa_t: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:param kappa_y: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:return: pandas.DataFrame: The DataFrame that includes the effects of the unobserved confounder.
"""
num_rows = self._data.shape[0]
stdnorm = scipy.stats.norm()
w_random = stdnorm.rvs(num_rows)
if self.effect_on_t == "binary_flip":
alpha = 2*kappa_t-1 if kappa_t >=0.5 else 1-2*kappa_t
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_t >=0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._treatment_name ] = 1- new_data.loc[rel_interval <= w_random, self._treatment_name]
for tname in self._treatment_name:
if pd.api.types.is_bool_dtype(self._data[tname]):
new_data = new_data.astype({tname: 'bool'}, copy=False)
elif self.effect_on_t == "linear":
confounder_t_effect = kappa_t * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._treatment_name] = new_data[self._treatment_name].values + np.ndarray(shape=(num_rows,1), buffer=confounder_t_effect)
else:
raise NotImplementedError("'" + self.effect_on_t + "' method not supported for confounders' effect on treatment")
if self.effect_on_y == "binary_flip":
alpha = 2*kappa_y-1 if kappa_y >=0.5 else 1-2*kappa_y
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_y >=0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._outcome_name ] = 1- new_data.loc[rel_interval <= w_random, self._outcome_name]
for yname in self._outcome_name:
if pd.api.types.is_bool_dtype(self._data[yname]):
new_data = new_data.astype({yname: 'bool'}, copy=False)
elif self.effect_on_y == "linear":
confounder_y_effect = (-1) * kappa_y * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._outcome_name] = new_data[self._outcome_name].values + np.ndarray(shape=(num_rows,1), buffer=confounder_y_effect)
else:
raise NotImplementedError("'" + self.effect_on_y+ "' method not supported for confounders' effect on outcome")
return new_data
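# --- Worked illustration of the binary_flip thresholding above (comment-only sketch) ---
# For kappa_t = 0.8: alpha = 2*0.8 - 1 = 0.6 and stdnorm.interval(0.6) is roughly (-0.84, 0.84).
# Since kappa_t >= 0.5, rel_interval is the lower bound (about -0.84), so every row whose draw
# w_random exceeds -0.84 (roughly 80% of rows) has its treatment value flipped.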
def include_simulated_confounder(self, convergence_threshold = 0.1, c_star_max = 1000):
'''
This function simulates an unobserved confounder based on the data using the following steps:
1. It calculates the "residuals" from the treatment and outcome model
i.) The outcome model has outcome as the dependent variable and all the observed variables including treatment as independent variables
ii.) The treatment model has treatment as the dependent variable and all the observed variables as independent variables.
2. U is an intermediate random variable drawn from the normal distribution with the weighted average of residuals as mean and a unit variance
U ~ N(c1*d_y + c2*d_t, 1)
where
*d_y and d_t are residuals from the treatment and outcome model
*c1 and c2 are coefficients to the residuals
3. The final U, which is the simulated unobserved confounder is obtained by debiasing the intermediate variable U by residualising it with X
Choosing the coefficients c1 and c2:
The coefficients are chosen based on these basic assumptions:
1. There is a hyperbolic relationship satisfying c1*c2 = c_star
2. c_star is chosen from a range of possible values based on the correlation of the obtained simulated variable with outcome and treatment.
3. The product of correlations with treatment and outcome should be at a minimum distance to the maximum correlations with treatment and outcome in any of the observed confounders
4. The ratio of the weights should be such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
:param c_star_max: The maximum possible value for the hyperbolic curve on which the coefficients to the residuals lie. It defaults to 1000 in the code if not specified by the user.
:type int
:param convergence_threshold: The threshold to check the plateauing of the correlation while selecting a c_star. It defaults to 0.1 in the code if not specified by the user
:type float
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
'''
#Obtaining the list of observed variables
required_variables = True
observed_variables = self.choose_variables(required_variables)
observed_variables_with_treatment_and_outcome = observed_variables + self._treatment_name + self._outcome_name
#Taking a subset of the dataframe that has only observed variables
self._data = self._data[observed_variables_with_treatment_and_outcome]
#Residuals from the outcome model obtained by fitting a linear model
y = self._data[self._outcome_name[0]]
observed_variables_with_treatment = observed_variables + self._treatment_name
X = self._data[observed_variables_with_treatment]
model = sm.OLS(y,X.astype('float'))
results = model.fit()
residuals_y = y - results.fittedvalues
d_y = list(pd.Series(residuals_y))
#Residuals from the treatment model obtained by fitting a linear model
t = self._data[self._treatment_name[0]].astype('int64')
X = self._data[observed_variables]
model = sm.OLS(t,X)
results = model.fit()
residuals_t = t - results.fittedvalues
d_t = list(pd.Series(residuals_t))
#Initialising product_cor_metric_observed with a really low value as finding maximum
product_cor_metric_observed = -10000000000
for i in observed_variables:
current_obs_confounder = self._data[i]
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_obs_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_obs_confounder.corr(treatment_values)
product_cor_metric_current = correlation_y*correlation_t
if product_cor_metric_current>=product_cor_metric_observed:
product_cor_metric_observed = product_cor_metric_current
correlation_t_observed = correlation_t
correlation_y_observed = correlation_y
#The user has an option to give the effect_strength_on_y and effect_strength_on_t which can then be used instead of the maximum correlation with treatment and outcome in the observed variables, as it specifies the desired effect.
if self.kappa_t is not None:
correlation_t_observed = self.kappa_t
if self.kappa_y is not None:
correlation_y_observed = self.kappa_y
#Choosing a c_star based on the data.
#The correlations stop increasing upon increasing c_star after a certain value, that is it plateaus and we choose the value of c_star to be the value it plateaus.
correlation_y_list = []
correlation_t_list = []
product_cor_metric_simulated_list = []
x_list = []
step = int(c_star_max/10)
for i in range(0, int(c_star_max), step):
c1 = math.sqrt(i)
c2 = c1
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
correlation_y_list.append(correlation_y)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
correlation_t_list.append(correlation_t)
product_cor_metric_simulated = correlation_y*correlation_t
product_cor_metric_simulated_list.append(product_cor_metric_simulated)
x_list.append(i)
index = 1
while index<len(correlation_y_list):
if (correlation_y_list[index]-correlation_y_list[index-1])<=convergence_threshold:
c_star = x_list[index]
break
index = index+1
#Choosing c1 and c2 based on the hyperbolic relationship once c_star is chosen by going over various combinations of c1 and c2 values and choosing the combination
#which maintains the minimum distance between the product of correlations of the simulated variable and the product of maximum correlations of one of the observed variables
# and additionally checks if the ratio of the weights are such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
#c1_final and c2_final are initialised to the values on the hyperbolic curve such that c1_final = c2_final and c1_final*c2_final = c_star
c1_final = math.sqrt(c_star)
c2_final = math.sqrt(c_star)
#initialising min_distance_between_product_cor_metrics to be a value greater than 1
min_distance_between_product_cor_metrics = 1.5
i = 0.05
threshold = c_star/0.05
while i<=threshold:
c2 = i
c1 = c_star/c2
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
product_cor_metric_simulated = correlation_y*correlation_t
if min_distance_between_product_cor_metrics>=abs(product_cor_metric_simulated - product_cor_metric_observed):
min_distance_between_product_cor_metrics = abs(product_cor_metric_simulated - product_cor_metric_observed)
additional_condition = (correlation_y_observed/correlation_t_observed)
if ((c1/c2) <= (additional_condition + 0.3*additional_condition)) and ((c1/c2) >= (additional_condition - 0.3*additional_condition)): #choose minimum positive value
c1_final = c1
c2_final = c2
i = i*1.5
'''#closed form solution
print("c_star_max before closed form", c_star_max)
if max_correlation_with_t == -1000:
c2 = 0
c1 = c_star_max
else:
additional_condition = abs(max_correlation_with_y/max_correlation_with_t)
print("additional_condition", additional_condition)
c2 = math.sqrt(c_star_max/additional_condition)
c1 = c_star_max/c2'''
final_U = self.generate_confounder_from_residuals(c1_final, c2_final, d_y, d_t, X)
return final_U
def generate_confounder_from_residuals(self, c1, c2, d_y, d_t, X):
'''
This function takes the residuals from the treatment and outcome model and their coefficients and simulates the intermediate random variable U by taking
the row wise normal distribution corresponding to each residual value and then debiasing the intermediate variable to get the final variable.
:param c1: coefficient to the residual from the outcome model
:type float
:param c2: coefficient to the residual from the treatment model
:type float
:param d_y: residuals from the outcome model
:type list
:param d_t: residuals from the treatment model
:type list
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
'''
U = []
for j in range(len(d_t)):
simulated_variable_mean = c1*d_y[j]+c2*d_t[j]
simulated_variable_stddev = 1
U.append(np.random.normal(simulated_variable_mean, simulated_variable_stddev, 1))
U = np.array(U)
model = sm.OLS(U,X)
results = model.fit()
U = U.reshape(-1, )
final_U = U - results.fittedvalues.values
final_U = pd.Series(final_U) # return the debiased values rather than the raw intermediate U
return final_U
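# --- Comment-only sketch of steps 2-3 of the simulation above in vectorized form (illustrative) ---
# Assuming the residual arrays d_y, d_t and the observed-variable matrix X defined earlier:
# U = np.random.normal(loc=c1 * np.array(d_y) + c2 * np.array(d_t), scale=1.0)
# final_U = U - sm.OLS(U, X).fit().fittedvalues.values # debias by residualising against X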
| yemaedahrav | a38a03f8b2daa0627833821e92c1b9117925e331 | f94638633ef23064c69c261c324cc4739d036ac0 | we need tqdm.notebook only if user is running the code in a notebook. Otherwise we need tqdm.
better to use tqdm.auto. see this example: https://github.com/tqdm/tqdm/pull/1218 | amit-sharma | 363 |
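A minimal sketch of the suggestion above: tqdm.auto picks the notebook-friendly widget when one is available and falls back to the console bar otherwise, so a single import covers both cases (the loop body is a placeholder).
from tqdm.auto import tqdm
for _ in tqdm(range(100), desc="Refuting Estimates: "):
    pass  # refutation work would go here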
py-why/dowhy | 567 | fixed warnings and progress bar improvements | Fixed the warnings arising in propensity score estimators and added optional progress bars for refuters
Signed-off-by: Amey Varhade <[email protected]> | null | 2022-07-20 18:03:07+00:00 | 2022-08-19 03:36:42+00:00 | dowhy/causal_refuters/data_subset_refuter.py | import numpy as np
import logging
from joblib import Parallel, delayed
from dowhy.causal_refuter import CausalRefuter, CausalRefutation
from dowhy.causal_estimator import CausalEstimator
class DataSubsetRefuter(CausalRefuter):
"""Refute an estimate by rerunning it on a random subset of the original data.
Supports additional parameters that can be specified in the refute_estimate() method. For joblib-related parameters (n_jobs, verbose), please refer to the joblib documentation for more details (https://joblib.readthedocs.io/en/latest/generated/joblib.Parallel.html).
:param subset_fraction: Fraction of the data to be used for re-estimation, which is ``DataSubsetRefuter.DEFAULT_SUBSET_FRACTION`` by default.
:type subset_fraction: float, optional
:param num_simulations: The number of simulations to be run, which is ``CausalRefuter.DEFAULT_NUM_SIMULATIONS`` by default
:type num_simulations: int, optional
:param random_state: The seed value to be added if we wish to repeat the same random behavior. If we wish to repeat the same behavior, we push the same seed into the pseudo-random generator
:type random_state: int, RandomState, optional
:param n_jobs: The maximum number of concurrently running jobs. If -1 all CPUs are used. If 1 is given, no parallel computing code is used at all (this is the default).
:type n_jobs: int, optional
:param verbose: The verbosity level: if non zero, progress messages are printed. Above 50, the output is sent to stdout. The frequency of the messages increases with the verbosity level. If it more than 10, all iterations are reported. The default is 0.
:type verbose: int, optional
"""
# The default subset of the data to be used
DEFAULT_SUBSET_FRACTION = 0.8
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._subset_fraction = kwargs.pop("subset_fraction", 0.8)
self._num_simulations = kwargs.pop("num_simulations", CausalRefuter.DEFAULT_NUM_SIMULATIONS )
self._random_state = kwargs.pop("random_state",None)
self.logger = logging.getLogger(__name__)
def refute_estimate(self):
sample_estimates = np.zeros(self._num_simulations)
self.logger.info("Refutation over {} simulated datasets of size {} each"
.format(self._num_simulations
,self._subset_fraction*len(self._data.index) )
)
def refute_once():
if self._random_state is None:
new_data = self._data.sample(frac=self._subset_fraction)
else:
new_data = self._data.sample(frac=self._subset_fraction,
random_state=self._random_state)
new_estimator = CausalEstimator.get_estimator_object(new_data, self._target_estimand, self._estimate)
new_effect = new_estimator.estimate_effect()
return new_effect.value
# Run refutation in parallel
sample_estimates = Parallel(
n_jobs=self._n_jobs,
verbose=self._verbose
)(delayed(refute_once)() for _ in range(self._num_simulations))
sample_estimates = np.array(sample_estimates)
refute = CausalRefutation(
self._estimate.value,
np.mean(sample_estimates),
refutation_type="Refute: Use a subset of data"
)
# We want to see if the estimate falls in the same distribution as the one generated by the refuter
# Ideally that should be the case as choosing a subset should not have a significant effect on the ability
# of the treatment to affect the outcome
refute.add_significance_test_results(
self.test_significance(self._estimate, sample_estimates)
)
refute.add_refuter(self)
return refute
| import numpy as np
import logging
from joblib import Parallel, delayed
from tqdm.auto import tqdm
from dowhy.causal_refuter import CausalRefuter, CausalRefutation
from dowhy.causal_estimator import CausalEstimator
class DataSubsetRefuter(CausalRefuter):
"""Refute an estimate by rerunning it on a random subset of the original data.
Supports additional parameters that can be specified in the refute_estimate() method. For joblib-related parameters (n_jobs, verbose), please refer to the joblib documentation for more details (https://joblib.readthedocs.io/en/latest/generated/joblib.Parallel.html).
:param subset_fraction: Fraction of the data to be used for re-estimation, which is ``DataSubsetRefuter.DEFAULT_SUBSET_FRACTION`` by default.
:type subset_fraction: float, optional
:param num_simulations: The number of simulations to be run, which is ``CausalRefuter.DEFAULT_NUM_SIMULATIONS`` by default
:type num_simulations: int, optional
:param random_state: The seed value to be added if we wish to repeat the same random behavior. If we wish to repeat the same behavior, we push the same seed into the pseudo-random generator
:type random_state: int, RandomState, optional
:param n_jobs: The maximum number of concurrently running jobs. If -1 all CPUs are used. If 1 is given, no parallel computing code is used at all (this is the default).
:type n_jobs: int, optional
:param verbose: The verbosity level: if non zero, progress messages are printed. Above 50, the output is sent to stdout. The frequency of the messages increases with the verbosity level. If it more than 10, all iterations are reported. The default is 0.
:type verbose: int, optional
"""
# The default subset of the data to be used
DEFAULT_SUBSET_FRACTION = 0.8
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._subset_fraction = kwargs.pop("subset_fraction", 0.8)
self._num_simulations = kwargs.pop("num_simulations", CausalRefuter.DEFAULT_NUM_SIMULATIONS )
self._random_state = kwargs.pop("random_state",None)
self.logger = logging.getLogger(__name__)
def refute_estimate(self, show_progress_bar=False):
sample_estimates = np.zeros(self._num_simulations)
self.logger.info("Refutation over {} simulated datasets of size {} each"
.format(self._num_simulations
,self._subset_fraction*len(self._data.index) )
)
def refute_once():
if self._random_state is None:
new_data = self._data.sample(frac=self._subset_fraction)
else:
new_data = self._data.sample(frac=self._subset_fraction,
random_state=self._random_state)
new_estimator = CausalEstimator.get_estimator_object(new_data, self._target_estimand, self._estimate)
new_effect = new_estimator.estimate_effect()
return new_effect.value
# Run refutation in parallel
sample_estimates = Parallel(
n_jobs=self._n_jobs,
verbose=self._verbose
)(delayed(refute_once)() for _ in tqdm(range(self._num_simulations), colour=CausalRefuter.PROGRESS_BAR_COLOR, disable = not show_progress_bar, desc="Refuting Estimates: "))
sample_estimates = np.array(sample_estimates)
refute = CausalRefutation(
self._estimate.value,
np.mean(sample_estimates),
refutation_type="Refute: Use a subset of data"
)
# We want to see if the estimate falls in the same distribution as the one generated by the refuter
# Ideally that should be the case as choosing a subset should not have a significant effect on the ability
# of the treatment to affect the outcome
refute.add_significance_test_results(
self.test_significance(self._estimate, sample_estimates)
)
refute.add_refuter(self)
return refute
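# --- Illustrative usage sketch (added for clarity; not part of the original class) ---
# Assuming an existing CausalModel `model`, an `identified_estimand` and an `estimate`:
# refutation = model.refute_estimate(
# identified_estimand,
# estimate,
# method_name="data_subset_refuter",
# subset_fraction=0.8,
# num_simulations=100,
# show_progress_bar=True,
# n_jobs=-1,
# )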
| yemaedahrav | a38a03f8b2daa0627833821e92c1b9117925e331 | f94638633ef23064c69c261c324cc4739d036ac0 | better to define "green" as a static variable inside causalrefuter class. and then call it here as CausalRefuter.Progress_Bar_Color | amit-sharma | 364 |
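A minimal sketch of the reviewer's suggestion, assuming the constant lives on the CausalRefuter base class (the class stub below is only for illustration; the attribute name mirrors the one used in the updated code):
class CausalRefuter:
    PROGRESS_BAR_COLOR = "green"

from tqdm.auto import tqdm
for _ in tqdm(range(100), colour=CausalRefuter.PROGRESS_BAR_COLOR):
    pass  # refutation work would go here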
py-why/dowhy | 564 | Small correction in estimate_ftest_pvalue | The error for the model without inputs should be based on the difference of the observed values to the mean in the training data set. However, the mean of the test set was used instead. | null | 2022-07-19 22:11:02+00:00 | 2022-08-16 13:11:57+00:00 | dowhy/gcm/stats.py | """Functions in this module should be considered experimental, meaning there might be breaking API changes in the
future.
"""
from typing import Union, List, Optional, Callable
import numpy as np
from numpy.matlib import repmat
from scipy import stats
from sklearn.linear_model import LinearRegression
from dowhy.gcm.constant import EPS
from dowhy.gcm.util.general import shape_into_2d
def quantile_based_fwer(p_values: Union[np.ndarray, List[float]],
p_values_scaling: Optional[np.ndarray] = None,
quantile: float = 0.5) -> float:
"""Applies a quantile based family wise error rate (FWER) control to the given p-values. This is based on the
approach described in:
Meinshausen, N., Meier, L. and Buehlmann, P. (2009).
p-values for high-dimensional regression. J. Amer. Statist. Assoc.104 1671–1681
:param p_values: A list or array of p-values.
:param p_values_scaling: An optional list of scaling factors for each p-value.
:param quantile: The quantile used for the p-value adjustment. By default, this is the median (0.5).
:return: The p-value that lies on the quantile threshold. Note that this is the quantile based on scaled values
p_values / quantile.
"""
if quantile <= 0 or abs(quantile - 1) >= 1:
raise ValueError("The given quantile is %f, but it needs to be on (0, 1]!" % quantile)
p_values = np.array(p_values)
if p_values_scaling is None:
p_values_scaling = np.ones(p_values.shape[0])
if p_values.shape != p_values_scaling.shape:
raise ValueError("The p-value scaling array needs to have the same dimension as the given p-values.")
p_values_scaling = p_values_scaling[~np.isnan(p_values)]
p_values = p_values[~np.isnan(p_values)]
p_values = p_values * p_values_scaling
p_values[p_values > 1] = 1.0
if p_values.shape[0] == 1:
return float(p_values[0])
else:
return float(min(1.0, np.quantile(p_values / quantile, quantile)))
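# Hedged usage sketch (illustrative helper, not part of the module): combining a handful of
# per-variable p-values with the median-quantile FWER control defined above.
def _example_quantile_based_fwer() -> float:
    p_values = np.array([0.01, 0.20, 0.03, 0.50])
    return quantile_based_fwer(p_values, quantile=0.5)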
def marginal_expectation(prediction_method: Callable[[np.ndarray], np.ndarray],
feature_samples: np.ndarray,
baseline_samples: np.ndarray,
baseline_feature_indices: List[int],
return_averaged_results: bool = True,
feature_perturbation: str = 'randomize_columns_jointly',
max_batch_size: int = -1) -> np.ndarray:
""" Estimates the marginal expectation for samples in baseline_noise_samples when randomizing features that are not
part of baseline_feature_indices. That is, this function estimates
y^i = E[Y | do(x^i_s)] := \\int_x_s' E[Y | x^i_s, x_s'] p(x_s') d x_s',
where x^i_s is the i-th sample from baseline_noise_samples, s denotes the baseline_feature_indices and
x_s' ~ X_s' denotes the randomized features that are not in s. For an approximation of the integral, the given
prediction_method is evaluated multiple times for the same x^i_s, but different x_s' ~ X_s'.
:param prediction_method: Prediction method of interest. This should expect a numpy array as input for making
predictions.
:param feature_samples: Samples from the joint distribution. These are used for randomizing the features that are not in
baseline_feature_indices.
:param baseline_samples: Samples for which the marginal expectation should be estimated.
:param baseline_feature_indices: Column indices of the features in s. The values for these features remain constant
when estimating the expectation.
:param return_averaged_results: If set to True, the expectation over all evaluated samples for the i-th
baseline_noise_samples is returned. If set to False, all corresponding results for the i-th sample are returned.
:param feature_perturbation: Type of feature permutation:
'randomize_columns_independently': Each feature not in s is randomly permuted separately.
'randomize_columns_jointly': All features not in s are jointly permuted. Note that this still represents an
interventional distribution.
:param max_batch_size: Maximum batch size for estimating the predictions. This has a significant influence on the
overall memory usage. If set to -1, all samples are used in one batch.
:return: If return_averaged_results is False, a numpy array where the i-th entry belongs to the marginal expectation
of x^i_s when randomizing the remaining features.
If return_averaged_results is True, a two dimensional numpy array where the i-th entry contains all
predictions for x^i_s when randomizing the remaining features.
"""
feature_samples, baseline_samples = shape_into_2d(feature_samples, baseline_samples)
batch_size = baseline_samples.shape[0] if max_batch_size == -1 else max_batch_size
result = [np.nan] * baseline_samples.shape[0]
# Make copy to avoid manipulating the original matrix.
feature_samples = np.array(feature_samples)
features_to_randomize = np.delete(np.arange(0, feature_samples.shape[1]), baseline_feature_indices)
if feature_perturbation == 'randomize_columns_independently':
feature_samples = permute_features(feature_samples, features_to_randomize, False)
elif feature_perturbation == 'randomize_columns_jointly':
feature_samples = permute_features(feature_samples, features_to_randomize, True)
else:
raise ValueError("Unknown argument %s as feature_perturbation type!" % feature_perturbation)
# The given prediction method has to be evaluated multiple times on a large amount of different inputs. Typically,
# the batch evaluation of a prediction model on multiple inputs at the same time is significantly faster
# than evaluating it on single simples in a for-loop. To make use of this, we try to evaluate as many samples as
# possible in one batch call of the prediction method. However, this also requires a lot of memory for many samples.
# To overcome potential memory issues, multiple batch calls are performed, each with at most batch_size many
# samples. The number of samples that are evaluated is normally
# baseline_noise_samples.shape[0] * feature_samples.shape[0]. Here, we reduce it to
# batch_size * feature_samples.shape[0]. If the batch_size would be set 1, then each baseline_noise_samples is
# evaluated one by one in a for-loop.
inputs = repmat(feature_samples, batch_size, 1)
for offset in range(0, baseline_samples.shape[0], batch_size):
# Each batch consist of at most batch_size * feature_samples.shape[0] many samples. If there are multiple
# batches, the offset indicates the index of the current baseline_noise_samples that has not been evaluated yet.
if offset + batch_size > baseline_samples.shape[0]:
# If the batch size would be larger than the remaining amount of samples, it is reduced to only include the
# remaining baseline_noise_samples.
adjusted_batch_size = baseline_samples.shape[0] - offset
inputs = inputs[:adjusted_batch_size * feature_samples.shape[0]]
else:
adjusted_batch_size = batch_size
for index in range(adjusted_batch_size):
# The inputs consist of batch_size many copies of feature_samples. Here, we set the columns of the features
# in baseline_feature_indices to their respective values in baseline_noise_samples.
inputs[index * feature_samples.shape[0]:(index + 1) * feature_samples.shape[0],
baseline_feature_indices] = baseline_samples[offset + index, baseline_feature_indices]
# After creating the (potentially large) input data matrix, we can evaluate the prediction method.
predictions = np.array(prediction_method(inputs))
for index in range(adjusted_batch_size):
# Here, offset + index now indicates the sample index in baseline_noise_samples.
if return_averaged_results:
# This would average all prediction results obtained for the 'offset + index'-th sample in
# baseline_noise_samples. This is, y^(offset + index) = E[Y | do(x^(offset + index)_s)].
result[offset + index] = np.mean(predictions[index * feature_samples.shape[0]:
(index + 1) * feature_samples.shape[0]], axis=0)
else:
# This would return all prediction results obtained for the 'offset + index'-th sample in
# baseline_noise_samples, i.e. the results are not averaged.
result[offset + index] = predictions[index * feature_samples.shape[0]:
(index + 1) * feature_samples.shape[0]]
return np.array(result)
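# Hedged usage sketch (illustrative helper, not part of the module): estimating
# E[Y | do(X_0 = x_0)] for a fitted sklearn model by randomizing the remaining column.
def _example_marginal_expectation() -> np.ndarray:
    X = np.random.normal(size=(500, 2))
    Y = 2 * X[:, 0] + X[:, 1]
    model = LinearRegression().fit(X, Y)
    # Keep column 0 fixed at the values of the first five samples; randomize column 1.
    return marginal_expectation(model.predict, feature_samples=X, baseline_samples=X[:5],
                                baseline_feature_indices=[0])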
def permute_features(feature_samples: np.ndarray,
features_to_permute: Union[List[int], np.ndarray],
randomize_features_jointly: bool) -> np.ndarray:
# Making copy to ensure that the original object is not modified.
feature_samples = np.array(feature_samples)
if randomize_features_jointly:
# Permute samples jointly. This still represents an interventional distribution.
feature_samples[:, features_to_permute] \
= feature_samples[np.random.choice(feature_samples.shape[0],
feature_samples.shape[0],
replace=False)][:, features_to_permute]
else:
# Permute samples independently.
for feature in features_to_permute:
np.random.shuffle(feature_samples[:, feature])
return feature_samples
def estimate_ftest_pvalue(X_training_a: np.ndarray,
X_training_b: np.ndarray,
Y_training: np.ndarray,
X_test_a: np.ndarray,
X_test_b: np.ndarray,
Y_test: np.ndarray) -> float:
"""Estimates the p-value for the null hypothesis that the same regression error with less parameters can be
achieved. This is, a linear model trained on a data set A with d number of features has the same performance
(in terms of squared error) relative to the number of features as a model trained on a data set B with k number
features, where k < d. Here, both data sets need to have the same target values. A small p-value would
indicate that the model performances are significantly different.
Note that all given test samples are utilized in the f-test.
See https://en.wikipedia.org/wiki/F-test#Regression_problems for more details.
:param X_training_a: Input training samples for model A.
    :param X_training_b: Input training samples for model B. These samples should have fewer features than samples in X_training_a.
:param Y_training: Target training values.
:param X_test_a: Test samples for model A.
:param X_test_b: Test samples for model B.
:param Y_test: Test values.
:return: A p-value on [0, 1].
"""
X_training_a, X_test_a = shape_into_2d(X_training_a, X_test_a)
if X_training_b.size > 0:
X_training_b, X_test_b = shape_into_2d(X_training_b, X_test_b)
else:
X_training_b = X_training_b.reshape(0, 0)
X_test_b = X_test_b.reshape(0, 0)
if X_training_a.shape[1] <= X_training_b.shape[1]:
raise ValueError("The data X_training_a should have more dimensions (model parameters) than the data "
"X_training_b!")
ssr_a = np.sum(
(Y_test - LinearRegression().fit(X_training_a, Y_training).predict(X_test_a)) ** 2)
if X_training_b.shape[1] > 0:
ssr_b = np.sum(
(Y_test - LinearRegression().fit(X_training_b, Y_training).predict(X_test_b)) ** 2)
else:
ssr_b = np.sum((Y_test - np.mean(Y_test)) ** 2)
dof_diff_1 = (X_test_a.shape[1] - X_test_b.shape[1]) # p1 - p2
dof_diff_2 = (X_test_a.shape[0] - X_test_a.shape[1] - 1) # n - p2 (parameters include intercept)
f_statistic = (ssr_b - ssr_a) / dof_diff_1 * dof_diff_2
if ssr_a < EPS:
ssr_a = 0
if ssr_b < EPS:
ssr_b = 0
if ssr_a == 0 and ssr_b == 0:
f_statistic = 0
elif ssr_a != 0:
f_statistic /= ssr_a
return stats.f.sf(f_statistic, dof_diff_1, dof_diff_2)
| """Functions in this module should be considered experimental, meaning there might be breaking API changes in the
future.
"""
from typing import Union, List, Optional, Callable
import numpy as np
from numpy.matlib import repmat
from scipy import stats
from sklearn.linear_model import LinearRegression
from dowhy.gcm.constant import EPS
from dowhy.gcm.util.general import shape_into_2d
def quantile_based_fwer(p_values: Union[np.ndarray, List[float]],
p_values_scaling: Optional[np.ndarray] = None,
quantile: float = 0.5) -> float:
"""Applies a quantile based family wise error rate (FWER) control to the given p-values. This is based on the
approach described in:
Meinshausen, N., Meier, L. and Buehlmann, P. (2009).
p-values for high-dimensional regression. J. Amer. Statist. Assoc.104 1671–1681
:param p_values: A list or array of p-values.
:param p_values_scaling: An optional list of scaling factors for each p-value.
:param quantile: The quantile used for the p-value adjustment. By default, this is the median (0.5).
:return: The p-value that lies on the quantile threshold. Note that this is the quantile based on scaled values
p_values / quantile.
"""
if quantile <= 0 or abs(quantile - 1) >= 1:
raise ValueError("The given quantile is %f, but it needs to be on (0, 1]!" % quantile)
p_values = np.array(p_values)
if p_values_scaling is None:
p_values_scaling = np.ones(p_values.shape[0])
if p_values.shape != p_values_scaling.shape:
raise ValueError("The p-value scaling array needs to have the same dimension as the given p-values.")
p_values_scaling = p_values_scaling[~np.isnan(p_values)]
p_values = p_values[~np.isnan(p_values)]
p_values = p_values * p_values_scaling
p_values[p_values > 1] = 1.0
if p_values.shape[0] == 1:
return float(p_values[0])
else:
return float(min(1.0, np.quantile(p_values / quantile, quantile)))
def marginal_expectation(prediction_method: Callable[[np.ndarray], np.ndarray],
feature_samples: np.ndarray,
baseline_samples: np.ndarray,
baseline_feature_indices: List[int],
return_averaged_results: bool = True,
feature_perturbation: str = 'randomize_columns_jointly',
max_batch_size: int = -1) -> np.ndarray:
""" Estimates the marginal expectation for samples in baseline_noise_samples when randomizing features that are not
    part of baseline_feature_indices. That is, this function estimates
y^i = E[Y | do(x^i_s)] := \\int_x_s' E[Y | x^i_s, x_s'] p(x_s') d x_s',
where x^i_s is the i-th sample from baseline_noise_samples, s denotes the baseline_feature_indices and
x_s' ~ X_s' denotes the randomized features that are not in s. For an approximation of the integral, the given
prediction_method is evaluated multiple times for the same x^i_s, but different x_s' ~ X_s'.
:param prediction_method: Prediction method of interest. This should expect a numpy array as input for making
predictions.
:param feature_samples: Samples from the joint distribution. These are used for randomizing the features that are not in
baseline_feature_indices.
:param baseline_samples: Samples for which the marginal expectation should be estimated.
    :param baseline_feature_indices: Column indices of the features in s. The values of these features remain constant
when estimating the expectation.
:param return_averaged_results: If set to True, the expectation over all evaluated samples for the i-th
baseline_noise_samples is returned. If set to False, all corresponding results for the i-th sample are returned.
:param feature_perturbation: Type of feature permutation:
'randomize_columns_independently': Each feature not in s is randomly permuted separately.
'randomize_columns_jointly': All features not in s are jointly permuted. Note that this still represents an
interventional distribution.
    :param max_batch_size: Maximum batch size for estimating the predictions. This has a significant influence on the
overall memory usage. If set to -1, all samples are used in one batch.
    :return: If return_averaged_results is True, a numpy array where the i-th entry is the marginal expectation
    of x^i_s when randomizing the remaining features.
    If return_averaged_results is False, a two dimensional numpy array where the i-th entry contains all
    predictions for x^i_s when randomizing the remaining features.
"""
feature_samples, baseline_samples = shape_into_2d(feature_samples, baseline_samples)
batch_size = baseline_samples.shape[0] if max_batch_size == -1 else max_batch_size
result = [np.nan] * baseline_samples.shape[0]
# Make copy to avoid manipulating the original matrix.
feature_samples = np.array(feature_samples)
features_to_randomize = np.delete(np.arange(0, feature_samples.shape[1]), baseline_feature_indices)
if feature_perturbation == 'randomize_columns_independently':
feature_samples = permute_features(feature_samples, features_to_randomize, False)
elif feature_perturbation == 'randomize_columns_jointly':
feature_samples = permute_features(feature_samples, features_to_randomize, True)
else:
raise ValueError("Unknown argument %s as feature_perturbation type!" % feature_perturbation)
# The given prediction method has to be evaluated multiple times on a large amount of different inputs. Typically,
# the batch evaluation of a prediction model on multiple inputs at the same time is significantly faster
    # than evaluating it on single samples in a for-loop. To make use of this, we try to evaluate as many samples as
# possible in one batch call of the prediction method. However, this also requires a lot of memory for many samples.
# To overcome potential memory issues, multiple batch calls are performed, each with at most batch_size many
# samples. The number of samples that are evaluated is normally
# baseline_noise_samples.shape[0] * feature_samples.shape[0]. Here, we reduce it to
# batch_size * feature_samples.shape[0]. If the batch_size would be set 1, then each baseline_noise_samples is
# evaluated one by one in a for-loop.
inputs = repmat(feature_samples, batch_size, 1)
for offset in range(0, baseline_samples.shape[0], batch_size):
        # Each batch consists of at most batch_size * feature_samples.shape[0] many samples. If there are multiple
# batches, the offset indicates the index of the current baseline_noise_samples that has not been evaluated yet.
if offset + batch_size > baseline_samples.shape[0]:
# If the batch size would be larger than the remaining amount of samples, it is reduced to only include the
# remaining baseline_noise_samples.
adjusted_batch_size = baseline_samples.shape[0] - offset
inputs = inputs[:adjusted_batch_size * feature_samples.shape[0]]
else:
adjusted_batch_size = batch_size
for index in range(adjusted_batch_size):
# The inputs consist of batch_size many copies of feature_samples. Here, we set the columns of the features
# in baseline_feature_indices to their respective values in baseline_noise_samples.
inputs[index * feature_samples.shape[0]:(index + 1) * feature_samples.shape[0],
baseline_feature_indices] = baseline_samples[offset + index, baseline_feature_indices]
# After creating the (potentially large) input data matrix, we can evaluate the prediction method.
predictions = np.array(prediction_method(inputs))
for index in range(adjusted_batch_size):
# Here, offset + index now indicates the sample index in baseline_noise_samples.
if return_averaged_results:
# This would average all prediction results obtained for the 'offset + index'-th sample in
# baseline_noise_samples. This is, y^(offset + index) = E[Y | do(x^(offset + index)_s)].
result[offset + index] = np.mean(predictions[index * feature_samples.shape[0]:
(index + 1) * feature_samples.shape[0]], axis=0)
else:
# This would return all prediction results obtained for the 'offset + index'-th sample in
# baseline_noise_samples, i.e. the results are not averaged.
result[offset + index] = predictions[index * feature_samples.shape[0]:
(index + 1) * feature_samples.shape[0]]
return np.array(result)
def permute_features(feature_samples: np.ndarray,
features_to_permute: Union[List[int], np.ndarray],
randomize_features_jointly: bool) -> np.ndarray:
# Making copy to ensure that the original object is not modified.
feature_samples = np.array(feature_samples)
if randomize_features_jointly:
# Permute samples jointly. This still represents an interventional distribution.
feature_samples[:, features_to_permute] \
= feature_samples[np.random.choice(feature_samples.shape[0],
feature_samples.shape[0],
replace=False)][:, features_to_permute]
else:
# Permute samples independently.
for feature in features_to_permute:
np.random.shuffle(feature_samples[:, feature])
return feature_samples
def estimate_ftest_pvalue(X_training_a: np.ndarray,
X_training_b: np.ndarray,
Y_training: np.ndarray,
X_test_a: np.ndarray,
X_test_b: np.ndarray,
Y_test: np.ndarray) -> float:
"""Estimates the p-value for the null hypothesis that the same regression error with less parameters can be
achieved. This is, a linear model trained on a data set A with d number of features has the same performance
(in terms of squared error) relative to the number of features as a model trained on a data set B with k number
features, where k < d. Here, both data sets need to have the same target values. A small p-value would
indicate that the model performances are significantly different.
Note that all given test samples are utilized in the f-test.
See https://en.wikipedia.org/wiki/F-test#Regression_problems for more details.
:param X_training_a: Input training samples for model A.
    :param X_training_b: Input training samples for model B. These samples should have fewer features than samples in X_training_a.
:param Y_training: Target training values.
:param X_test_a: Test samples for model A.
:param X_test_b: Test samples for model B.
:param Y_test: Test values.
:return: A p-value on [0, 1].
"""
X_training_a, X_test_a = shape_into_2d(X_training_a, X_test_a)
if X_training_b.size > 0:
X_training_b, X_test_b = shape_into_2d(X_training_b, X_test_b)
else:
X_training_b = X_training_b.reshape(0, 0)
X_test_b = X_test_b.reshape(0, 0)
if X_training_a.shape[1] <= X_training_b.shape[1]:
raise ValueError("The data X_training_a should have more dimensions (model parameters) than the data "
"X_training_b!")
ssr_a = np.sum(
(Y_test - LinearRegression().fit(X_training_a, Y_training).predict(X_test_a)) ** 2)
if X_training_b.shape[1] > 0:
ssr_b = np.sum(
(Y_test - LinearRegression().fit(X_training_b, Y_training).predict(X_test_b)) ** 2)
else:
ssr_b = np.sum((Y_test - np.mean(Y_training)) ** 2)
dof_diff_1 = (X_test_a.shape[1] - X_test_b.shape[1]) # p1 - p2
dof_diff_2 = (X_test_a.shape[0] - X_test_a.shape[1] - 1) # n - p2 (parameters include intercept)
f_statistic = (ssr_b - ssr_a) / dof_diff_1 * dof_diff_2
if ssr_a < EPS:
ssr_a = 0
if ssr_b < EPS:
ssr_b = 0
if ssr_a == 0 and ssr_b == 0:
f_statistic = 0
elif ssr_a != 0:
f_statistic /= ssr_a
return stats.f.sf(f_statistic, dof_diff_1, dof_diff_2)
| bloebp | 2b4832a91e7ab54d31b116d19958fddecc2c8510 | fe22abab886c5aea768b8474295999a71d914be9 | But this is not the error of the model (without inputs) though. Why does this make sense? Can you also point to any articles where this is advocated or advised? | kailashbuki | 365 |
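To make the point behind this exchange concrete, here is a minimal, self-contained sketch (not part of the PR or of the dowhy code base; the data below are made up) contrasting the two baselines discussed: the mean of `Y_test`, which the code used before this PR, and the mean of `Y_training`, which is what a model without input features actually predicts.

```python
# Illustrative sketch only (not part of the PR). An intercept-only model, i.e. a model
# with an empty feature set, can only learn the mean of the training targets, so its
# held-out error should be measured against the training mean.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
Y_training = rng.normal(loc=2.0, size=100)  # made-up training targets
Y_test = rng.normal(loc=2.0, size=50)       # made-up test targets

# Fitting a linear model on a single all-zero feature reduces it to its intercept,
# which least squares sets to the training mean.
model = LinearRegression().fit(np.zeros((Y_training.size, 1)), Y_training)
assert np.isclose(model.predict(np.zeros((1, 1)))[0], Y_training.mean())

ssr_test_mean = np.sum((Y_test - np.mean(Y_test)) ** 2)        # baseline before this PR
ssr_train_mean = np.sum((Y_test - np.mean(Y_training)) ** 2)   # baseline after this PR
# The first quantity can never exceed the second (the test mean minimizes the SSR on the
# test set), so using it understates the error of the model without inputs.
print(ssr_test_mean, ssr_train_mean)
```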
py-why/dowhy | 564 | Small correction in estimate_ftest_pvalue | The error for the model without inputs should be based on the difference between the observed values and the mean of the training data set. However, the mean of the test set was used instead. | null | 2022-07-19 22:11:02+00:00 | 2022-08-16 13:11:57+00:00 | dowhy/gcm/stats.py | """Functions in this module should be considered experimental, meaning there might be breaking API changes in the
future.
"""
from typing import Union, List, Optional, Callable
import numpy as np
from numpy.matlib import repmat
from scipy import stats
from sklearn.linear_model import LinearRegression
from dowhy.gcm.constant import EPS
from dowhy.gcm.util.general import shape_into_2d
def quantile_based_fwer(p_values: Union[np.ndarray, List[float]],
p_values_scaling: Optional[np.ndarray] = None,
quantile: float = 0.5) -> float:
"""Applies a quantile based family wise error rate (FWER) control to the given p-values. This is based on the
approach described in:
Meinshausen, N., Meier, L. and Buehlmann, P. (2009).
p-values for high-dimensional regression. J. Amer. Statist. Assoc.104 1671–1681
:param p_values: A list or array of p-values.
:param p_values_scaling: An optional list of scaling factors for each p-value.
:param quantile: The quantile used for the p-value adjustment. By default, this is the median (0.5).
:return: The p-value that lies on the quantile threshold. Note that this is the quantile based on scaled values
p_values / quantile.
"""
if quantile <= 0 or abs(quantile - 1) >= 1:
raise ValueError("The given quantile is %f, but it needs to be on (0, 1]!" % quantile)
p_values = np.array(p_values)
if p_values_scaling is None:
p_values_scaling = np.ones(p_values.shape[0])
if p_values.shape != p_values_scaling.shape:
raise ValueError("The p-value scaling array needs to have the same dimension as the given p-values.")
p_values_scaling = p_values_scaling[~np.isnan(p_values)]
p_values = p_values[~np.isnan(p_values)]
p_values = p_values * p_values_scaling
p_values[p_values > 1] = 1.0
if p_values.shape[0] == 1:
return float(p_values[0])
else:
return float(min(1.0, np.quantile(p_values / quantile, quantile)))
def marginal_expectation(prediction_method: Callable[[np.ndarray], np.ndarray],
feature_samples: np.ndarray,
baseline_samples: np.ndarray,
baseline_feature_indices: List[int],
return_averaged_results: bool = True,
feature_perturbation: str = 'randomize_columns_jointly',
max_batch_size: int = -1) -> np.ndarray:
""" Estimates the marginal expectation for samples in baseline_noise_samples when randomizing features that are not
    part of baseline_feature_indices. That is, this function estimates
y^i = E[Y | do(x^i_s)] := \\int_x_s' E[Y | x^i_s, x_s'] p(x_s') d x_s',
where x^i_s is the i-th sample from baseline_noise_samples, s denotes the baseline_feature_indices and
x_s' ~ X_s' denotes the randomized features that are not in s. For an approximation of the integral, the given
prediction_method is evaluated multiple times for the same x^i_s, but different x_s' ~ X_s'.
:param prediction_method: Prediction method of interest. This should expect a numpy array as input for making
predictions.
:param feature_samples: Samples from the joint distribution. These are used for randomizing the features that are not in
baseline_feature_indices.
:param baseline_samples: Samples for which the marginal expectation should be estimated.
    :param baseline_feature_indices: Column indices of the features in s. The values of these features remain constant
when estimating the expectation.
:param return_averaged_results: If set to True, the expectation over all evaluated samples for the i-th
baseline_noise_samples is returned. If set to False, all corresponding results for the i-th sample are returned.
:param feature_perturbation: Type of feature permutation:
'randomize_columns_independently': Each feature not in s is randomly permuted separately.
'randomize_columns_jointly': All features not in s are jointly permuted. Note that this still represents an
interventional distribution.
    :param max_batch_size: Maximum batch size for estimating the predictions. This has a significant influence on the
overall memory usage. If set to -1, all samples are used in one batch.
    :return: If return_averaged_results is True, a numpy array where the i-th entry is the marginal expectation
    of x^i_s when randomizing the remaining features.
    If return_averaged_results is False, a two dimensional numpy array where the i-th entry contains all
    predictions for x^i_s when randomizing the remaining features.
"""
feature_samples, baseline_samples = shape_into_2d(feature_samples, baseline_samples)
batch_size = baseline_samples.shape[0] if max_batch_size == -1 else max_batch_size
result = [np.nan] * baseline_samples.shape[0]
# Make copy to avoid manipulating the original matrix.
feature_samples = np.array(feature_samples)
features_to_randomize = np.delete(np.arange(0, feature_samples.shape[1]), baseline_feature_indices)
if feature_perturbation == 'randomize_columns_independently':
feature_samples = permute_features(feature_samples, features_to_randomize, False)
elif feature_perturbation == 'randomize_columns_jointly':
feature_samples = permute_features(feature_samples, features_to_randomize, True)
else:
raise ValueError("Unknown argument %s as feature_perturbation type!" % feature_perturbation)
# The given prediction method has to be evaluated multiple times on a large amount of different inputs. Typically,
# the batch evaluation of a prediction model on multiple inputs at the same time is significantly faster
    # than evaluating it on single samples in a for-loop. To make use of this, we try to evaluate as many samples as
# possible in one batch call of the prediction method. However, this also requires a lot of memory for many samples.
# To overcome potential memory issues, multiple batch calls are performed, each with at most batch_size many
# samples. The number of samples that are evaluated is normally
# baseline_noise_samples.shape[0] * feature_samples.shape[0]. Here, we reduce it to
# batch_size * feature_samples.shape[0]. If the batch_size would be set 1, then each baseline_noise_samples is
# evaluated one by one in a for-loop.
inputs = repmat(feature_samples, batch_size, 1)
for offset in range(0, baseline_samples.shape[0], batch_size):
        # Each batch consists of at most batch_size * feature_samples.shape[0] many samples. If there are multiple
# batches, the offset indicates the index of the current baseline_noise_samples that has not been evaluated yet.
if offset + batch_size > baseline_samples.shape[0]:
# If the batch size would be larger than the remaining amount of samples, it is reduced to only include the
# remaining baseline_noise_samples.
adjusted_batch_size = baseline_samples.shape[0] - offset
inputs = inputs[:adjusted_batch_size * feature_samples.shape[0]]
else:
adjusted_batch_size = batch_size
for index in range(adjusted_batch_size):
# The inputs consist of batch_size many copies of feature_samples. Here, we set the columns of the features
# in baseline_feature_indices to their respective values in baseline_noise_samples.
inputs[index * feature_samples.shape[0]:(index + 1) * feature_samples.shape[0],
baseline_feature_indices] = baseline_samples[offset + index, baseline_feature_indices]
# After creating the (potentially large) input data matrix, we can evaluate the prediction method.
predictions = np.array(prediction_method(inputs))
for index in range(adjusted_batch_size):
# Here, offset + index now indicates the sample index in baseline_noise_samples.
if return_averaged_results:
# This would average all prediction results obtained for the 'offset + index'-th sample in
# baseline_noise_samples. This is, y^(offset + index) = E[Y | do(x^(offset + index)_s)].
result[offset + index] = np.mean(predictions[index * feature_samples.shape[0]:
(index + 1) * feature_samples.shape[0]], axis=0)
else:
# This would return all prediction results obtained for the 'offset + index'-th sample in
# baseline_noise_samples, i.e. the results are not averaged.
result[offset + index] = predictions[index * feature_samples.shape[0]:
(index + 1) * feature_samples.shape[0]]
return np.array(result)
def permute_features(feature_samples: np.ndarray,
features_to_permute: Union[List[int], np.ndarray],
randomize_features_jointly: bool) -> np.ndarray:
# Making copy to ensure that the original object is not modified.
feature_samples = np.array(feature_samples)
if randomize_features_jointly:
# Permute samples jointly. This still represents an interventional distribution.
feature_samples[:, features_to_permute] \
= feature_samples[np.random.choice(feature_samples.shape[0],
feature_samples.shape[0],
replace=False)][:, features_to_permute]
else:
# Permute samples independently.
for feature in features_to_permute:
np.random.shuffle(feature_samples[:, feature])
return feature_samples
def estimate_ftest_pvalue(X_training_a: np.ndarray,
X_training_b: np.ndarray,
Y_training: np.ndarray,
X_test_a: np.ndarray,
X_test_b: np.ndarray,
Y_test: np.ndarray) -> float:
"""Estimates the p-value for the null hypothesis that the same regression error with less parameters can be
achieved. This is, a linear model trained on a data set A with d number of features has the same performance
(in terms of squared error) relative to the number of features as a model trained on a data set B with k number
features, where k < d. Here, both data sets need to have the same target values. A small p-value would
indicate that the model performances are significantly different.
Note that all given test samples are utilized in the f-test.
See https://en.wikipedia.org/wiki/F-test#Regression_problems for more details.
:param X_training_a: Input training samples for model A.
    :param X_training_b: Input training samples for model B. These samples should have fewer features than samples in X_training_a.
:param Y_training: Target training values.
:param X_test_a: Test samples for model A.
:param X_test_b: Test samples for model B.
:param Y_test: Test values.
:return: A p-value on [0, 1].
"""
X_training_a, X_test_a = shape_into_2d(X_training_a, X_test_a)
if X_training_b.size > 0:
X_training_b, X_test_b = shape_into_2d(X_training_b, X_test_b)
else:
X_training_b = X_training_b.reshape(0, 0)
X_test_b = X_test_b.reshape(0, 0)
if X_training_a.shape[1] <= X_training_b.shape[1]:
raise ValueError("The data X_training_a should have more dimensions (model parameters) than the data "
"X_training_b!")
ssr_a = np.sum(
(Y_test - LinearRegression().fit(X_training_a, Y_training).predict(X_test_a)) ** 2)
if X_training_b.shape[1] > 0:
ssr_b = np.sum(
(Y_test - LinearRegression().fit(X_training_b, Y_training).predict(X_test_b)) ** 2)
else:
ssr_b = np.sum((Y_test - np.mean(Y_test)) ** 2)
dof_diff_1 = (X_test_a.shape[1] - X_test_b.shape[1]) # p1 - p2
dof_diff_2 = (X_test_a.shape[0] - X_test_a.shape[1] - 1) # n - p2 (parameters include intercept)
f_statistic = (ssr_b - ssr_a) / dof_diff_1 * dof_diff_2
if ssr_a < EPS:
ssr_a = 0
if ssr_b < EPS:
ssr_b = 0
if ssr_a == 0 and ssr_b == 0:
f_statistic = 0
elif ssr_a != 0:
f_statistic /= ssr_a
return stats.f.sf(f_statistic, dof_diff_1, dof_diff_2)
| """Functions in this module should be considered experimental, meaning there might be breaking API changes in the
future.
"""
from typing import Union, List, Optional, Callable
import numpy as np
from numpy.matlib import repmat
from scipy import stats
from sklearn.linear_model import LinearRegression
from dowhy.gcm.constant import EPS
from dowhy.gcm.util.general import shape_into_2d
def quantile_based_fwer(p_values: Union[np.ndarray, List[float]],
p_values_scaling: Optional[np.ndarray] = None,
quantile: float = 0.5) -> float:
"""Applies a quantile based family wise error rate (FWER) control to the given p-values. This is based on the
approach described in:
Meinshausen, N., Meier, L. and Buehlmann, P. (2009).
p-values for high-dimensional regression. J. Amer. Statist. Assoc.104 1671–1681
:param p_values: A list or array of p-values.
:param p_values_scaling: An optional list of scaling factors for each p-value.
:param quantile: The quantile used for the p-value adjustment. By default, this is the median (0.5).
:return: The p-value that lies on the quantile threshold. Note that this is the quantile based on scaled values
p_values / quantile.
"""
if quantile <= 0 or abs(quantile - 1) >= 1:
raise ValueError("The given quantile is %f, but it needs to be on (0, 1]!" % quantile)
p_values = np.array(p_values)
if p_values_scaling is None:
p_values_scaling = np.ones(p_values.shape[0])
if p_values.shape != p_values_scaling.shape:
raise ValueError("The p-value scaling array needs to have the same dimension as the given p-values.")
p_values_scaling = p_values_scaling[~np.isnan(p_values)]
p_values = p_values[~np.isnan(p_values)]
p_values = p_values * p_values_scaling
p_values[p_values > 1] = 1.0
if p_values.shape[0] == 1:
return float(p_values[0])
else:
return float(min(1.0, np.quantile(p_values / quantile, quantile)))
def marginal_expectation(prediction_method: Callable[[np.ndarray], np.ndarray],
feature_samples: np.ndarray,
baseline_samples: np.ndarray,
baseline_feature_indices: List[int],
return_averaged_results: bool = True,
feature_perturbation: str = 'randomize_columns_jointly',
max_batch_size: int = -1) -> np.ndarray:
""" Estimates the marginal expectation for samples in baseline_noise_samples when randomizing features that are not
    part of baseline_feature_indices. That is, this function estimates
y^i = E[Y | do(x^i_s)] := \\int_x_s' E[Y | x^i_s, x_s'] p(x_s') d x_s',
where x^i_s is the i-th sample from baseline_noise_samples, s denotes the baseline_feature_indices and
x_s' ~ X_s' denotes the randomized features that are not in s. For an approximation of the integral, the given
prediction_method is evaluated multiple times for the same x^i_s, but different x_s' ~ X_s'.
:param prediction_method: Prediction method of interest. This should expect a numpy array as input for making
predictions.
:param feature_samples: Samples from the joint distribution. These are used for randomizing the features that are not in
baseline_feature_indices.
:param baseline_samples: Samples for which the marginal expectation should be estimated.
    :param baseline_feature_indices: Column indices of the features in s. The values of these features remain constant
when estimating the expectation.
:param return_averaged_results: If set to True, the expectation over all evaluated samples for the i-th
baseline_noise_samples is returned. If set to False, all corresponding results for the i-th sample are returned.
:param feature_perturbation: Type of feature permutation:
'randomize_columns_independently': Each feature not in s is randomly permuted separately.
'randomize_columns_jointly': All features not in s are jointly permuted. Note that this still represents an
interventional distribution.
    :param max_batch_size: Maximum batch size for estimating the predictions. This has a significant influence on the
overall memory usage. If set to -1, all samples are used in one batch.
    :return: If return_averaged_results is True, a numpy array where the i-th entry is the marginal expectation
    of x^i_s when randomizing the remaining features.
    If return_averaged_results is False, a two dimensional numpy array where the i-th entry contains all
    predictions for x^i_s when randomizing the remaining features.
"""
feature_samples, baseline_samples = shape_into_2d(feature_samples, baseline_samples)
batch_size = baseline_samples.shape[0] if max_batch_size == -1 else max_batch_size
result = [np.nan] * baseline_samples.shape[0]
# Make copy to avoid manipulating the original matrix.
feature_samples = np.array(feature_samples)
features_to_randomize = np.delete(np.arange(0, feature_samples.shape[1]), baseline_feature_indices)
if feature_perturbation == 'randomize_columns_independently':
feature_samples = permute_features(feature_samples, features_to_randomize, False)
elif feature_perturbation == 'randomize_columns_jointly':
feature_samples = permute_features(feature_samples, features_to_randomize, True)
else:
raise ValueError("Unknown argument %s as feature_perturbation type!" % feature_perturbation)
# The given prediction method has to be evaluated multiple times on a large amount of different inputs. Typically,
# the batch evaluation of a prediction model on multiple inputs at the same time is significantly faster
    # than evaluating it on single samples in a for-loop. To make use of this, we try to evaluate as many samples as
# possible in one batch call of the prediction method. However, this also requires a lot of memory for many samples.
# To overcome potential memory issues, multiple batch calls are performed, each with at most batch_size many
# samples. The number of samples that are evaluated is normally
# baseline_noise_samples.shape[0] * feature_samples.shape[0]. Here, we reduce it to
# batch_size * feature_samples.shape[0]. If the batch_size would be set 1, then each baseline_noise_samples is
# evaluated one by one in a for-loop.
inputs = repmat(feature_samples, batch_size, 1)
for offset in range(0, baseline_samples.shape[0], batch_size):
        # Each batch consists of at most batch_size * feature_samples.shape[0] many samples. If there are multiple
# batches, the offset indicates the index of the current baseline_noise_samples that has not been evaluated yet.
if offset + batch_size > baseline_samples.shape[0]:
# If the batch size would be larger than the remaining amount of samples, it is reduced to only include the
# remaining baseline_noise_samples.
adjusted_batch_size = baseline_samples.shape[0] - offset
inputs = inputs[:adjusted_batch_size * feature_samples.shape[0]]
else:
adjusted_batch_size = batch_size
for index in range(adjusted_batch_size):
# The inputs consist of batch_size many copies of feature_samples. Here, we set the columns of the features
# in baseline_feature_indices to their respective values in baseline_noise_samples.
inputs[index * feature_samples.shape[0]:(index + 1) * feature_samples.shape[0],
baseline_feature_indices] = baseline_samples[offset + index, baseline_feature_indices]
# After creating the (potentially large) input data matrix, we can evaluate the prediction method.
predictions = np.array(prediction_method(inputs))
for index in range(adjusted_batch_size):
# Here, offset + index now indicates the sample index in baseline_noise_samples.
if return_averaged_results:
# This would average all prediction results obtained for the 'offset + index'-th sample in
# baseline_noise_samples. This is, y^(offset + index) = E[Y | do(x^(offset + index)_s)].
result[offset + index] = np.mean(predictions[index * feature_samples.shape[0]:
(index + 1) * feature_samples.shape[0]], axis=0)
else:
# This would return all prediction results obtained for the 'offset + index'-th sample in
# baseline_noise_samples, i.e. the results are not averaged.
result[offset + index] = predictions[index * feature_samples.shape[0]:
(index + 1) * feature_samples.shape[0]]
return np.array(result)
def permute_features(feature_samples: np.ndarray,
features_to_permute: Union[List[int], np.ndarray],
randomize_features_jointly: bool) -> np.ndarray:
# Making copy to ensure that the original object is not modified.
feature_samples = np.array(feature_samples)
if randomize_features_jointly:
# Permute samples jointly. This still represents an interventional distribution.
feature_samples[:, features_to_permute] \
= feature_samples[np.random.choice(feature_samples.shape[0],
feature_samples.shape[0],
replace=False)][:, features_to_permute]
else:
# Permute samples independently.
for feature in features_to_permute:
np.random.shuffle(feature_samples[:, feature])
return feature_samples
def estimate_ftest_pvalue(X_training_a: np.ndarray,
X_training_b: np.ndarray,
Y_training: np.ndarray,
X_test_a: np.ndarray,
X_test_b: np.ndarray,
Y_test: np.ndarray) -> float:
"""Estimates the p-value for the null hypothesis that the same regression error with less parameters can be
achieved. This is, a linear model trained on a data set A with d number of features has the same performance
(in terms of squared error) relative to the number of features as a model trained on a data set B with k number
features, where k < d. Here, both data sets need to have the same target values. A small p-value would
indicate that the model performances are significantly different.
Note that all given test samples are utilized in the f-test.
See https://en.wikipedia.org/wiki/F-test#Regression_problems for more details.
:param X_training_a: Input training samples for model A.
    :param X_training_b: Input training samples for model B. These samples should have fewer features than samples in X_training_a.
:param Y_training: Target training values.
:param X_test_a: Test samples for model A.
:param X_test_b: Test samples for model B.
:param Y_test: Test values.
:return: A p-value on [0, 1].
"""
X_training_a, X_test_a = shape_into_2d(X_training_a, X_test_a)
if X_training_b.size > 0:
X_training_b, X_test_b = shape_into_2d(X_training_b, X_test_b)
else:
X_training_b = X_training_b.reshape(0, 0)
X_test_b = X_test_b.reshape(0, 0)
if X_training_a.shape[1] <= X_training_b.shape[1]:
raise ValueError("The data X_training_a should have more dimensions (model parameters) than the data "
"X_training_b!")
ssr_a = np.sum(
(Y_test - LinearRegression().fit(X_training_a, Y_training).predict(X_test_a)) ** 2)
if X_training_b.shape[1] > 0:
ssr_b = np.sum(
(Y_test - LinearRegression().fit(X_training_b, Y_training).predict(X_test_b)) ** 2)
else:
ssr_b = np.sum((Y_test - np.mean(Y_training)) ** 2)
dof_diff_1 = (X_test_a.shape[1] - X_test_b.shape[1]) # p1 - p2
dof_diff_2 = (X_test_a.shape[0] - X_test_a.shape[1] - 1) # n - p2 (parameters include intercept)
f_statistic = (ssr_b - ssr_a) / dof_diff_1 * dof_diff_2
if ssr_a < EPS:
ssr_a = 0
if ssr_b < EPS:
ssr_b = 0
if ssr_a == 0 and ssr_b == 0:
f_statistic = 0
elif ssr_a != 0:
f_statistic /= ssr_a
return stats.f.sf(f_statistic, dof_diff_1, dof_diff_2)
| bloebp | 2b4832a91e7ab54d31b116d19958fddecc2c8510 | fe22abab886c5aea768b8474295999a71d914be9 | This else condition is the scenario where we don't have input features, i.e. empty conditioning set. Then we have: ```E[Y | {}] = E[Y]```. Here, ```Y``` is based on ```Y_training```. Therefore, the prediction model we are looking at here is ```f({}) = E[Y | {}] = E[Y]``` and, thus, the sum of squared error in the test data is ```sum_i (y_i - f({}))**2 = sum_i (y_i - E[Y])**2``` | bloebp | 366 |
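Written out as a formula, the statistic that `estimate_ftest_pvalue` computes above is the standard F-test for nested regression models. The block below is a reconstruction from the code, with d_A and d_B denoting the number of input features of models A and B, and n the number of test samples:

```latex
% Reconstruction of the statistic computed in estimate_ftest_pvalue above (not from the PR text).
SSR_B = \sum_i \bigl(y_i - \bar{y}_{\mathrm{train}}\bigr)^2 \quad \text{(for a model B with an empty feature set)}
\qquad
F = \frac{(SSR_B - SSR_A)\,/\,(d_A - d_B)}{SSR_A\,/\,(n - d_A - 1)}
```

The returned p-value is the survival function `stats.f.sf(F, d_A - d_B, n - d_A - 1)`, i.e. the probability of seeing an F at least this large under the null hypothesis that the extra features of model A do not reduce the error.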
py-why/dowhy | 561 | Auto generate documentation | This PR introduces a GitHub action to auto-generate versioned documentation and publish it to py-why.github.io/dowhy.
To facilitate automatically generating documentation on every commit to master, this PR introduces multiple things (separated by commits):
- It introduces [versioneer](https://github.com/python-versioneer/python-versioneer), a convenient tool to use git tags for versioning. Otherwise, keeping a master version of the docs separate from the latest release version is difficult: master would always list the latest version in its `VERSION` file, such as `0.7.1`, and then the documentation for the master branch would still say v0.7.1 and be easily confused with the real v0.7.1 documentation. Versioneer is also used by NumPy and Pandas.
- It introduces [sphinx_multiversion](https://holzhaus.github.io/sphinx-multiversion/master/index.html), a Sphinx extension to generate multiple versions of the documentation based on different branches and tags of the git repo. To be precise, it uses a [slightly changed version of it](https://github.com/Holzhaus/sphinx-multiversion/compare/master...petergtz:sphinx-multiversion:override-version-with-refname), which uses the git refname for the version, not falling back to a hard-coded value in `conf.py`. For now, I'm hosting [this variant in my own GitHub account](https://github.com/petergtz/sphinx-multiversion/tree/override-version-with-refname), but my plan is to open a PR to upstream and introduce this as a new feature.
- It introduces a new `generate_docs.sh` script as a replacement for the `Makefile`. This script invokes sphinx_multiversion instead of vanilla sphinx and it also creates an `index.html` which automatically redirects to the latest stable version of the documentation.
- It introduces a GitHub action workflow that builds the versioned documentation and publishes it. This action is based on the container image [pego/dowhy-examples-notebooks-deps](https://hub.docker.com/repository/docker/pego/dowhy-examples-notebooks-deps) which is automatically built from [this Dockerfile](https://github.com/petergtz/dowhy-examples-notebooks-deps-dockerfile/blob/main/Dockerfile), which makes sure all necessary R dependencies are available. Instead of this additional Docker image, we could have installed all necessary dependencies in each run of the documentation build workflow, but that would both waste resources and increase the build time immensely. I suggest we move this repo into PyWhy and eventually create a docker hub account for PyWhy, or alternatively push the image into GitHub's own registry.
- Finally, it makes the `str_to_dot` function more robust towards slightly changing inputs which is probably due to slight changes in graphviz libraries, which introduced additional whitespace in graph descriptions.
**Notes:**
- An example of this specific GitHub action in action can be found at:
https://github.com/petergtz/dowhy/actions/workflows/build-and-publish-docs.yml
- An example of the versioned documentation can be found at:
https://petergtz.github.io/dowhy
- `versioneer.py` and `dowhy/_version.py` do not have to be reviewed. Versioneer generates these files when invoking its install script. | null | 2022-07-19 20:38:04+00:00 | 2022-07-28 10:10:47+00:00 | setup.py | """A setuptools based setup module for dowhy.
Adapted from:
https://packaging.python.org/en/latest/distributing.html
https://github.com/pypa/sampleproject
"""
from setuptools import setup, find_packages
from codecs import open
from os import path
import versioneer
here = path.abspath(path.dirname(__file__))
# Get the long description from the README file
with open(path.join(here, 'README.rst'), encoding='utf-8') as f:
long_description = f.read()
# Get the required packages
with open(path.join(here, 'requirements.txt'), encoding='utf-8') as f:
install_requires = f.read().splitlines()
# Plotting packages are optional to install
extras = ["plotting"]
extras_require = dict()
for e in extras:
req_file = "requirements-{0}.txt".format(e)
with open(req_file) as f:
extras_require[e] = [line.strip() for line in f]
setup(
name='dowhy',
version=versioneer.get_version(),
cmdclass=versioneer.get_cmdclass(),
description='DoWhy is a Python library for causal inference that supports explicit modeling and testing of causal assumptions.', # Required
license='MIT',
long_description=long_description,
url='https://github.com/microsoft/dowhy', # Optional
download_url='https://github.com/microsoft/dowhy/archive/v0.8.tar.gz',
author='Amit Sharma, Emre Kiciman',
classifiers=[ # Optional
'Development Status :: 4 - Beta',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
],
keywords='causality machine-learning causal-inference statistics graphical-model',
packages=find_packages(exclude=['docs', 'tests']),
python_requires='>=3.6',
install_requires=install_requires,
extras_require=extras_require,
include_package_data=True,
package_data={'dowhy':['VERSION']}
)
| """A setuptools based setup module for dowhy.
Adapted from:
https://packaging.python.org/en/latest/distributing.html
https://github.com/pypa/sampleproject
"""
from setuptools import setup, find_packages
from codecs import open
from os import path
import versioneer
here = path.abspath(path.dirname(__file__))
# Get the long description from the README file
with open(path.join(here, 'README.rst'), encoding='utf-8') as f:
long_description = f.read()
# Get the required packages
with open(path.join(here, 'requirements.txt'), encoding='utf-8') as f:
install_requires = f.read().splitlines()
# Plotting packages are optional to install
extras = ["plotting"]
extras_require = dict()
for e in extras:
req_file = "requirements-{0}.txt".format(e)
with open(req_file) as f:
extras_require[e] = [line.strip() for line in f]
setup(
name='dowhy',
version=versioneer.get_version(),
cmdclass=versioneer.get_cmdclass(),
description='DoWhy is a Python library for causal inference that supports explicit modeling and testing of causal assumptions.', # Required
license='MIT',
long_description=long_description,
url='https://github.com/microsoft/dowhy', # Optional
download_url='https://github.com/microsoft/dowhy/archive/v0.8.tar.gz',
author='Amit Sharma, Emre Kiciman',
classifiers=[ # Optional
'Development Status :: 4 - Beta',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
],
keywords='causality machine-learning causal-inference statistics graphical-model',
packages=find_packages(exclude=['docs', 'tests']),
python_requires='>=3.6',
install_requires=install_requires,
extras_require=extras_require,
include_package_data=True,
package_data={'dowhy':['VERSION']}
)
| petergtz | 570bc06baddc7c29b233e6afbb156f58ae9ae200 | 9a025a2003d754aa9749e8569b72dc566a17c3c2 | This is cool. A few questions:
1. for a new release, where do we specify the new release version so that versioneer fills this correctly?
2. for an update to the master docs, I'm guessing no change is needed in the version? | amit-sharma | 367 |
py-why/dowhy | 561 | Auto generate documentation | This PR introduces a GitHub action to auto-generate versioned documentation and publish it to py-why.github.io/dowhy.
To facilitate automatically generating documentation on every commit to master, this PR introduces multiple things (separated by commits):
- It introduces [versioneer](https://github.com/python-versioneer/python-versioneer), a convenient tool to use git tags for versioning. Otherwise, keeping a master version of the docs separate from the latest release version is difficult: master would always list the latest version in its `VERSION` file, such as `0.7.1`, and then the documentation for the master branch would still say v0.7.1 and be easily confused with the real v0.7.1 documentation. Versioneer is also used by NumPy and Pandas.
- It introduces [sphinx_multiversion](https://holzhaus.github.io/sphinx-multiversion/master/index.html), a Sphinx extension to generate multiple versions of the documentation based on different branches and tags of the git repo. To be precise, it uses a [slightly changed version of it](https://github.com/Holzhaus/sphinx-multiversion/compare/master...petergtz:sphinx-multiversion:override-version-with-refname), which uses the git refname for the version, not falling back to a hard-coded value in `conf.py`. For now, I'm hosting [this variant in my own GitHub account](https://github.com/petergtz/sphinx-multiversion/tree/override-version-with-refname), but my plan is to open a PR to upstream and introduce this as a new feature.
- It introduces a new `generate_docs.sh` script as a replacement for the `Makefile`. This script invokes sphinx_multiversion instead of vanilla sphinx and it also creates an `index.html` which automatically redirects to the latest stable version of the documentation.
- It introduces a GitHub action workflow that builds the versioned documentation and publishes it. This action is based on the container image [pego/dowhy-examples-notebooks-deps](https://hub.docker.com/repository/docker/pego/dowhy-examples-notebooks-deps) which is automatically built from [this Dockerfile](https://github.com/petergtz/dowhy-examples-notebooks-deps-dockerfile/blob/main/Dockerfile), which makes sure all necessary R dependencies are available. Instead of this additional Docker image, we could have installed all necessary dependencies in each run of the documentation build workflow, but that would both waste resources and increase the build time immensely. I suggest we move this repo into PyWhy and eventually create a docker hub account for PyWhy, or alternatively push the image into GitHub's own registry.
- Finally, it makes the `str_to_dot` function more robust towards slightly changing inputs which is probably due to slight changes in graphviz libraries, which introduced additional whitespace in graph descriptions.
**Notes:**
- An example of this specific GitHub action in action can be found at:
https://github.com/petergtz/dowhy/actions/workflows/build-and-publish-docs.yml
- An example of the versioned documentation can be found at:
https://petergtz.github.io/dowhy
- `versioneer.py` and `dowhy/_version.py` do not have to be reviewed. Versioneer generates these files when invoking its install script. | null | 2022-07-19 20:38:04+00:00 | 2022-07-28 10:10:47+00:00 | setup.py | """A setuptools based setup module for dowhy.
Adapted from:
https://packaging.python.org/en/latest/distributing.html
https://github.com/pypa/sampleproject
"""
from setuptools import setup, find_packages
from codecs import open
from os import path
import versioneer
here = path.abspath(path.dirname(__file__))
# Get the long description from the README file
with open(path.join(here, 'README.rst'), encoding='utf-8') as f:
long_description = f.read()
# Get the required packages
with open(path.join(here, 'requirements.txt'), encoding='utf-8') as f:
install_requires = f.read().splitlines()
# Plotting packages are optional to install
extras = ["plotting"]
extras_require = dict()
for e in extras:
req_file = "requirements-{0}.txt".format(e)
with open(req_file) as f:
extras_require[e] = [line.strip() for line in f]
setup(
name='dowhy',
version=versioneer.get_version(),
cmdclass=versioneer.get_cmdclass(),
description='DoWhy is a Python library for causal inference that supports explicit modeling and testing of causal assumptions.', # Required
license='MIT',
long_description=long_description,
url='https://github.com/microsoft/dowhy', # Optional
download_url='https://github.com/microsoft/dowhy/archive/v0.8.tar.gz',
author='Amit Sharma, Emre Kiciman',
classifiers=[ # Optional
'Development Status :: 4 - Beta',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
],
keywords='causality machine-learning causal-inference statistics graphical-model',
packages=find_packages(exclude=['docs', 'tests']),
python_requires='>=3.6',
install_requires=install_requires,
extras_require=extras_require,
include_package_data=True,
package_data={'dowhy':['VERSION']}
)
| """A setuptools based setup module for dowhy.
Adapted from:
https://packaging.python.org/en/latest/distributing.html
https://github.com/pypa/sampleproject
"""
from setuptools import setup, find_packages
from codecs import open
from os import path
import versioneer
here = path.abspath(path.dirname(__file__))
# Get the long description from the README file
with open(path.join(here, 'README.rst'), encoding='utf-8') as f:
long_description = f.read()
# Get the required packages
with open(path.join(here, 'requirements.txt'), encoding='utf-8') as f:
install_requires = f.read().splitlines()
# Plotting packages are optional to install
extras = ["plotting"]
extras_require = dict()
for e in extras:
req_file = "requirements-{0}.txt".format(e)
with open(req_file) as f:
extras_require[e] = [line.strip() for line in f]
setup(
name='dowhy',
version=versioneer.get_version(),
cmdclass=versioneer.get_cmdclass(),
description='DoWhy is a Python library for causal inference that supports explicit modeling and testing of causal assumptions.', # Required
license='MIT',
long_description=long_description,
url='https://github.com/microsoft/dowhy', # Optional
download_url='https://github.com/microsoft/dowhy/archive/v0.8.tar.gz',
author='Amit Sharma, Emre Kiciman',
classifiers=[ # Optional
'Development Status :: 4 - Beta',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
],
keywords='causality machine-learning causal-inference statistics graphical-model',
packages=find_packages(exclude=['docs', 'tests']),
python_requires='>=3.6',
install_requires=install_requires,
extras_require=extras_require,
include_package_data=True,
package_data={'dowhy':['VERSION']}
)
| petergtz | 570bc06baddc7c29b233e6afbb156f58ae9ae200 | 9a025a2003d754aa9749e8569b72dc566a17c3c2 | > 1. for a new release, where do we specify the new release version so that versioneer fills this correctly?
You specify the version via `git tag`
See https://github.com/python-versioneer/python-versioneer/blob/master/INSTALL.md#post-installation-usage, which says:
> If you distribute your project through PyPI, then the release process should boil down to two steps:
>
> - 1: git tag 1.0
> - 2: python setup.py register sdist upload
> 2. for an update to the master docs, I'm guessing no change is needed in the version?
Correct. Pushing to `master`/`main` will modify the documentation, whereas the documentation for a specific release is frozen.
It's important to realize that this gives us slightly less flexibility in terms of updating the docs for an already _released_ version. The only way I see how this could work is by creating a branch on the released-tagged commit and retagging a new commit with the same tag (basically moving the tag to another commit and forcing-pushing that change). It's a bit dirty, but would probably work. I'd prefer a more aggressive release frequency though. | petergtz | 368 |
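To illustrate the tag-driven flow described in this answer, here is a small sketch (not part of the PR; the concrete version strings are assumptions based on versioneer's default pep440 style and the configured tag_prefix) of what `versioneer.get_version()`, the call already used in `setup.py` above, would report in different situations:

```python
# Illustrative sketch only. Run from the repository root, where versioneer.py lives.
# The version strings in the comments are typical of versioneer's default pep440 style;
# they are assumptions, not output captured from this repository.
import versioneer

print(versioneer.get_version())
# on the commit carrying the release tag (e.g. "v0.8")  -> "0.8"
# a few commits after that tag (e.g. on master/main)    -> something like "0.8+5.g1a2b3c4"
# with uncommitted local changes                        -> the same, with a ".dirty" suffix
```

In that flow, creating the GitHub release (which creates the tag) is the only place the version number is specified; versioneer and the pypi publish workflow then derive it from the tag.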
py-why/dowhy | 561 | Auto generate documentation | This PR introduces a GitHub action to auto-generate versioned documentation and publish it to py-why.github.io/dowhy.
To facilitate automatically generating documentation on every commit to master, this PR introduces multiple things (separated by commits):
- It introduces [versioneer](https://github.com/python-versioneer/python-versioneer), a convenient tool to use git tags for versioning. Otherwise, keeping a master version of the docs separate from the latest release version is difficult: master would always list the latest version in its `VERSION` file, such as `0.7.1`, and then the documentation for the master branch would still say v0.7.1 and be easily confused with the real v0.7.1 documentation. Versioneer is also used by NumPy and Pandas.
- It introduces [sphinx_multiversion](https://holzhaus.github.io/sphinx-multiversion/master/index.html), a Sphinx extension to generate multiple versions of the documentation based on different branches and tags of the git repo. To be precise, it uses a [slightly changed version of it](https://github.com/Holzhaus/sphinx-multiversion/compare/master...petergtz:sphinx-multiversion:override-version-with-refname), which uses the git refname for the version, not falling back to a hard-coded value in `conf.py`. For now, I'm hosting [this variant in my own GitHub account](https://github.com/petergtz/sphinx-multiversion/tree/override-version-with-refname), but my plan is to open a PR to upstream and introduce this as a new feature.
- It introduces a new `generate_docs.sh` script as a replacement for the `Makefile`. This script invokes sphinx_multiversion instead of vanilla sphinx and it also creates an `index.html` which automatically redirects to the latest stable version of the documentation.
- It introduces a GitHub action workflow that builds the versioned documentation and publishes it. This action is based on the container image [pego/dowhy-examples-notebooks-deps](https://hub.docker.com/repository/docker/pego/dowhy-examples-notebooks-deps) which is automatically built from [this Dockerfile](https://github.com/petergtz/dowhy-examples-notebooks-deps-dockerfile/blob/main/Dockerfile), which makes sure all necessary R dependencies are available. Instead of this additional Docker image, we could have installed all necessary dependencies in each run of the documentation build workflow, but that would both waste resources and increase the build time immensely. I suggest we move this repo into PyWhy and eventually create a docker hub account for PyWhy, or alternatively push the image into GitHub's own registry.
- Finally, it makes the `str_to_dot` function more robust towards slightly changing inputs which is probably due to slight changes in graphviz libraries, which introduced additional whitespace in graph descriptions.
**Notes:**
- An example of this specific GitHub action in action can be found at:
https://github.com/petergtz/dowhy/actions/workflows/build-and-publish-docs.yml
- An example of the versioned documentation can be found at:
https://petergtz.github.io/dowhy
- `versioneer.py` and `dowhy/_version.py` do not have to be reviewed. Versioneer generates these files when invoking its install script. | null | 2022-07-19 20:38:04+00:00 | 2022-07-28 10:10:47+00:00 | setup.py | """A setuptools based setup module for dowhy.
Adapted from:
https://packaging.python.org/en/latest/distributing.html
https://github.com/pypa/sampleproject
"""
from setuptools import setup, find_packages
from codecs import open
from os import path
import versioneer
here = path.abspath(path.dirname(__file__))
# Get the long description from the README file
with open(path.join(here, 'README.rst'), encoding='utf-8') as f:
long_description = f.read()
# Get the required packages
with open(path.join(here, 'requirements.txt'), encoding='utf-8') as f:
install_requires = f.read().splitlines()
# Plotting packages are optional to install
extras = ["plotting"]
extras_require = dict()
for e in extras:
req_file = "requirements-{0}.txt".format(e)
with open(req_file) as f:
extras_require[e] = [line.strip() for line in f]
setup(
name='dowhy',
version=versioneer.get_version(),
cmdclass=versioneer.get_cmdclass(),
description='DoWhy is a Python library for causal inference that supports explicit modeling and testing of causal assumptions.', # Required
license='MIT',
long_description=long_description,
url='https://github.com/microsoft/dowhy', # Optional
download_url='https://github.com/microsoft/dowhy/archive/v0.8.tar.gz',
author='Amit Sharma, Emre Kiciman',
classifiers=[ # Optional
'Development Status :: 4 - Beta',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
],
keywords='causality machine-learning causal-inference statistics graphical-model',
packages=find_packages(exclude=['docs', 'tests']),
python_requires='>=3.6',
install_requires=install_requires,
extras_require=extras_require,
include_package_data=True,
package_data={'dowhy':['VERSION']}
)
| """A setuptools based setup module for dowhy.
Adapted from:
https://packaging.python.org/en/latest/distributing.html
https://github.com/pypa/sampleproject
"""
from setuptools import setup, find_packages
from codecs import open
from os import path
import versioneer
here = path.abspath(path.dirname(__file__))
# Get the long description from the README file
with open(path.join(here, 'README.rst'), encoding='utf-8') as f:
long_description = f.read()
# Get the required packages
with open(path.join(here, 'requirements.txt'), encoding='utf-8') as f:
install_requires = f.read().splitlines()
# Plotting packages are optional to install
extras = ["plotting"]
extras_require = dict()
for e in extras:
req_file = "requirements-{0}.txt".format(e)
with open(req_file) as f:
extras_require[e] = [line.strip() for line in f]
setup(
name='dowhy',
version=versioneer.get_version(),
cmdclass=versioneer.get_cmdclass(),
description='DoWhy is a Python library for causal inference that supports explicit modeling and testing of causal assumptions.', # Required
license='MIT',
long_description=long_description,
url='https://github.com/microsoft/dowhy', # Optional
download_url='https://github.com/microsoft/dowhy/archive/v0.8.tar.gz',
author='Amit Sharma, Emre Kiciman',
classifiers=[ # Optional
'Development Status :: 4 - Beta',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
],
keywords='causality machine-learning causal-inference statistics graphical-model',
packages=find_packages(exclude=['docs', 'tests']),
python_requires='>=3.6',
install_requires=install_requires,
extras_require=extras_require,
include_package_data=True,
package_data={'dowhy':['VERSION']}
)
| petergtz | 570bc06baddc7c29b233e6afbb156f58ae9ae200 | 9a025a2003d754aa9749e8569b72dc566a17c3c2 | interesting, this automation will be useful. just to confirm: previously, I used to update the version manually in dowhy/version.py, and then create a release using the github UI (that created a tag and automatically triggered the [pypi publish workflow](https://github.com/py-why/dowhy/blob/main/.github/workflows/python-publish.yml)).
Now, the steps will be:
1. create the release using the github UI (which also creates a tag)
2. let the pypi publish workflow run automatically.
In this case, would versioneer automatically pick the latest version number?
For older docs: As I mentioned on the other PR, I think it is okay to keep the docs only for v0.7 or v0.8 onwards. They are backwards-compatible and we can encourage people to update. | amit-sharma | 369 |
py-why/dowhy | 561 | Auto generate documentation | This PR introduces a GitHub action to auto-generate versioned documentation and publish it to py-why.github.io/dowhy.
To facilitate automatically generating documentation on every commit to master, this PR introduces multiple things (separated by commits):
- It introduces [versioneer](https://github.com/python-versioneer/python-versioneer), a convenient tool to use git tags for versioning. Otherwise, keeping a master version of the docs separate from the latest release version is difficult: master would always list the latest version in its `VERSION` file, e.g. `0.7.1`, and then documentation for the master branch would still say v0.7.1 and be easily confused with the real v0.7.1 documentation. Versioneer is also used by NumPy and Pandas.
- It introduces [sphinx_multiversion](https://holzhaus.github.io/sphinx-multiversion/master/index.html), a Sphinx extension to generate multiple versions of the documentation based on different branches and tags of the git repo. To be precise, it uses a [slightly changed version of it](https://github.com/Holzhaus/sphinx-multiversion/compare/master...petergtz:sphinx-multiversion:override-version-with-refname), which uses the git refname for the version, not falling back to a hard-coded value in `conf.py`. For now, I'm hosting [this variant in my own GitHub account](https://github.com/petergtz/sphinx-multiversion/tree/override-version-with-refname), but my plan is to open a PR to upstream and introduce this as a new feature.
- It introduces a new `generate_docs.sh` script as a replacement for the `Makefile`. This script invokes sphinx_multiversion instead of vanilla sphinx, and it also creates an `index.html` that automatically redirects to the latest stable version of the documentation.
- It introduces a GitHub action workflow that builds the versioned documentation and publishes it. This action is based on the container image [pego/dowhy-examples-notebooks-deps](https://hub.docker.com/repository/docker/pego/dowhy-examples-notebooks-deps) which is automatically built from [this Dockerfile](https://github.com/petergtz/dowhy-examples-notebooks-deps-dockerfile/blob/main/Dockerfile), which makes sure all necessary R dependencies are available. Instead of this additional Docker image, we could have installed all necessary dependencies in each run of the documentation build workflow, but that would both waste resources and increase the build time immensely. I suggest we move this repo into PyWhy and eventually create a docker hub account for PyWhy, or alternatively push the image into GitHub's own registry.
- Finally, it makes the `str_to_dot` function more robust to slightly varying inputs, most likely caused by newer graphviz libraries that introduce additional whitespace into graph descriptions.
**Notes:**
- An example of this specific GitHub action in action can be found at:
https://github.com/petergtz/dowhy/actions/workflows/build-and-publish-docs.yml
- An example of the versioned documentation can be found at:
https://petergtz.github.io/dowhy
- `versioneer.py` and `dowhy/_version.py` do not have to be reviewed. Versioneer generates these files when invoking its install script. | null | 2022-07-19 20:38:04+00:00 | 2022-07-28 10:10:47+00:00 | setup.py | """A setuptools based setup module for dowhy.
Adapted from:
https://packaging.python.org/en/latest/distributing.html
https://github.com/pypa/sampleproject
"""
from setuptools import setup, find_packages
from codecs import open
from os import path
import versioneer
here = path.abspath(path.dirname(__file__))
# Get the long description from the README file
with open(path.join(here, 'README.rst'), encoding='utf-8') as f:
long_description = f.read()
# Get the required packages
with open(path.join(here, 'requirements.txt'), encoding='utf-8') as f:
install_requires = f.read().splitlines()
# Plotting packages are optional to install
extras = ["plotting"]
extras_require = dict()
for e in extras:
req_file = "requirements-{0}.txt".format(e)
with open(req_file) as f:
extras_require[e] = [line.strip() for line in f]
setup(
name='dowhy',
version=versioneer.get_version(),
cmdclass=versioneer.get_cmdclass(),
description='DoWhy is a Python library for causal inference that supports explicit modeling and testing of causal assumptions.', # Required
license='MIT',
long_description=long_description,
url='https://github.com/microsoft/dowhy', # Optional
download_url='https://github.com/microsoft/dowhy/archive/v0.8.tar.gz',
author='Amit Sharma, Emre Kiciman',
classifiers=[ # Optional
'Development Status :: 4 - Beta',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
],
keywords='causality machine-learning causal-inference statistics graphical-model',
packages=find_packages(exclude=['docs', 'tests']),
python_requires='>=3.6',
install_requires=install_requires,
extras_require=extras_require,
include_package_data=True,
package_data={'dowhy':['VERSION']}
)
| """A setuptools based setup module for dowhy.
Adapted from:
https://packaging.python.org/en/latest/distributing.html
https://github.com/pypa/sampleproject
"""
from setuptools import setup, find_packages
from codecs import open
from os import path
import versioneer
here = path.abspath(path.dirname(__file__))
# Get the long description from the README file
with open(path.join(here, 'README.rst'), encoding='utf-8') as f:
long_description = f.read()
# Get the required packages
with open(path.join(here, 'requirements.txt'), encoding='utf-8') as f:
install_requires = f.read().splitlines()
# Plotting packages are optional to install
extras = ["plotting"]
extras_require = dict()
for e in extras:
req_file = "requirements-{0}.txt".format(e)
with open(req_file) as f:
extras_require[e] = [line.strip() for line in f]
setup(
name='dowhy',
version=versioneer.get_version(),
cmdclass=versioneer.get_cmdclass(),
description='DoWhy is a Python library for causal inference that supports explicit modeling and testing of causal assumptions.', # Required
license='MIT',
long_description=long_description,
url='https://github.com/microsoft/dowhy', # Optional
download_url='https://github.com/microsoft/dowhy/archive/v0.8.tar.gz',
author='Amit Sharma, Emre Kiciman',
classifiers=[ # Optional
'Development Status :: 4 - Beta',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
],
keywords='causality machine-learning causal-inference statistics graphical-model',
packages=find_packages(exclude=['docs', 'tests']),
python_requires='>=3.6',
install_requires=install_requires,
extras_require=extras_require,
include_package_data=True,
package_data={'dowhy':['VERSION']}
)
| petergtz | 570bc06baddc7c29b233e6afbb156f58ae9ae200 | 9a025a2003d754aa9749e8569b72dc566a17c3c2 | > Now, the steps will be:
>
> 1. create the release using the github UI (which also creates a tag)
> 2. let the pypi publish workflow run automatically.
>
> In this case, would versioneer automatically pick the latest version number?
Yes. Internally, versioneer uses a specially parameterized `git describe` invocation to "heuristically" find the _latest release_ tag. And I assume when GitHub UI creates the version tag, it will have to push this tag and that push will also trigger the GitHub action to build and deploy the docs and then versioneer will pick this up.
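(For intuition only, a rough Python approximation of such a tag lookup; versioneer's real logic covers many more cases, and the `tag_prefix` default here is just an assumption:)

```python
import subprocess

def latest_release_version(tag_prefix: str = "v") -> str:
    """Rough sketch of a git-describe-based version lookup (not versioneer's code)."""
    out = subprocess.run(
        ["git", "describe", "--tags", "--long", "--match", f"{tag_prefix}*"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()                                  # e.g. "v0.8-14-g1a2b3c4"
    tag, distance, sha = out.rsplit("-", 2)
    version = tag[len(tag_prefix):]
    return version if distance == "0" else f"{version}+{distance}.{sha}"
```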
> For older docs: As I mentioned on the other PR, I think it is okay to keep the docs only for v0.7 or v0.8 onwards. They are backwards-compatible and we can encourage people to update.
Makes sense. | petergtz | 370 |
py-why/dowhy | 561 | Auto generate documentation | This PR introduces a GitHub action to auto-generate versioned documentation and publish it to py-why.github.io/dowhy.
To facilitate automatically generating documentation on every commit to master, this PR introduces multiple things (separated by commits):
- It introduces [versioneer](https://github.com/python-versioneer/python-versioneer), a convenient tool to use git tags for versioning. Otherwise, keeping a master version of the docs separate from the latest release version is difficult: master would always list the latest version in its `VERSION` file, e.g. `0.7.1`, and then documentation for the master branch would still say v0.7.1 and be easily confused with the real v0.7.1 documentation. Versioneer is also used by NumPy and Pandas.
- It introduces [sphinx_multiversion](https://holzhaus.github.io/sphinx-multiversion/master/index.html), a Sphinx extension to generate multiple versions of the documentation based on different branches and tags of the git repo. To be precise, it uses a [slightly changed version of it](https://github.com/Holzhaus/sphinx-multiversion/compare/master...petergtz:sphinx-multiversion:override-version-with-refname), which uses the git refname for the version, not falling back to a hard-coded value in `conf.py`. For now, I'm hosting [this variant in my own GitHub account](https://github.com/petergtz/sphinx-multiversion/tree/override-version-with-refname), but my plan is to open a PR to upstream and introduce this as a new feature.
- It introduces a new `generate_docs.sh` script as a replacement for the `Makefile`. This script invokes sphinx_multiversion instead of vanilla sphinx, and it also creates an `index.html` that automatically redirects to the latest stable version of the documentation (see the sketch after this list).
- It introduces a GitHub action workflow that builds the versioned documentation and publishes it. This action is based on the container image [pego/dowhy-examples-notebooks-deps](https://hub.docker.com/repository/docker/pego/dowhy-examples-notebooks-deps) which is automatically built from [this Dockerfile](https://github.com/petergtz/dowhy-examples-notebooks-deps-dockerfile/blob/main/Dockerfile), which makes sure all necessary R dependencies are available. Instead of this additional Docker image, we could have installed all necessary dependencies in each run of the documentation build workflow, but that would both waste resources and increase the build time immensely. I suggest we move this repo into PyWhy and eventually create a docker hub account for PyWhy, or alternatively push the image into GitHub's own registry.
- Finally, it makes the `str_to_dot` function more robust to slightly varying inputs, most likely caused by newer graphviz libraries that introduce additional whitespace into graph descriptions.
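To make the redirect step from the `generate_docs.sh` bullet above concrete, a hedged Python sketch (the directory layout and the `write_redirect_index` helper are assumptions, not the script's actual contents):

```python
from pathlib import Path

def write_redirect_index(output_dir: str, latest_version: str) -> None:
    """Write a tiny index.html that forwards visitors to the latest stable docs."""
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    html = (
        "<!DOCTYPE html>\n"
        f'<meta http-equiv="refresh" content="0; url=./{latest_version}/">\n'
        f'<link rel="canonical" href="./{latest_version}/">\n'
        f'<p>Redirecting to the <a href="./{latest_version}/">latest documentation</a>...</p>\n'
    )
    (out / "index.html").write_text(html, encoding="utf-8")

write_redirect_index("dowhy-docs", "v0.8")
```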
**Notes:**
- An example of this specific GitHub action in action can be found at:
https://github.com/petergtz/dowhy/actions/workflows/build-and-publish-docs.yml
- An example of the versioned documentation can be found at:
https://petergtz.github.io/dowhy
- `versioneer.py` and `dowhy/_version.py` do not have to be reviewed. Versioneer generates these files when invoking its install script. | null | 2022-07-19 20:38:04+00:00 | 2022-07-28 10:10:47+00:00 | setup.py | """A setuptools based setup module for dowhy.
Adapted from:
https://packaging.python.org/en/latest/distributing.html
https://github.com/pypa/sampleproject
"""
from setuptools import setup, find_packages
from codecs import open
from os import path
import versioneer
here = path.abspath(path.dirname(__file__))
# Get the long description from the README file
with open(path.join(here, 'README.rst'), encoding='utf-8') as f:
long_description = f.read()
# Get the required packages
with open(path.join(here, 'requirements.txt'), encoding='utf-8') as f:
install_requires = f.read().splitlines()
# Plotting packages are optional to install
extras = ["plotting"]
extras_require = dict()
for e in extras:
req_file = "requirements-{0}.txt".format(e)
with open(req_file) as f:
extras_require[e] = [line.strip() for line in f]
setup(
name='dowhy',
version=versioneer.get_version(),
cmdclass=versioneer.get_cmdclass(),
description='DoWhy is a Python library for causal inference that supports explicit modeling and testing of causal assumptions.', # Required
license='MIT',
long_description=long_description,
url='https://github.com/microsoft/dowhy', # Optional
download_url='https://github.com/microsoft/dowhy/archive/v0.8.tar.gz',
author='Amit Sharma, Emre Kiciman',
classifiers=[ # Optional
'Development Status :: 4 - Beta',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
],
keywords='causality machine-learning causal-inference statistics graphical-model',
packages=find_packages(exclude=['docs', 'tests']),
python_requires='>=3.6',
install_requires=install_requires,
extras_require=extras_require,
include_package_data=True,
package_data={'dowhy':['VERSION']}
)
| """A setuptools based setup module for dowhy.
Adapted from:
https://packaging.python.org/en/latest/distributing.html
https://github.com/pypa/sampleproject
"""
from setuptools import setup, find_packages
from codecs import open
from os import path
import versioneer
here = path.abspath(path.dirname(__file__))
# Get the long description from the README file
with open(path.join(here, 'README.rst'), encoding='utf-8') as f:
long_description = f.read()
# Get the required packages
with open(path.join(here, 'requirements.txt'), encoding='utf-8') as f:
install_requires = f.read().splitlines()
# Plotting packages are optional to install
extras = ["plotting"]
extras_require = dict()
for e in extras:
req_file = "requirements-{0}.txt".format(e)
with open(req_file) as f:
extras_require[e] = [line.strip() for line in f]
setup(
name='dowhy',
version=versioneer.get_version(),
cmdclass=versioneer.get_cmdclass(),
description='DoWhy is a Python library for causal inference that supports explicit modeling and testing of causal assumptions.', # Required
license='MIT',
long_description=long_description,
url='https://github.com/microsoft/dowhy', # Optional
download_url='https://github.com/microsoft/dowhy/archive/v0.8.tar.gz',
author='Amit Sharma, Emre Kiciman',
classifiers=[ # Optional
'Development Status :: 4 - Beta',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
],
keywords='causality machine-learning causal-inference statistics graphical-model',
packages=find_packages(exclude=['docs', 'tests']),
python_requires='>=3.6',
install_requires=install_requires,
extras_require=extras_require,
include_package_data=True,
package_data={'dowhy':['VERSION']}
)
| petergtz | 570bc06baddc7c29b233e6afbb156f58ae9ae200 | 9a025a2003d754aa9749e8569b72dc566a17c3c2 | Oh, wait a second. Let me actually check the other workflows. To see how this would work. Sorry, I didn't realize PyPI publication also happens through a workflow. | petergtz | 371 |
py-why/dowhy | 561 | Auto generate documentation | This PR introduces a GitHub action to auto-generate versioned documentation and publish it to py-why.github.io/dowhy.
To facilitate automatically generating documentation on every commit to master, this PR introduces multiple things (separated by commits):
- It introduces [versioneer](https://github.com/python-versioneer/python-versioneer), a convenient tool to use git tags for versioning. Otherwise, keeping a master version of the docs separate from the latest release version is difficult: master would always list the latest version in its `VERSION` file, e.g. `0.7.1`, and then documentation for the master branch would still say v0.7.1 and be easily confused with the real v0.7.1 documentation. Versioneer is also used by NumPy and Pandas.
- It introduces [sphinx_multiversion](https://holzhaus.github.io/sphinx-multiversion/master/index.html), a Sphinx extension to generate multiple versions of the documentation based on different branches and tags of the git repo. To be precise, it uses a [slightly changed version of it](https://github.com/Holzhaus/sphinx-multiversion/compare/master...petergtz:sphinx-multiversion:override-version-with-refname), which uses the git refname for the version, not falling back to a hard-coded value in `conf.py`. For now, I'm hosting [this variant in my own GitHub account](https://github.com/petergtz/sphinx-multiversion/tree/override-version-with-refname), but my plan is to open a PR to upstream and introduce this as a new feature.
- It introduces a new `generate_docs.sh` script as a replacement for the `Makefile`. This script invokes sphinx_multiversion instead of vanilla sphinx, and it also creates an `index.html` that automatically redirects to the latest stable version of the documentation.
- It introduces a GitHub action workflow that builds the versioned documentation and publishes it. This action is based on the container image [pego/dowhy-examples-notebooks-deps](https://hub.docker.com/repository/docker/pego/dowhy-examples-notebooks-deps) which is automatically built from [this Dockerfile](https://github.com/petergtz/dowhy-examples-notebooks-deps-dockerfile/blob/main/Dockerfile), which makes sure all necessary R dependencies are available. Instead of this additional Docker image, we could have installed all necessary dependencies in each run of the documentation build workflow, but that would both waste resources and increase the build time immensely. I suggest we move this repo into PyWhy and eventually create a docker hub account for PyWhy, or alternatively push the image into GitHub's own registry.
- Finally, it makes the `str_to_dot` function more robust to slightly varying inputs, most likely caused by newer graphviz libraries that introduce additional whitespace into graph descriptions.
**Notes:**
- An example of this specific GitHub action in action can be found at:
https://github.com/petergtz/dowhy/actions/workflows/build-and-publish-docs.yml
- An example of the versioned documentation can be found at:
https://petergtz.github.io/dowhy
- `versioneer.py` and `dowhy/_version.py` do not have to be reviewed. Versioneer generates these files when invoking its install script. | null | 2022-07-19 20:38:04+00:00 | 2022-07-28 10:10:47+00:00 | setup.py | """A setuptools based setup module for dowhy.
Adapted from:
https://packaging.python.org/en/latest/distributing.html
https://github.com/pypa/sampleproject
"""
from setuptools import setup, find_packages
from codecs import open
from os import path
import versioneer
here = path.abspath(path.dirname(__file__))
# Get the long description from the README file
with open(path.join(here, 'README.rst'), encoding='utf-8') as f:
long_description = f.read()
# Get the required packages
with open(path.join(here, 'requirements.txt'), encoding='utf-8') as f:
install_requires = f.read().splitlines()
# Plotting packages are optional to install
extras = ["plotting"]
extras_require = dict()
for e in extras:
req_file = "requirements-{0}.txt".format(e)
with open(req_file) as f:
extras_require[e] = [line.strip() for line in f]
setup(
name='dowhy',
version=versioneer.get_version(),
cmdclass=versioneer.get_cmdclass(),
description='DoWhy is a Python library for causal inference that supports explicit modeling and testing of causal assumptions.', # Required
license='MIT',
long_description=long_description,
url='https://github.com/microsoft/dowhy', # Optional
download_url='https://github.com/microsoft/dowhy/archive/v0.8.tar.gz',
author='Amit Sharma, Emre Kiciman',
classifiers=[ # Optional
'Development Status :: 4 - Beta',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
],
keywords='causality machine-learning causal-inference statistics graphical-model',
packages=find_packages(exclude=['docs', 'tests']),
python_requires='>=3.6',
install_requires=install_requires,
extras_require=extras_require,
include_package_data=True,
package_data={'dowhy':['VERSION']}
)
| """A setuptools based setup module for dowhy.
Adapted from:
https://packaging.python.org/en/latest/distributing.html
https://github.com/pypa/sampleproject
"""
from setuptools import setup, find_packages
from codecs import open
from os import path
import versioneer
here = path.abspath(path.dirname(__file__))
# Get the long description from the README file
with open(path.join(here, 'README.rst'), encoding='utf-8') as f:
long_description = f.read()
# Get the required packages
with open(path.join(here, 'requirements.txt'), encoding='utf-8') as f:
install_requires = f.read().splitlines()
# Plotting packages are optional to install
extras = ["plotting"]
extras_require = dict()
for e in extras:
req_file = "requirements-{0}.txt".format(e)
with open(req_file) as f:
extras_require[e] = [line.strip() for line in f]
setup(
name='dowhy',
version=versioneer.get_version(),
cmdclass=versioneer.get_cmdclass(),
description='DoWhy is a Python library for causal inference that supports explicit modeling and testing of causal assumptions.', # Required
license='MIT',
long_description=long_description,
url='https://github.com/microsoft/dowhy', # Optional
download_url='https://github.com/microsoft/dowhy/archive/v0.8.tar.gz',
author='Amit Sharma, Emre Kiciman',
classifiers=[ # Optional
'Development Status :: 4 - Beta',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
],
keywords='causality machine-learning causal-inference statistics graphical-model',
packages=find_packages(exclude=['docs', 'tests']),
python_requires='>=3.6',
install_requires=install_requires,
extras_require=extras_require,
include_package_data=True,
package_data={'dowhy':['VERSION']}
)
| petergtz | 570bc06baddc7c29b233e6afbb156f58ae9ae200 | 9a025a2003d754aa9749e8569b72dc566a17c3c2 | Okay, yes, this should work. The moment you use the GitHub UI to create the release and the git tag, it will push this tag and that will trigger the publish workflow which then just relies on the usual `python setup.py sdist bdist_wheel` call which uses versioneer to choose the current/latest version from the just-created git tag. | petergtz | 372 |
py-why/dowhy | 549 | Algorithms for efficient adjustment (backdoor) sets | Implements the algorithms discussed [here](https://github.com/py-why/dowhy/issues/464).
I added the new algorithms [here](https://github.com/py-why/dowhy/pull/549/files#diff-cb99b05726571b36d65d193004d3c854cbf055d86eadd94dc1ce81bc4ac7b4b4), and made a few (small I think) changes to the CausalIdentifier class. I added tests for the new identifier methods, essentially covering a number of graphs that appeared in our papers on these algorithms. This is still missing updates to the docs.
I think feedback at this point would be super helpful :)
I also have a few questions and comments:
- Do the already implemented backdoor methods support multivariate treatments and/or outcomes? I ask this because the newly added algorithms are designed to work for univariate treatment and outcome only.
- I'm currently raising a `ValueError` when no observable backdoor exists; is this how you handle this situation? I also raise a `ValueError` when the (sufficient) conditions that we need to ensure the existence of an optimal efficient backdoor do not hold.
- Computing the minimum cost efficient adjustment set of course requires the user to pass the costs associated with nodes. This is done when calling the identify_ate_effect method. If no costs are passed, they are assumed to be constant and equal to one, and hence the optimal minimum cost backdoor set is the optimal backdoor set of minimum cardinality (see the usage sketch after this list). I wonder if the user should be warned about this, or whether putting it in the docs is enough.
- The algorithms admit a set of conditional variables that could be used to decide treatment (individualised treatment rules). This is also passed when calling the identify_ate_effect method; does that choice seem right?
- I ran the new files I created through black so they pass flake8, but I didn't do this with causal_identifier.py because there were pre-existing flake8 problems and I didn't want to make so many changes to the file at once.
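To make the new options concrete, a rough usage sketch; it assumes the standard `CausalModel` entry point forwards `method_name` to the identifier, and it omits the cost/conditional-variable arguments mentioned above because their exact parameter names aren't shown here:

```python
import numpy as np
import pandas as pd
from dowhy import CausalModel

# Toy data: X confounds the effect of T on Y
rng = np.random.default_rng(0)
X = rng.normal(size=1000)
T = (X + rng.normal(size=1000) > 0).astype(int)
Y = 2 * T + X + rng.normal(size=1000)
df = pd.DataFrame({"X": X, "T": T, "Y": Y})

model = CausalModel(
    data=df, treatment="T", outcome="Y",
    graph="digraph { X -> T; X -> Y; T -> Y; }",
)
# One of the new backdoor method names added by this PR
identified = model.identify_effect(method_name="efficient-mincost-adjustment")
print(identified.get_backdoor_variables())
```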
| null | 2022-07-16 14:24:08+00:00 | 2022-08-08 04:09:13+00:00 | dowhy/causal_identifier.py | import copy
import itertools
import logging
import sympy as sp
import sympy.stats as spstats
import dowhy.utils.cli_helpers as cli
from dowhy.utils.api import parse_state
class CausalIdentifier:
"""Class that implements different identification methods.
Currently supports backdoor and instrumental variable identification methods. The identification is based on the causal graph provided.
"""
NONPARAMETRIC_ATE="nonparametric-ate"
NONPARAMETRIC_NDE="nonparametric-nde"
NONPARAMETRIC_NIE="nonparametric-nie"
MAX_BACKDOOR_ITERATIONS = 100000
# Backdoor method names
BACKDOOR_DEFAULT="default"
BACKDOOR_EXHAUSTIVE="exhaustive-search"
BACKDOOR_MIN="minimal-adjustment"
BACKDOOR_MAX="maximal-adjustment"
METHOD_NAMES = {BACKDOOR_DEFAULT, BACKDOOR_EXHAUSTIVE, BACKDOOR_MIN, BACKDOOR_MAX}
DEFAULT_BACKDOOR_METHOD = BACKDOOR_DEFAULT
def __init__(self, graph, estimand_type,
method_name = "default",
proceed_when_unidentifiable=False):
self._graph = graph
self.estimand_type = estimand_type
self.treatment_name = graph.treatment_name
self.outcome_name = graph.outcome_name
self.method_name = method_name
self._proceed_when_unidentifiable = proceed_when_unidentifiable
self.logger = logging.getLogger(__name__)
def identify_effect(self, optimize_backdoor=False):
"""Main method that returns an identified estimand (if one exists).
If estimand_type is non-parametric ATE, then uses backdoor, instrumental variable and frontdoor identification methods, to check if an identified estimand exists, based on the causal graph.
:param self: instance of the CausalIdentifier class (or its subclass)
:returns: target estimand, an instance of the IdentifiedEstimand class
"""
# First, check if there is a directed path from action to outcome
if not self._graph.has_directed_path(self.treatment_name, self.outcome_name):
self.logger.warn("No directed path from treatment to outcome. Causal Effect is zero.")
return IdentifiedEstimand(self,
treatment_variable=self.treatment_name,
outcome_variable=self.outcome_name,
no_directed_path=True)
if self.estimand_type == CausalIdentifier.NONPARAMETRIC_ATE:
return self.identify_ate_effect(optimize_backdoor=optimize_backdoor)
elif self.estimand_type == CausalIdentifier.NONPARAMETRIC_NDE:
return self.identify_nde_effect()
elif self.estimand_type == CausalIdentifier.NONPARAMETRIC_NIE:
return self.identify_nie_effect()
else:
raise ValueError("Estimand type is not supported. Use either {0}, {1}, or {2}.".format(
CausalIdentifier.NONPARAMETRIC_ATE,
CausalIdentifier.NONPARAMETRIC_NDE,
CausalIdentifier.NONPARAMETRIC_NIE))
def identify_ate_effect(self, optimize_backdoor):
estimands_dict = {}
mediation_first_stage_confounders = None
mediation_second_stage_confounders = None
### 1. BACKDOOR IDENTIFICATION
# First, checking if there are any valid backdoor adjustment sets
if optimize_backdoor == False:
backdoor_sets = self.identify_backdoor(self.treatment_name, self.outcome_name)
else:
from dowhy.causal_identifiers.backdoor import Backdoor
path = Backdoor(self._graph._graph, self.treatment_name, self.outcome_name)
backdoor_sets = path.get_backdoor_vars()
estimands_dict, backdoor_variables_dict = self.build_backdoor_estimands_dict(
self.treatment_name,
self.outcome_name,
backdoor_sets,
estimands_dict)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = self.get_default_backdoor_set_id(backdoor_variables_dict)
if len(backdoor_variables_dict) > 0:
estimands_dict["backdoor"] = estimands_dict.get(str(default_backdoor_id), None)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(str(default_backdoor_id), None)
else:
estimands_dict["backdoor"] = None
### 2. INSTRUMENTAL VARIABLE IDENTIFICATION
# Now checking if there is also a valid iv estimand
instrument_names = self._graph.get_instruments(self.treatment_name,
self.outcome_name)
self.logger.info("Instrumental variables for treatment and outcome:" +
str(instrument_names))
if len(instrument_names) > 0:
iv_estimand_expr = self.construct_iv_estimand(
self.estimand_type,
self._graph.treatment_name,
self._graph.outcome_name,
instrument_names
)
self.logger.debug("Identified expression = " + str(iv_estimand_expr))
estimands_dict["iv"] = iv_estimand_expr
else:
estimands_dict["iv"] = None
### 3. FRONTDOOR IDENTIFICATION
# Now checking if there is a valid frontdoor variable
frontdoor_variables_names = self.identify_frontdoor()
self.logger.info("Frontdoor variables for treatment and outcome:" +
str(frontdoor_variables_names))
if len(frontdoor_variables_names) >0:
frontdoor_estimand_expr = self.construct_frontdoor_estimand(
self.estimand_type,
self._graph.treatment_name,
self._graph.outcome_name,
frontdoor_variables_names
)
self.logger.debug("Identified expression = " + str(frontdoor_estimand_expr))
estimands_dict["frontdoor"] = frontdoor_estimand_expr
mediation_first_stage_confounders = self.identify_mediation_first_stage_confounders(self.treatment_name, frontdoor_variables_names)
mediation_second_stage_confounders = self.identify_mediation_second_stage_confounders(frontdoor_variables_names, self.outcome_name)
else:
estimands_dict["frontdoor"] = None
# Finally returning the estimand object
estimand = IdentifiedEstimand(
self,
treatment_variable=self._graph.treatment_name,
outcome_variable=self._graph.outcome_name,
estimand_type=self.estimand_type,
estimands=estimands_dict,
backdoor_variables=backdoor_variables_dict,
instrumental_variables=instrument_names,
frontdoor_variables=frontdoor_variables_names,
mediation_first_stage_confounders=mediation_first_stage_confounders,
mediation_second_stage_confounders=mediation_second_stage_confounders,
default_backdoor_id = default_backdoor_id
)
return estimand
def identify_nie_effect(self):
estimands_dict = {}
### 1. FIRST DOING BACKDOOR IDENTIFICATION
# First, checking if there are any valid backdoor adjustment sets
backdoor_sets = self.identify_backdoor(self.treatment_name, self.outcome_name)
estimands_dict, backdoor_variables_dict = self.build_backdoor_estimands_dict(
self.treatment_name,
self.outcome_name,
backdoor_sets,
estimands_dict)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = self.get_default_backdoor_set_id(backdoor_variables_dict)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(str(default_backdoor_id), None)
### 2. SECOND, CHECKING FOR MEDIATORS
# Now checking if there are valid mediator variables
estimands_dict = {} # Need to reinitialize this dictionary to avoid including the backdoor sets
mediation_first_stage_confounders = None
mediation_second_stage_confounders = None
mediators_names = self.identify_mediation()
self.logger.info("Mediators for treatment and outcome:" +
str(mediators_names))
if len(mediators_names) >0:
mediation_estimand_expr = self.construct_mediation_estimand(
self.estimand_type,
self._graph.treatment_name,
self._graph.outcome_name,
mediators_names
)
self.logger.debug("Identified expression = " + str(mediation_estimand_expr))
estimands_dict["mediation"] = mediation_estimand_expr
mediation_first_stage_confounders = self.identify_mediation_first_stage_confounders(self.treatment_name, mediators_names)
mediation_second_stage_confounders = self.identify_mediation_second_stage_confounders(mediators_names, self.outcome_name)
else:
estimands_dict["mediation"] = None
# Finally returning the estimand object
estimand = IdentifiedEstimand(
self,
treatment_variable=self._graph.treatment_name,
outcome_variable=self._graph.outcome_name,
estimand_type=self.estimand_type,
estimands=estimands_dict,
backdoor_variables=backdoor_variables_dict,
instrumental_variables=None,
frontdoor_variables=None,
mediator_variables=mediators_names,
mediation_first_stage_confounders=mediation_first_stage_confounders,
mediation_second_stage_confounders=mediation_second_stage_confounders,
default_backdoor_id = None
)
return estimand
def identify_nde_effect(self):
estimands_dict = {}
### 1. FIRST DOING BACKDOOR IDENTIFICATION
# First, checking if there are any valid backdoor adjustment sets
backdoor_sets = self.identify_backdoor(self.treatment_name, self.outcome_name)
estimands_dict, backdoor_variables_dict = self.build_backdoor_estimands_dict(
self.treatment_name,
self.outcome_name,
backdoor_sets,
estimands_dict)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = self.get_default_backdoor_set_id(backdoor_variables_dict)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(str(default_backdoor_id), None)
### 2. SECOND, CHECKING FOR MEDIATORS
# Now checking if there are valid mediator variables
estimands_dict = {}
mediation_first_stage_confounders = None
mediation_second_stage_confounders = None
mediators_names = self.identify_mediation()
self.logger.info("Mediators for treatment and outcome:" +
str(mediators_names))
if len(mediators_names) >0:
mediation_estimand_expr = self.construct_mediation_estimand(
self.estimand_type,
self._graph.treatment_name,
self._graph.outcome_name,
mediators_names
)
self.logger.debug("Identified expression = " + str(mediation_estimand_expr))
estimands_dict["mediation"] = mediation_estimand_expr
mediation_first_stage_confounders = self.identify_mediation_first_stage_confounders(self.treatment_name, mediators_names)
mediation_second_stage_confounders = self.identify_mediation_second_stage_confounders(mediators_names, self.outcome_name)
else:
estimands_dict["mediation"] = None
# Finally returning the estimand object
estimand = IdentifiedEstimand(
self,
treatment_variable=self._graph.treatment_name,
outcome_variable=self._graph.outcome_name,
estimand_type=self.estimand_type,
estimands=estimands_dict,
backdoor_variables=backdoor_variables_dict,
instrumental_variables=None,
frontdoor_variables=None,
mediator_variables=mediators_names,
mediation_first_stage_confounders=mediation_first_stage_confounders,
mediation_second_stage_confounders=mediation_second_stage_confounders,
default_backdoor_id = None
)
return estimand
def identify_backdoor(self, treatment_name, outcome_name,
include_unobserved=False, dseparation_algo="default"):
backdoor_sets = []
backdoor_paths = None
bdoor_graph = None
if dseparation_algo == "naive":
backdoor_paths = self._graph.get_backdoor_paths(treatment_name, outcome_name)
elif dseparation_algo == "default":
bdoor_graph = self._graph.do_surgery(treatment_name,
remove_outgoing_edges=True)
else:
raise ValueError(f"d-separation algorithm {dseparation_algo} is not supported")
method_name = self.method_name if self.method_name != CausalIdentifier.BACKDOOR_DEFAULT else CausalIdentifier.DEFAULT_BACKDOOR_METHOD
# First, checking if empty set is a valid backdoor set
empty_set = set()
check = self._graph.check_valid_backdoor_set(treatment_name,
outcome_name, empty_set,
backdoor_paths=backdoor_paths, new_graph=bdoor_graph,
dseparation_algo=dseparation_algo)
if check["is_dseparated"]:
backdoor_sets.append({'backdoor_set':empty_set})
# If the method is `minimal-adjustment`, return the empty set right away.
if method_name == CausalIdentifier.BACKDOOR_MIN:
return backdoor_sets
# Second, checking for all other sets of variables. If include_unobserved is false, then only observed variables are eligible.
eligible_variables = self._graph.get_all_nodes(include_unobserved=include_unobserved) \
- set(treatment_name) \
- set(outcome_name)
eligible_variables -= self._graph.get_descendants(treatment_name)
# If var is d-separated from both treatment or outcome, it cannot
# be a part of the backdoor set
filt_eligible_variables = set()
for var in eligible_variables:
dsep_treat_var = self._graph.check_dseparation(
treatment_name, parse_state(var),
set())
dsep_outcome_var = self._graph.check_dseparation(
outcome_name, parse_state(var), set())
if not dsep_outcome_var or not dsep_treat_var:
filt_eligible_variables.add(var)
if method_name in CausalIdentifier.METHOD_NAMES:
backdoor_sets, found_valid_adjustment_set = self.find_valid_adjustment_sets(
treatment_name, outcome_name,
backdoor_paths, bdoor_graph,
dseparation_algo,
backdoor_sets, filt_eligible_variables,
method_name=method_name,
max_iterations= CausalIdentifier.MAX_BACKDOOR_ITERATIONS)
if method_name == CausalIdentifier.BACKDOOR_DEFAULT and found_valid_adjustment_set:
# repeat the above search with BACKDOOR_MIN
backdoor_sets, _ = self.find_valid_adjustment_sets(
treatment_name, outcome_name,
backdoor_paths, bdoor_graph,
dseparation_algo,
backdoor_sets, filt_eligible_variables,
method_name=CausalIdentifier.BACKDOOR_MIN,
max_iterations= CausalIdentifier.MAX_BACKDOOR_ITERATIONS)
else:
raise ValueError(f"Identifier method {method_name} not supported. Try one of the following: {CausalIdentifier.METHOD_NAMES}")
return backdoor_sets
def find_valid_adjustment_sets(self, treatment_name, outcome_name,
backdoor_paths, bdoor_graph, dseparation_algo,
backdoor_sets, filt_eligible_variables,
method_name, max_iterations):
num_iterations = 0
found_valid_adjustment_set = False
all_nodes_observed = self._graph.all_observed(self._graph.get_all_nodes())
# If `minimal-adjustment` method is specified, start the search from the set with minimum size. Otherwise, start from the largest.
set_sizes = range(1, len(filt_eligible_variables) + 1, 1) if method_name == CausalIdentifier.BACKDOOR_MIN else range(len(filt_eligible_variables), 0, -1)
for size_candidate_set in set_sizes:
for candidate_set in itertools.combinations(filt_eligible_variables, size_candidate_set):
check = self._graph.check_valid_backdoor_set(treatment_name,
outcome_name, candidate_set,
backdoor_paths=backdoor_paths,
new_graph = bdoor_graph,
dseparation_algo = dseparation_algo)
self.logger.debug("Candidate backdoor set: {0}, is_dseparated: {1}".format(candidate_set, check["is_dseparated"]))
if check["is_dseparated"]:
backdoor_sets.append({'backdoor_set': candidate_set})
found_valid_adjustment_set = True
num_iterations += 1
if method_name == CausalIdentifier.BACKDOOR_EXHAUSTIVE and num_iterations > max_iterations:
self.logger.warning(f"Max number of iterations {max_iterations} reached.")
break
# If the backdoor method is `maximal-adjustment` or `minimal-adjustment`, return the first found adjustment set.
if method_name in {CausalIdentifier.BACKDOOR_DEFAULT, CausalIdentifier.BACKDOOR_MAX, CausalIdentifier.BACKDOOR_MIN} and found_valid_adjustment_set:
break
# If all variables are observed, and the biggest eligible set
# does not satisfy backdoor, then none of its subsets will.
if method_name in {CausalIdentifier.BACKDOOR_DEFAULT, CausalIdentifier.BACKDOOR_MAX} and all_nodes_observed:
break
if num_iterations > max_iterations:
self.logger.warning(f"Max number of iterations {max_iterations} reached. Could not find a valid backdoor set.")
break
return backdoor_sets, found_valid_adjustment_set
def get_default_backdoor_set_id(self, backdoor_sets_dict):
# Adding a None estimand if no backdoor set found
if len(backdoor_sets_dict) == 0:
return None
# Default set contains minimum possible number of instrumental variables, to prevent lowering variance in the treatment variable.
instrument_names = set(self._graph.get_instruments(self.treatment_name, self.outcome_name))
iv_count_dict = {key: len(set(bdoor_set).intersection(instrument_names)) for key, bdoor_set in backdoor_sets_dict.items()}
min_iv_count = min(iv_count_dict.values())
min_iv_keys = {key for key, iv_count in iv_count_dict.items() if iv_count == min_iv_count}
min_iv_backdoor_sets_dict = {key: backdoor_sets_dict[key] for key in min_iv_keys}
# Default set is the one with the least number of adjustment variables (optimizing for efficiency)
min_set_length = 1000000
default_key = None
for key, bdoor_set in min_iv_backdoor_sets_dict.items():
if len(bdoor_set) < min_set_length:
min_set_length = len(bdoor_set)
default_key = key
return default_key
def build_backdoor_estimands_dict(self, treatment_name, outcome_name,
backdoor_sets, estimands_dict, proceed_when_unidentifiable=None):
"""Build the final dict for backdoor sets by filtering unobserved variables if needed.
"""
backdoor_variables_dict = {}
if proceed_when_unidentifiable is None:
proceed_when_unidentifiable = self._proceed_when_unidentifiable
is_identified = [ self._graph.all_observed(bset["backdoor_set"]) for bset in backdoor_sets ]
if any(is_identified):
self.logger.info("Causal effect can be identified.")
backdoor_sets_arr = [list(
bset["backdoor_set"])
for bset in backdoor_sets
if self._graph.all_observed(bset["backdoor_set"]) ]
else: # there is unobserved confounding
self.logger.warning("Backdoor identification failed.")
backdoor_sets_arr = []
for i in range(len(backdoor_sets_arr)):
backdoor_estimand_expr = self.construct_backdoor_estimand(
self.estimand_type, treatment_name,
outcome_name, backdoor_sets_arr[i])
self.logger.debug("Identified expression = " + str(backdoor_estimand_expr))
estimands_dict["backdoor"+str(i+1)] = backdoor_estimand_expr
backdoor_variables_dict["backdoor"+str(i+1)] = backdoor_sets_arr[i]
return estimands_dict, backdoor_variables_dict
def identify_frontdoor(self, dseparation_algo="default"):
""" Find a valid frontdoor variable if it exists.
Currently only supports a single variable frontdoor set.
"""
frontdoor_var = None
frontdoor_paths = None
fdoor_graph = None
if dseparation_algo == "default":
cond1_graph = self._graph.do_surgery(self.treatment_name,
remove_incoming_edges=True)
bdoor_graph1 = self._graph.do_surgery(self.treatment_name,
remove_outgoing_edges=True)
elif dseparation_algo == "naive":
frontdoor_paths = self._graph.get_all_directed_paths(self.treatment_name, self.outcome_name)
else:
raise ValueError(f"d-separation algorithm {dseparation_algo} is not supported")
eligible_variables = self._graph.get_descendants(self.treatment_name) \
- set(self.outcome_name) \
- set(self._graph.get_descendants(self.outcome_name))
# For simplicity, assuming a one-variable frontdoor set
for candidate_var in eligible_variables:
# Cond 1: All directed paths intercepted by candidate_var
cond1 = self._graph.check_valid_frontdoor_set(
self.treatment_name, self.outcome_name,
parse_state(candidate_var),
frontdoor_paths=frontdoor_paths,
new_graph=cond1_graph,
dseparation_algo=dseparation_algo)
self.logger.debug("Candidate frontdoor set: {0}, is_dseparated: {1}".format(candidate_var, cond1))
if not cond1:
continue
# Cond 2: No confounding between treatment and candidate var
cond2 = self._graph.check_valid_backdoor_set(
self.treatment_name, parse_state(candidate_var),
set(),
backdoor_paths=None,
new_graph= bdoor_graph1,
dseparation_algo=dseparation_algo)
if not cond2:
continue
# Cond 3: treatment blocks all confounding between candidate_var and outcome
bdoor_graph2 = self._graph.do_surgery(candidate_var,
remove_outgoing_edges=True)
cond3 = self._graph.check_valid_backdoor_set(
parse_state(candidate_var), self.outcome_name,
self.treatment_name,
backdoor_paths=None,
new_graph= bdoor_graph2,
dseparation_algo=dseparation_algo)
is_valid_frontdoor = cond1 and cond2 and cond3
if is_valid_frontdoor:
frontdoor_var = candidate_var
break
return parse_state(frontdoor_var)
def identify_mediation(self):
""" Find a valid mediator if it exists.
Currently only supports a single variable mediator set.
"""
mediation_var = None
mediation_paths = self._graph.get_all_directed_paths(self.treatment_name, self.outcome_name)
eligible_variables = self._graph.get_descendants(self.treatment_name) \
- set(self.outcome_name)
# For simplicity, assuming a one-variable mediation set
for candidate_var in eligible_variables:
is_valid_mediation = self._graph.check_valid_mediation_set(self.treatment_name,
self.outcome_name, parse_state(candidate_var), mediation_paths=mediation_paths)
self.logger.debug("Candidate mediation set: {0}, on_mediating_path: {1}".format(candidate_var, is_valid_mediation))
if is_valid_mediation:
mediation_var = candidate_var
break
return parse_state(mediation_var)
def identify_mediation_first_stage_confounders(self, treatment_name, mediators_names):
# Create estimands dict as per the API for backdoor, but do not return it
estimands_dict = {}
backdoor_sets = self.identify_backdoor(treatment_name, mediators_names)
estimands_dict, backdoor_variables_dict = self.build_backdoor_estimands_dict(
treatment_name,
mediators_names,
backdoor_sets,
estimands_dict,
proceed_when_unidentifiable=True)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = self.get_default_backdoor_set_id(backdoor_variables_dict)
estimands_dict["backdoor"] = estimands_dict.get(str(default_backdoor_id), None)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(str(default_backdoor_id), None)
return backdoor_variables_dict
def identify_mediation_second_stage_confounders(self, mediators_names, outcome_name):
# Create estimands dict as per the API for backdoor, but do not return it
estimands_dict = {}
backdoor_sets = self.identify_backdoor(mediators_names, outcome_name)
estimands_dict, backdoor_variables_dict = self.build_backdoor_estimands_dict(
mediators_names,
outcome_name,
backdoor_sets,
estimands_dict,
proceed_when_unidentifiable=True)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = self.get_default_backdoor_set_id(backdoor_variables_dict)
estimands_dict["backdoor"] = estimands_dict.get(str(default_backdoor_id), None)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(str(default_backdoor_id), None)
return backdoor_variables_dict
def construct_backdoor_estimand(self, estimand_type, treatment_name,
outcome_name, common_causes):
# TODO: outputs string for now, but ideally should do symbolic
# expressions Mon 19 Feb 2018 04:54:17 PM DST
# TODO Better support for multivariate treatments
expr = None
outcome_name = outcome_name[0]
num_expr_str = outcome_name
if len(common_causes)>0:
num_expr_str += "|" + ",".join(common_causes)
expr = "d(" + num_expr_str + ")/d" + ",".join(treatment_name)
sym_mu = sp.Symbol("mu")
sym_sigma = sp.Symbol("sigma", positive=True)
sym_outcome = spstats.Normal(num_expr_str, sym_mu, sym_sigma)
sym_treatment_symbols = [sp.Symbol(t) for t in treatment_name]
sym_treatment = sp.Array(sym_treatment_symbols)
sym_conditional_outcome = spstats.Expectation(sym_outcome)
sym_effect = sp.Derivative(sym_conditional_outcome, sym_treatment)
sym_assumptions = {
'Unconfoundedness': (
u"If U\N{RIGHTWARDS ARROW}{{{0}}} and U\N{RIGHTWARDS ARROW}{1}"
" then P({1}|{0},{2},U) = P({1}|{0},{2})"
).format(",".join(treatment_name), outcome_name, ",".join(common_causes))
}
estimand = {
'estimand': sym_effect,
'assumptions': sym_assumptions
}
return estimand
def construct_iv_estimand(self, estimand_type, treatment_name,
outcome_name, instrument_names):
# TODO: support multivariate treatments better.
expr = None
outcome_name = outcome_name[0]
sym_outcome = spstats.Normal(outcome_name, 0, 1)
sym_treatment_symbols = [spstats.Normal(t, 0, 1) for t in treatment_name]
sym_treatment = sp.Array(sym_treatment_symbols)
sym_instrument_symbols = [sp.Symbol(inst) for inst in instrument_names]
sym_instrument = sp.Array(sym_instrument_symbols) # ",".join(instrument_names))
sym_outcome_derivative = sp.Derivative(sym_outcome, sym_instrument)
sym_treatment_derivative = sp.Derivative(sym_treatment, sym_instrument)
sym_effect = spstats.Expectation(sym_outcome_derivative / sym_treatment_derivative)
sym_assumptions = {
"As-if-random": (
"If U\N{RIGHTWARDS ARROW}\N{RIGHTWARDS ARROW}{0} then "
"\N{NOT SIGN}(U \N{RIGHTWARDS ARROW}\N{RIGHTWARDS ARROW}{{{1}}})"
).format(outcome_name, ",".join(instrument_names)),
"Exclusion": (
u"If we remove {{{0}}}\N{RIGHTWARDS ARROW}{{{1}}}, then "
u"\N{NOT SIGN}({{{0}}}\N{RIGHTWARDS ARROW}{2})"
).format(",".join(instrument_names), ",".join(treatment_name),
outcome_name)
}
estimand = {
'estimand': sym_effect,
'assumptions': sym_assumptions
}
return estimand
def construct_frontdoor_estimand(self, estimand_type, treatment_name,
outcome_name, frontdoor_variables_names):
# TODO: support multivariate treatments better.
expr = None
outcome_name = outcome_name[0]
sym_outcome = spstats.Normal(outcome_name, 0, 1)
sym_treatment_symbols = [spstats.Normal(t, 0, 1) for t in treatment_name]
sym_treatment = sp.Array(sym_treatment_symbols)
sym_frontdoor_symbols = [sp.Symbol(inst) for inst in frontdoor_variables_names]
sym_frontdoor = sp.Array(sym_frontdoor_symbols) # ",".join(instrument_names))
sym_outcome_derivative = sp.Derivative(sym_outcome, sym_frontdoor)
sym_treatment_derivative = sp.Derivative(sym_frontdoor, sym_treatment)
sym_effect = spstats.Expectation(sym_treatment_derivative * sym_outcome_derivative)
sym_assumptions = {
"Full-mediation": (
"{2} intercepts (blocks) all directed paths from {0} to {1}."
).format(",".join(treatment_name), ",".join(outcome_name), ",".join(frontdoor_variables_names)),
"First-stage-unconfoundedness": (
u"If U\N{RIGHTWARDS ARROW}{{{0}}} and U\N{RIGHTWARDS ARROW}{{{1}}}"
" then P({1}|{0},U) = P({1}|{0})"
).format(",".join(treatment_name), ",".join(frontdoor_variables_names)),
"Second-stage-unconfoundedness": (
u"If U\N{RIGHTWARDS ARROW}{{{2}}} and U\N{RIGHTWARDS ARROW}{1}"
" then P({1}|{2}, {0}, U) = P({1}|{2}, {0})"
).format(",".join(treatment_name), outcome_name, ",".join(frontdoor_variables_names))
}
estimand = {
'estimand': sym_effect,
'assumptions': sym_assumptions
}
return estimand
def construct_mediation_estimand(self, estimand_type, treatment_name,
outcome_name, mediators_names):
# TODO: support multivariate treatments better.
expr = None
if estimand_type in (CausalIdentifier.NONPARAMETRIC_NDE, CausalIdentifier.NONPARAMETRIC_NIE):
outcome_name = outcome_name[0]
sym_outcome = spstats.Normal(outcome_name, 0, 1)
sym_treatment_symbols = [spstats.Normal(t, 0, 1) for t in treatment_name]
sym_treatment = sp.Array(sym_treatment_symbols)
sym_mediators_symbols = [sp.Symbol(inst) for inst in mediators_names]
sym_mediators = sp.Array(sym_mediators_symbols)
sym_outcome_derivative = sp.Derivative(sym_outcome, sym_mediators)
sym_treatment_derivative = sp.Derivative(sym_mediators, sym_treatment)
# For direct effect
num_expr_str = outcome_name
if len(mediators_names)>0:
num_expr_str += "|" + ",".join(mediators_names)
sym_mu = sp.Symbol("mu")
sym_sigma = sp.Symbol("sigma", positive=True)
sym_conditional_outcome = spstats.Normal(num_expr_str, sym_mu, sym_sigma)
sym_directeffect_derivative = sp.Derivative(sym_conditional_outcome, sym_treatment)
if estimand_type == CausalIdentifier.NONPARAMETRIC_NIE:
sym_effect = spstats.Expectation(sym_treatment_derivative * sym_outcome_derivative)
elif estimand_type == CausalIdentifier.NONPARAMETRIC_NDE:
sym_effect = spstats.Expectation(sym_directeffect_derivative)
sym_assumptions = {
"Mediation": (
"{2} intercepts (blocks) all directed paths from {0} to {1} except the path {{{0}}}\N{RIGHTWARDS ARROW}{{{1}}}."
).format(",".join(treatment_name), ",".join(outcome_name), ",".join(mediators_names)),
"First-stage-unconfoundedness": (
u"If U\N{RIGHTWARDS ARROW}{{{0}}} and U\N{RIGHTWARDS ARROW}{{{1}}}"
" then P({1}|{0},U) = P({1}|{0})"
).format(",".join(treatment_name), ",".join(mediators_names)),
"Second-stage-unconfoundedness": (
u"If U\N{RIGHTWARDS ARROW}{{{2}}} and U\N{RIGHTWARDS ARROW}{1}"
" then P({1}|{2}, {0}, U) = P({1}|{2}, {0})"
).format(",".join(treatment_name), outcome_name, ",".join(mediators_names))
}
else:
raise ValueError("Estimand type not supported. Supported estimand types are {0} or {1}'.".format(
CausalIdentifier.NONPARAMETRIC_NDE,
CausalIdentifier.NONPARAMETRIC_NIE))
estimand = {
'estimand': sym_effect,
'assumptions': sym_assumptions
}
return estimand
class IdentifiedEstimand:
"""Class for storing a causal estimand, typically as a result of the identification step.
"""
def __init__(self, identifier, treatment_variable, outcome_variable,
estimand_type=None, estimands=None,
backdoor_variables=None, instrumental_variables=None,
frontdoor_variables=None,
mediator_variables=None,
mediation_first_stage_confounders=None,
mediation_second_stage_confounders=None,
default_backdoor_id=None, identifier_method=None,
no_directed_path=False):
self.identifier = identifier
self.treatment_variable = parse_state(treatment_variable)
self.outcome_variable = parse_state(outcome_variable)
self.backdoor_variables = backdoor_variables
self.instrumental_variables = parse_state(instrumental_variables)
self.frontdoor_variables = parse_state(frontdoor_variables)
self.mediator_variables = parse_state(mediator_variables)
self.mediation_first_stage_confounders=mediation_first_stage_confounders
self.mediation_second_stage_confounders=mediation_second_stage_confounders
self.estimand_type = estimand_type
self.estimands = estimands
self.default_backdoor_id = default_backdoor_id
self.identifier_method = identifier_method
self.no_directed_path = no_directed_path
def set_identifier_method(self, identifier_name):
self.identifier_method = identifier_name
def get_backdoor_variables(self, key=None):
""" Return a list containing the backdoor variables.
If the calling estimator method is a backdoor method, return the
backdoor variables corresponding to its target estimand.
Otherwise, return the backdoor variables for the default backdoor estimand.
"""
if key is None:
if self.identifier_method and self.identifier_method.startswith("backdoor"):
return self.backdoor_variables[self.identifier_method]
elif self.backdoor_variables is not None and len(self.backdoor_variables) > 0:
return self.backdoor_variables[self.default_backdoor_id]
else:
return []
else:
return self.backdoor_variables[key]
def set_backdoor_variables(self, bdoor_variables_arr, key=None):
if key is None:
key = self.identifier_method
self.backdoor_variables[key] = bdoor_variables_arr
def get_frontdoor_variables(self):
"""Return a list containing the frontdoor variables (if present)
"""
return self.frontdoor_variables
def get_mediator_variables(self):
"""Return a list containing the mediator variables (if present)
"""
return self.mediator_variables
def get_instrumental_variables(self):
"""Return a list containing the instrumental variables (if present)
"""
return self.instrumental_variables
def __deepcopy__(self, memo):
return IdentifiedEstimand(
self.identifier, # not deep copied
copy.deepcopy(self.treatment_variable),
copy.deepcopy(self.outcome_variable),
estimand_type=copy.deepcopy(self.estimand_type),
estimands=copy.deepcopy(self.estimands),
backdoor_variables=copy.deepcopy(self.backdoor_variables),
instrumental_variables=copy.deepcopy(self.instrumental_variables),
frontdoor_variables=copy.deepcopy(self.frontdoor_variables),
mediator_variables=copy.deepcopy(self.mediator_variables),
default_backdoor_id=copy.deepcopy(self.default_backdoor_id),
identifier_method=copy.deepcopy(self.identifier_method)
)
def __str__(self, only_target_estimand=False, show_all_backdoor_sets=False):
if self.no_directed_path:
s = "No directed path from {0} to {1} in the causal graph.".format(
self.treatment_variable,
self.outcome_variable)
s += "\nCausal effect is zero."
return s
s = "Estimand type: {0}\n".format(self.estimand_type)
i = 1
has_valid_backdoor = sum("backdoor" in key for key in self.estimands.keys())
for k, v in self.estimands.items():
if show_all_backdoor_sets:
# Do not show backdoor key unless it is the only backdoor set.
if k == "backdoor" and has_valid_backdoor > 1:
continue
else:
# Just show the default backdoor set
if k.startswith("backdoor") and k != "backdoor":
continue
if only_target_estimand and k != self.identifier_method:
continue
s += "\n### Estimand : {0}\n".format(i)
s += "Estimand name: {0}".format(k)
if k == self.default_backdoor_id:
s += " (Default)"
s += "\n"
if v is None:
s += "No such variable(s) found!\n"
else:
sp_expr_str = sp.pretty(v["estimand"], use_unicode=True)
s += "Estimand expression:\n{0}\n".format(sp_expr_str)
j = 1
for ass_name, ass_str in v["assumptions"].items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
i += 1
return s
| import copy
import itertools
import logging
import sympy as sp
import sympy.stats as spstats
import dowhy.utils.cli_helpers as cli
from dowhy.causal_identifiers.efficient_backdoor import EfficientBackdoor
from dowhy.utils.api import parse_state
class CausalIdentifier:
"""Class that implements different identification methods.
Currently supports backdoor and instrumental variable identification methods. The identification is based on the causal graph provided.
"""
NONPARAMETRIC_ATE = "nonparametric-ate"
NONPARAMETRIC_NDE = "nonparametric-nde"
NONPARAMETRIC_NIE = "nonparametric-nie"
MAX_BACKDOOR_ITERATIONS = 100000
# Backdoor method names
BACKDOOR_DEFAULT = "default"
BACKDOOR_EXHAUSTIVE = "exhaustive-search"
BACKDOOR_MIN = "minimal-adjustment"
BACKDOOR_MAX = "maximal-adjustment"
BACKDOOR_EFFICIENT = "efficient-adjustment"
BACKDOOR_MIN_EFFICIENT = "efficient-minimal-adjustment"
BACKDOOR_MINCOST_EFFICIENT = "efficient-mincost-adjustment"
METHOD_NAMES = {
BACKDOOR_DEFAULT,
BACKDOOR_EXHAUSTIVE,
BACKDOOR_MIN,
BACKDOOR_MAX,
BACKDOOR_EFFICIENT,
BACKDOOR_MIN_EFFICIENT,
BACKDOOR_MINCOST_EFFICIENT,
}
EFFICIENT_METHODS = {
BACKDOOR_EFFICIENT,
BACKDOOR_MIN_EFFICIENT,
BACKDOOR_MINCOST_EFFICIENT,
}
DEFAULT_BACKDOOR_METHOD = BACKDOOR_DEFAULT
def __init__(
self,
graph,
estimand_type,
method_name="default",
proceed_when_unidentifiable=False,
):
self._graph = graph
self.estimand_type = estimand_type
self.treatment_name = graph.treatment_name
self.outcome_name = graph.outcome_name
self.method_name = method_name
self._proceed_when_unidentifiable = proceed_when_unidentifiable
self.logger = logging.getLogger(__name__)
def identify_effect(
self, optimize_backdoor=False, costs=None, conditional_node_names=None
):
"""Main method that returns an identified estimand (if one exists).
        If estimand_type is non-parametric ATE, then uses backdoor, instrumental variable and frontdoor identification methods to check if an identified estimand exists, based on the causal graph.
:param self: instance of the CausalIdentifier class (or its subclass)
:param optimize_backdoor: if True, uses an optimised algorithm to compute the backdoor sets
:param costs: non-negative costs associated with variables in the graph. Only used
        for estimand_type='nonparametric-ate' and method_name='efficient-mincost-adjustment'. If
no costs are provided by the user, and method_name='efficient-mincost-adjustment', costs
are assumed to be equal to one for all variables in the graph.
:param conditional_node_names: variables that are used to determine treatment. If none are
provided, it is assumed that the intervention is static.
:returns: target estimand, an instance of the IdentifiedEstimand class
"""
# First, check if there is a directed path from action to outcome
if not self._graph.has_directed_path(self.treatment_name, self.outcome_name):
            self.logger.warning(
"No directed path from treatment to outcome. Causal Effect is zero."
)
return IdentifiedEstimand(
self,
treatment_variable=self.treatment_name,
outcome_variable=self.outcome_name,
no_directed_path=True,
)
if self.estimand_type == CausalIdentifier.NONPARAMETRIC_ATE:
return self.identify_ate_effect(
optimize_backdoor=optimize_backdoor,
costs=costs,
conditional_node_names=conditional_node_names,
)
elif self.estimand_type == CausalIdentifier.NONPARAMETRIC_NDE:
return self.identify_nde_effect()
elif self.estimand_type == CausalIdentifier.NONPARAMETRIC_NIE:
return self.identify_nie_effect()
else:
raise ValueError(
"Estimand type is not supported. Use either {0}, {1}, or {2}.".format(
CausalIdentifier.NONPARAMETRIC_ATE,
CausalIdentifier.NONPARAMETRIC_NDE,
CausalIdentifier.NONPARAMETRIC_NIE,
)
)
def identify_ate_effect(
self, optimize_backdoor, costs=None, conditional_node_names=None
):
estimands_dict = {}
mediation_first_stage_confounders = None
mediation_second_stage_confounders = None
### 1. BACKDOOR IDENTIFICATION
# Pick algorithm to compute backdoor sets according to method chosen
if self.method_name not in CausalIdentifier.EFFICIENT_METHODS:
# First, checking if there are any valid backdoor adjustment sets
if optimize_backdoor == False:
backdoor_sets = self.identify_backdoor(
self.treatment_name, self.outcome_name
)
else:
from dowhy.causal_identifiers.backdoor import Backdoor
path = Backdoor(
self._graph._graph, self.treatment_name, self.outcome_name
)
backdoor_sets = path.get_backdoor_vars()
elif self.method_name in CausalIdentifier.EFFICIENT_METHODS:
backdoor_sets = self.identify_efficient_backdoor(
costs=costs, conditional_node_names=conditional_node_names
)
estimands_dict, backdoor_variables_dict = self.build_backdoor_estimands_dict(
self.treatment_name, self.outcome_name, backdoor_sets, estimands_dict
)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = self.get_default_backdoor_set_id(backdoor_variables_dict)
if len(backdoor_variables_dict) > 0:
estimands_dict["backdoor"] = estimands_dict.get(
str(default_backdoor_id), None
)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(
str(default_backdoor_id), None
)
else:
estimands_dict["backdoor"] = None
### 2. INSTRUMENTAL VARIABLE IDENTIFICATION
# Now checking if there is also a valid iv estimand
instrument_names = self._graph.get_instruments(
self.treatment_name, self.outcome_name
)
self.logger.info(
"Instrumental variables for treatment and outcome:" + str(instrument_names)
)
if len(instrument_names) > 0:
iv_estimand_expr = self.construct_iv_estimand(
self.estimand_type,
self._graph.treatment_name,
self._graph.outcome_name,
instrument_names,
)
self.logger.debug("Identified expression = " + str(iv_estimand_expr))
estimands_dict["iv"] = iv_estimand_expr
else:
estimands_dict["iv"] = None
### 3. FRONTDOOR IDENTIFICATION
# Now checking if there is a valid frontdoor variable
frontdoor_variables_names = self.identify_frontdoor()
self.logger.info(
"Frontdoor variables for treatment and outcome:"
+ str(frontdoor_variables_names)
)
if len(frontdoor_variables_names) > 0:
frontdoor_estimand_expr = self.construct_frontdoor_estimand(
self.estimand_type,
self._graph.treatment_name,
self._graph.outcome_name,
frontdoor_variables_names,
)
self.logger.debug("Identified expression = " + str(frontdoor_estimand_expr))
estimands_dict["frontdoor"] = frontdoor_estimand_expr
mediation_first_stage_confounders = self.identify_mediation_first_stage_confounders(
self.treatment_name, frontdoor_variables_names
)
mediation_second_stage_confounders = self.identify_mediation_second_stage_confounders(
frontdoor_variables_names, self.outcome_name
)
else:
estimands_dict["frontdoor"] = None
# Finally returning the estimand object
estimand = IdentifiedEstimand(
self,
treatment_variable=self._graph.treatment_name,
outcome_variable=self._graph.outcome_name,
estimand_type=self.estimand_type,
estimands=estimands_dict,
backdoor_variables=backdoor_variables_dict,
instrumental_variables=instrument_names,
frontdoor_variables=frontdoor_variables_names,
mediation_first_stage_confounders=mediation_first_stage_confounders,
mediation_second_stage_confounders=mediation_second_stage_confounders,
default_backdoor_id=default_backdoor_id,
)
return estimand
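    # Example (illustrative; assumes `graph` is an already-constructed CausalGraph with
    # treatment and outcome names set):
    #   identifier = CausalIdentifier(graph, CausalIdentifier.NONPARAMETRIC_ATE)
    #   estimand = identifier.identify_effect()
    #   estimand.get_backdoor_variables()  # default backdoor adjustment set, if any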
def identify_nie_effect(self):
estimands_dict = {}
### 1. FIRST DOING BACKDOOR IDENTIFICATION
# First, checking if there are any valid backdoor adjustment sets
backdoor_sets = self.identify_backdoor(self.treatment_name, self.outcome_name)
estimands_dict, backdoor_variables_dict = self.build_backdoor_estimands_dict(
self.treatment_name, self.outcome_name, backdoor_sets, estimands_dict
)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = self.get_default_backdoor_set_id(backdoor_variables_dict)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(
str(default_backdoor_id), None
)
### 2. SECOND, CHECKING FOR MEDIATORS
# Now checking if there are valid mediator variables
estimands_dict = (
{}
) # Need to reinitialize this dictionary to avoid including the backdoor sets
mediation_first_stage_confounders = None
mediation_second_stage_confounders = None
mediators_names = self.identify_mediation()
self.logger.info("Mediators for treatment and outcome:" + str(mediators_names))
if len(mediators_names) > 0:
mediation_estimand_expr = self.construct_mediation_estimand(
self.estimand_type,
self._graph.treatment_name,
self._graph.outcome_name,
mediators_names,
)
self.logger.debug("Identified expression = " + str(mediation_estimand_expr))
estimands_dict["mediation"] = mediation_estimand_expr
mediation_first_stage_confounders = self.identify_mediation_first_stage_confounders(
self.treatment_name, mediators_names
)
mediation_second_stage_confounders = self.identify_mediation_second_stage_confounders(
mediators_names, self.outcome_name
)
else:
estimands_dict["mediation"] = None
# Finally returning the estimand object
estimand = IdentifiedEstimand(
self,
treatment_variable=self._graph.treatment_name,
outcome_variable=self._graph.outcome_name,
estimand_type=self.estimand_type,
estimands=estimands_dict,
backdoor_variables=backdoor_variables_dict,
instrumental_variables=None,
frontdoor_variables=None,
mediator_variables=mediators_names,
mediation_first_stage_confounders=mediation_first_stage_confounders,
mediation_second_stage_confounders=mediation_second_stage_confounders,
default_backdoor_id=None,
)
return estimand
def identify_nde_effect(self):
estimands_dict = {}
### 1. FIRST DOING BACKDOOR IDENTIFICATION
# First, checking if there are any valid backdoor adjustment sets
backdoor_sets = self.identify_backdoor(self.treatment_name, self.outcome_name)
estimands_dict, backdoor_variables_dict = self.build_backdoor_estimands_dict(
self.treatment_name, self.outcome_name, backdoor_sets, estimands_dict
)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = self.get_default_backdoor_set_id(backdoor_variables_dict)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(
str(default_backdoor_id), None
)
### 2. SECOND, CHECKING FOR MEDIATORS
# Now checking if there are valid mediator variables
estimands_dict = {}
mediation_first_stage_confounders = None
mediation_second_stage_confounders = None
mediators_names = self.identify_mediation()
self.logger.info("Mediators for treatment and outcome:" + str(mediators_names))
if len(mediators_names) > 0:
mediation_estimand_expr = self.construct_mediation_estimand(
self.estimand_type,
self._graph.treatment_name,
self._graph.outcome_name,
mediators_names,
)
self.logger.debug("Identified expression = " + str(mediation_estimand_expr))
estimands_dict["mediation"] = mediation_estimand_expr
mediation_first_stage_confounders = self.identify_mediation_first_stage_confounders(
self.treatment_name, mediators_names
)
mediation_second_stage_confounders = self.identify_mediation_second_stage_confounders(
mediators_names, self.outcome_name
)
else:
estimands_dict["mediation"] = None
# Finally returning the estimand object
estimand = IdentifiedEstimand(
self,
treatment_variable=self._graph.treatment_name,
outcome_variable=self._graph.outcome_name,
estimand_type=self.estimand_type,
estimands=estimands_dict,
backdoor_variables=backdoor_variables_dict,
instrumental_variables=None,
frontdoor_variables=None,
mediator_variables=mediators_names,
mediation_first_stage_confounders=mediation_first_stage_confounders,
mediation_second_stage_confounders=mediation_second_stage_confounders,
default_backdoor_id=None,
)
return estimand
def identify_backdoor(
self,
treatment_name,
outcome_name,
include_unobserved=False,
dseparation_algo="default",
):
backdoor_sets = []
backdoor_paths = None
bdoor_graph = None
if dseparation_algo == "naive":
backdoor_paths = self._graph.get_backdoor_paths(
treatment_name, outcome_name
)
elif dseparation_algo == "default":
bdoor_graph = self._graph.do_surgery(
treatment_name, remove_outgoing_edges=True
)
else:
raise ValueError(
f"d-separation algorithm {dseparation_algo} is not supported"
)
method_name = (
self.method_name
if self.method_name != CausalIdentifier.BACKDOOR_DEFAULT
else CausalIdentifier.DEFAULT_BACKDOOR_METHOD
)
# First, checking if empty set is a valid backdoor set
empty_set = set()
check = self._graph.check_valid_backdoor_set(
treatment_name,
outcome_name,
empty_set,
backdoor_paths=backdoor_paths,
new_graph=bdoor_graph,
dseparation_algo=dseparation_algo,
)
if check["is_dseparated"]:
backdoor_sets.append({"backdoor_set": empty_set})
# If the method is `minimal-adjustment`, return the empty set right away.
if method_name == CausalIdentifier.BACKDOOR_MIN:
return backdoor_sets
# Second, checking for all other sets of variables. If include_unobserved is false, then only observed variables are eligible.
eligible_variables = (
self._graph.get_all_nodes(include_unobserved=include_unobserved)
- set(treatment_name)
- set(outcome_name)
)
eligible_variables -= self._graph.get_descendants(treatment_name)
# If var is d-separated from both treatment or outcome, it cannot
# be a part of the backdoor set
filt_eligible_variables = set()
for var in eligible_variables:
dsep_treat_var = self._graph.check_dseparation(
treatment_name, parse_state(var), set()
)
dsep_outcome_var = self._graph.check_dseparation(
outcome_name, parse_state(var), set()
)
if not dsep_outcome_var or not dsep_treat_var:
filt_eligible_variables.add(var)
if method_name in CausalIdentifier.METHOD_NAMES:
backdoor_sets, found_valid_adjustment_set = self.find_valid_adjustment_sets(
treatment_name,
outcome_name,
backdoor_paths,
bdoor_graph,
dseparation_algo,
backdoor_sets,
filt_eligible_variables,
method_name=method_name,
max_iterations=CausalIdentifier.MAX_BACKDOOR_ITERATIONS,
)
if (
method_name == CausalIdentifier.BACKDOOR_DEFAULT
and found_valid_adjustment_set
):
# repeat the above search with BACKDOOR_MIN
backdoor_sets, _ = self.find_valid_adjustment_sets(
treatment_name,
outcome_name,
backdoor_paths,
bdoor_graph,
dseparation_algo,
backdoor_sets,
filt_eligible_variables,
method_name=CausalIdentifier.BACKDOOR_MIN,
max_iterations=CausalIdentifier.MAX_BACKDOOR_ITERATIONS,
)
else:
raise ValueError(
f"Identifier method {method_name} not supported. Try one of the following: {CausalIdentifier.METHOD_NAMES}"
)
return backdoor_sets
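    # Illustrative shape of the result (hypothetical variable names): a list such as
    #   [{"backdoor_set": set()}, {"backdoor_set": ("W0", "W1")}]
    # with one entry per valid adjustment set found under the chosen method.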
def identify_efficient_backdoor(self, costs=None, conditional_node_names=None):
"""Method implementing algorithms to compute efficient backdoor sets, as
described in Rotnitzky and Smucler (2020), Smucler, Sapienza and Rotnitzky (2021)
and Smucler and Rotnitzky (2022).
For method_name='efficient-adjustment', computes an optimal backdoor set,
that is, a backdoor set comprised of observable variables that yields non-parametric
estimators of the interventional mean with the smallest asymptotic variance
among those that are based on observable backdoor sets. This optimal backdoor
set always exists when no variables are latent, and the algorithm is guaranteed to compute
it in this case. Under a non-parametric graphical model with latent variables,
such a backdoor set can fail to exist. When certain sufficient conditions under which it is
known that such a backdoor set exists are not satisfied, an error is raised.
For method_name='efficient-minimal-adjustment', computes an optimal minimal backdoor set,
that is, a minimal backdoor set comprised of observable variables that yields non-parametric
estimators of the interventional mean with the smallest asymptotic variance
among those that are based on observable minimal backdoor sets.
For method_name='efficient-mincost-adjustment', computes an optimal minimum cost backdoor set,
that is, a minimum cost backdoor set comprised of observable variables that yields non-parametric
estimators of the interventional mean with the smallest asymptotic variance
among those that are based on observable minimum cost backdoor sets. The cost
of a backdoor set is defined as the sum of the costs of the variables that comprise it.
The various optimal backdoor sets computed by this method are not only optimal under
non-parametric graphical models and non-parametric estimators of interventional mean,
but also under linear graphical models and OLS estimators, per results in Henckel, Perkovic
and Maathuis (2020).
:param costs: a list with non-negative costs associated with variables in the graph. Only used
        for estimand_type='nonparametric-ate' and method_name='efficient-mincost-adjustment'. If
        no costs are provided by the user, and method_name='efficient-mincost-adjustment', costs
are assumed to be equal to one for all variables in the graph. The structure of the list should
be of the form [(node, {"cost": x}) for node in nodes].
:param conditional_node_names: variables that are used to determine treatment. If none are
provided, it is assumed that the intervention sets the treatment to a constant.
:returns: backdoor_sets, a list of dictionaries, with each dictionary
having as values a backdoor set.
"""
if costs is None and self.method_name == "efficient-mincost-adjustment":
self.logger.warning(
"No costs were passed, so they will be assumed to be constant and equal to 1."
)
efficient_bd = EfficientBackdoor(
graph=self._graph,
conditional_node_names=conditional_node_names,
costs=costs,
)
if self.method_name == "efficient-adjustment":
backdoor_set = efficient_bd.optimal_adj_set()
backdoor_sets = [{"backdoor_set": tuple(backdoor_set)}]
elif self.method_name == "efficient-minimal-adjustment":
backdoor_set = efficient_bd.optimal_minimal_adj_set()
backdoor_sets = [{"backdoor_set": tuple(backdoor_set)}]
elif self.method_name == "efficient-mincost-adjustment":
backdoor_set = efficient_bd.optimal_mincost_adj_set()
backdoor_sets = [{"backdoor_set": tuple(backdoor_set)}]
return backdoor_sets
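    # Example (illustrative; hypothetical node names, using the `costs` structure described
    # in the docstring above):
    #   costs = [("W1", {"cost": 1}), ("W2", {"cost": 3})]
    #   identifier = CausalIdentifier(graph, CausalIdentifier.NONPARAMETRIC_ATE,
    #                                 method_name="efficient-mincost-adjustment")
    #   backdoor_sets = identifier.identify_efficient_backdoor(costs=costs)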
def find_valid_adjustment_sets(
self,
treatment_name,
outcome_name,
backdoor_paths,
bdoor_graph,
dseparation_algo,
backdoor_sets,
filt_eligible_variables,
method_name,
max_iterations,
):
num_iterations = 0
found_valid_adjustment_set = False
all_nodes_observed = self._graph.all_observed(self._graph.get_all_nodes())
# If `minimal-adjustment` method is specified, start the search from the set with minimum size. Otherwise, start from the largest.
set_sizes = (
range(1, len(filt_eligible_variables) + 1, 1)
if method_name == CausalIdentifier.BACKDOOR_MIN
else range(len(filt_eligible_variables), 0, -1)
)
for size_candidate_set in set_sizes:
for candidate_set in itertools.combinations(
filt_eligible_variables, size_candidate_set
):
check = self._graph.check_valid_backdoor_set(
treatment_name,
outcome_name,
candidate_set,
backdoor_paths=backdoor_paths,
new_graph=bdoor_graph,
dseparation_algo=dseparation_algo,
)
self.logger.debug(
"Candidate backdoor set: {0}, is_dseparated: {1}".format(
candidate_set, check["is_dseparated"]
)
)
if check["is_dseparated"]:
backdoor_sets.append({"backdoor_set": candidate_set})
found_valid_adjustment_set = True
num_iterations += 1
if (
method_name == CausalIdentifier.BACKDOOR_EXHAUSTIVE
and num_iterations > max_iterations
):
self.logger.warning(
f"Max number of iterations {max_iterations} reached."
)
break
# If the backdoor method is `maximal-adjustment` or `minimal-adjustment`, return the first found adjustment set.
if (
method_name
in {
CausalIdentifier.BACKDOOR_DEFAULT,
CausalIdentifier.BACKDOOR_MAX,
CausalIdentifier.BACKDOOR_MIN,
}
and found_valid_adjustment_set
):
break
# If all variables are observed, and the biggest eligible set
# does not satisfy backdoor, then none of its subsets will.
if (
method_name
in {CausalIdentifier.BACKDOOR_DEFAULT, CausalIdentifier.BACKDOOR_MAX}
and all_nodes_observed
):
break
if num_iterations > max_iterations:
self.logger.warning(
f"Max number of iterations {max_iterations} reached. Could not find a valid backdoor set."
)
break
return backdoor_sets, found_valid_adjustment_set
def get_default_backdoor_set_id(self, backdoor_sets_dict):
# Adding a None estimand if no backdoor set found
if len(backdoor_sets_dict) == 0:
return None
# Default set contains minimum possible number of instrumental variables, to prevent lowering variance in the treatment variable.
instrument_names = set(
self._graph.get_instruments(self.treatment_name, self.outcome_name)
)
iv_count_dict = {
key: len(set(bdoor_set).intersection(instrument_names))
for key, bdoor_set in backdoor_sets_dict.items()
}
min_iv_count = min(iv_count_dict.values())
min_iv_keys = {
key for key, iv_count in iv_count_dict.items() if iv_count == min_iv_count
}
min_iv_backdoor_sets_dict = {
key: backdoor_sets_dict[key] for key in min_iv_keys
}
# Default set is the one with the least number of adjustment variables (optimizing for efficiency)
min_set_length = 1000000
default_key = None
for key, bdoor_set in min_iv_backdoor_sets_dict.items():
if len(bdoor_set) < min_set_length:
min_set_length = len(bdoor_set)
default_key = key
return default_key
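    # Illustrative sketch (hypothetical sets): given
    #   {"backdoor1": ["W0"], "backdoor2": ["W0", "Z0"]}
    # where Z0 is an instrument, "backdoor1" is returned: it contains no instruments and,
    # among the remaining candidates, the fewest adjustment variables.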
def build_backdoor_estimands_dict(
self,
treatment_name,
outcome_name,
backdoor_sets,
estimands_dict,
proceed_when_unidentifiable=None,
):
"""Build the final dict for backdoor sets by filtering unobserved variables if needed.
"""
backdoor_variables_dict = {}
if proceed_when_unidentifiable is None:
proceed_when_unidentifiable = self._proceed_when_unidentifiable
is_identified = [
self._graph.all_observed(bset["backdoor_set"]) for bset in backdoor_sets
]
if any(is_identified):
self.logger.info("Causal effect can be identified.")
backdoor_sets_arr = [
list(bset["backdoor_set"])
for bset in backdoor_sets
if self._graph.all_observed(bset["backdoor_set"])
]
else: # there is unobserved confounding
self.logger.warning("Backdoor identification failed.")
backdoor_sets_arr = []
for i in range(len(backdoor_sets_arr)):
backdoor_estimand_expr = self.construct_backdoor_estimand(
self.estimand_type, treatment_name, outcome_name, backdoor_sets_arr[i]
)
self.logger.debug("Identified expression = " + str(backdoor_estimand_expr))
estimands_dict["backdoor" + str(i + 1)] = backdoor_estimand_expr
backdoor_variables_dict["backdoor" + str(i + 1)] = backdoor_sets_arr[i]
return estimands_dict, backdoor_variables_dict
def identify_frontdoor(self, dseparation_algo="default"):
""" Find a valid frontdoor variable if it exists.
Currently only supports a single variable frontdoor set.
"""
frontdoor_var = None
frontdoor_paths = None
fdoor_graph = None
if dseparation_algo == "default":
cond1_graph = self._graph.do_surgery(
self.treatment_name, remove_incoming_edges=True
)
bdoor_graph1 = self._graph.do_surgery(
self.treatment_name, remove_outgoing_edges=True
)
elif dseparation_algo == "naive":
frontdoor_paths = self._graph.get_all_directed_paths(
self.treatment_name, self.outcome_name
)
else:
raise ValueError(
f"d-separation algorithm {dseparation_algo} is not supported"
)
eligible_variables = (
self._graph.get_descendants(self.treatment_name)
- set(self.outcome_name)
- set(self._graph.get_descendants(self.outcome_name))
)
# For simplicity, assuming a one-variable frontdoor set
for candidate_var in eligible_variables:
# Cond 1: All directed paths intercepted by candidate_var
cond1 = self._graph.check_valid_frontdoor_set(
self.treatment_name,
self.outcome_name,
parse_state(candidate_var),
frontdoor_paths=frontdoor_paths,
new_graph=cond1_graph,
dseparation_algo=dseparation_algo,
)
self.logger.debug(
"Candidate frontdoor set: {0}, is_dseparated: {1}".format(
candidate_var, cond1
)
)
if not cond1:
continue
# Cond 2: No confounding between treatment and candidate var
cond2 = self._graph.check_valid_backdoor_set(
self.treatment_name,
parse_state(candidate_var),
set(),
backdoor_paths=None,
new_graph=bdoor_graph1,
dseparation_algo=dseparation_algo,
)
if not cond2:
continue
# Cond 3: treatment blocks all confounding between candidate_var and outcome
bdoor_graph2 = self._graph.do_surgery(
candidate_var, remove_outgoing_edges=True
)
cond3 = self._graph.check_valid_backdoor_set(
parse_state(candidate_var),
self.outcome_name,
self.treatment_name,
backdoor_paths=None,
new_graph=bdoor_graph2,
dseparation_algo=dseparation_algo,
)
is_valid_frontdoor = cond1 and cond2 and cond3
if is_valid_frontdoor:
frontdoor_var = candidate_var
break
return parse_state(frontdoor_var)
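    # Illustrative sketch (hypothetical names): in a graph with T -> M -> Y and an
    # unobserved U with U -> T and U -> Y, the mediator M satisfies the three conditions
    # above and would be returned as the single-variable frontdoor set.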
def identify_mediation(self):
""" Find a valid mediator if it exists.
Currently only supports a single variable mediator set.
"""
mediation_var = None
mediation_paths = self._graph.get_all_directed_paths(
self.treatment_name, self.outcome_name
)
eligible_variables = self._graph.get_descendants(self.treatment_name) - set(
self.outcome_name
)
# For simplicity, assuming a one-variable mediation set
for candidate_var in eligible_variables:
is_valid_mediation = self._graph.check_valid_mediation_set(
self.treatment_name,
self.outcome_name,
parse_state(candidate_var),
mediation_paths=mediation_paths,
)
self.logger.debug(
"Candidate mediation set: {0}, on_mediating_path: {1}".format(
candidate_var, is_valid_mediation
)
)
if is_valid_mediation:
mediation_var = candidate_var
break
        return parse_state(mediation_var)
def identify_mediation_first_stage_confounders(
self, treatment_name, mediators_names
):
# Create estimands dict as per the API for backdoor, but do not return it
estimands_dict = {}
backdoor_sets = self.identify_backdoor(treatment_name, mediators_names)
estimands_dict, backdoor_variables_dict = self.build_backdoor_estimands_dict(
treatment_name,
mediators_names,
backdoor_sets,
estimands_dict,
proceed_when_unidentifiable=True,
)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = self.get_default_backdoor_set_id(backdoor_variables_dict)
estimands_dict["backdoor"] = estimands_dict.get(str(default_backdoor_id), None)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(
str(default_backdoor_id), None
)
return backdoor_variables_dict
def identify_mediation_second_stage_confounders(
self, mediators_names, outcome_name
):
# Create estimands dict as per the API for backdoor, but do not return it
estimands_dict = {}
backdoor_sets = self.identify_backdoor(mediators_names, outcome_name)
estimands_dict, backdoor_variables_dict = self.build_backdoor_estimands_dict(
mediators_names,
outcome_name,
backdoor_sets,
estimands_dict,
proceed_when_unidentifiable=True,
)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = self.get_default_backdoor_set_id(backdoor_variables_dict)
estimands_dict["backdoor"] = estimands_dict.get(str(default_backdoor_id), None)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(
str(default_backdoor_id), None
)
return backdoor_variables_dict
def construct_backdoor_estimand(
self, estimand_type, treatment_name, outcome_name, common_causes
):
# TODO: outputs string for now, but ideally should do symbolic
# expressions Mon 19 Feb 2018 04:54:17 PM DST
# TODO Better support for multivariate treatments
expr = None
outcome_name = outcome_name[0]
num_expr_str = outcome_name
if len(common_causes) > 0:
num_expr_str += "|" + ",".join(common_causes)
expr = "d(" + num_expr_str + ")/d" + ",".join(treatment_name)
sym_mu = sp.Symbol("mu")
sym_sigma = sp.Symbol("sigma", positive=True)
sym_outcome = spstats.Normal(num_expr_str, sym_mu, sym_sigma)
sym_treatment_symbols = [sp.Symbol(t) for t in treatment_name]
sym_treatment = sp.Array(sym_treatment_symbols)
sym_conditional_outcome = spstats.Expectation(sym_outcome)
sym_effect = sp.Derivative(sym_conditional_outcome, sym_treatment)
sym_assumptions = {
"Unconfoundedness": (
"If U\N{RIGHTWARDS ARROW}{{{0}}} and U\N{RIGHTWARDS ARROW}{1}"
" then P({1}|{0},{2},U) = P({1}|{0},{2})"
).format(",".join(treatment_name), outcome_name, ",".join(common_causes))
}
estimand = {"estimand": sym_effect, "assumptions": sym_assumptions}
return estimand
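    # Illustrative sketch (hypothetical names): for treatment ["v0"], outcome ["y"] and
    # common causes ["W0", "W1"], the symbolic estimand is the derivative of E[y|W0,W1]
    # with respect to v0, returned together with the unconfoundedness assumption string.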
def construct_iv_estimand(
self, estimand_type, treatment_name, outcome_name, instrument_names
):
# TODO: support multivariate treatments better.
expr = None
outcome_name = outcome_name[0]
sym_outcome = spstats.Normal(outcome_name, 0, 1)
sym_treatment_symbols = [spstats.Normal(t, 0, 1) for t in treatment_name]
sym_treatment = sp.Array(sym_treatment_symbols)
sym_instrument_symbols = [sp.Symbol(inst) for inst in instrument_names]
sym_instrument = sp.Array(sym_instrument_symbols) # ",".join(instrument_names))
sym_outcome_derivative = sp.Derivative(sym_outcome, sym_instrument)
sym_treatment_derivative = sp.Derivative(sym_treatment, sym_instrument)
sym_effect = spstats.Expectation(
sym_outcome_derivative / sym_treatment_derivative
)
sym_assumptions = {
"As-if-random": (
"If U\N{RIGHTWARDS ARROW}\N{RIGHTWARDS ARROW}{0} then "
"\N{NOT SIGN}(U \N{RIGHTWARDS ARROW}\N{RIGHTWARDS ARROW}{{{1}}})"
).format(outcome_name, ",".join(instrument_names)),
"Exclusion": (
"If we remove {{{0}}}\N{RIGHTWARDS ARROW}{{{1}}}, then "
"\N{NOT SIGN}({{{0}}}\N{RIGHTWARDS ARROW}{2})"
).format(
",".join(instrument_names), ",".join(treatment_name), outcome_name
),
}
estimand = {"estimand": sym_effect, "assumptions": sym_assumptions}
return estimand
def construct_frontdoor_estimand(
self, estimand_type, treatment_name, outcome_name, frontdoor_variables_names
):
# TODO: support multivariate treatments better.
expr = None
outcome_name = outcome_name[0]
sym_outcome = spstats.Normal(outcome_name, 0, 1)
sym_treatment_symbols = [spstats.Normal(t, 0, 1) for t in treatment_name]
sym_treatment = sp.Array(sym_treatment_symbols)
sym_frontdoor_symbols = [sp.Symbol(inst) for inst in frontdoor_variables_names]
sym_frontdoor = sp.Array(sym_frontdoor_symbols) # ",".join(instrument_names))
sym_outcome_derivative = sp.Derivative(sym_outcome, sym_frontdoor)
sym_treatment_derivative = sp.Derivative(sym_frontdoor, sym_treatment)
sym_effect = spstats.Expectation(
sym_treatment_derivative * sym_outcome_derivative
)
sym_assumptions = {
"Full-mediation": (
"{2} intercepts (blocks) all directed paths from {0} to {1}."
).format(
",".join(treatment_name),
",".join(outcome_name),
",".join(frontdoor_variables_names),
),
"First-stage-unconfoundedness": (
"If U\N{RIGHTWARDS ARROW}{{{0}}} and U\N{RIGHTWARDS ARROW}{{{1}}}"
" then P({1}|{0},U) = P({1}|{0})"
).format(",".join(treatment_name), ",".join(frontdoor_variables_names)),
"Second-stage-unconfoundedness": (
"If U\N{RIGHTWARDS ARROW}{{{2}}} and U\N{RIGHTWARDS ARROW}{1}"
" then P({1}|{2}, {0}, U) = P({1}|{2}, {0})"
).format(
",".join(treatment_name),
outcome_name,
",".join(frontdoor_variables_names),
),
}
estimand = {"estimand": sym_effect, "assumptions": sym_assumptions}
return estimand
def construct_mediation_estimand(
self, estimand_type, treatment_name, outcome_name, mediators_names
):
# TODO: support multivariate treatments better.
expr = None
if estimand_type in (
CausalIdentifier.NONPARAMETRIC_NDE,
CausalIdentifier.NONPARAMETRIC_NIE,
):
outcome_name = outcome_name[0]
sym_outcome = spstats.Normal(outcome_name, 0, 1)
sym_treatment_symbols = [spstats.Normal(t, 0, 1) for t in treatment_name]
sym_treatment = sp.Array(sym_treatment_symbols)
sym_mediators_symbols = [sp.Symbol(inst) for inst in mediators_names]
sym_mediators = sp.Array(sym_mediators_symbols)
sym_outcome_derivative = sp.Derivative(sym_outcome, sym_mediators)
sym_treatment_derivative = sp.Derivative(sym_mediators, sym_treatment)
# For direct effect
num_expr_str = outcome_name
if len(mediators_names) > 0:
num_expr_str += "|" + ",".join(mediators_names)
sym_mu = sp.Symbol("mu")
sym_sigma = sp.Symbol("sigma", positive=True)
sym_conditional_outcome = spstats.Normal(num_expr_str, sym_mu, sym_sigma)
sym_directeffect_derivative = sp.Derivative(
sym_conditional_outcome, sym_treatment
)
if estimand_type == CausalIdentifier.NONPARAMETRIC_NIE:
sym_effect = spstats.Expectation(
sym_treatment_derivative * sym_outcome_derivative
)
elif estimand_type == CausalIdentifier.NONPARAMETRIC_NDE:
sym_effect = spstats.Expectation(sym_directeffect_derivative)
sym_assumptions = {
"Mediation": (
"{2} intercepts (blocks) all directed paths from {0} to {1} except the path {{{0}}}\N{RIGHTWARDS ARROW}{{{1}}}."
).format(
",".join(treatment_name),
",".join(outcome_name),
",".join(mediators_names),
),
"First-stage-unconfoundedness": (
"If U\N{RIGHTWARDS ARROW}{{{0}}} and U\N{RIGHTWARDS ARROW}{{{1}}}"
" then P({1}|{0},U) = P({1}|{0})"
).format(",".join(treatment_name), ",".join(mediators_names)),
"Second-stage-unconfoundedness": (
"If U\N{RIGHTWARDS ARROW}{{{2}}} and U\N{RIGHTWARDS ARROW}{1}"
" then P({1}|{2}, {0}, U) = P({1}|{2}, {0})"
).format(
",".join(treatment_name), outcome_name, ",".join(mediators_names)
),
}
else:
raise ValueError(
"Estimand type not supported. Supported estimand types are {0} or {1}'.".format(
CausalIdentifier.NONPARAMETRIC_NDE,
CausalIdentifier.NONPARAMETRIC_NIE,
)
)
estimand = {"estimand": sym_effect, "assumptions": sym_assumptions}
return estimand
class IdentifiedEstimand:
"""Class for storing a causal estimand, typically as a result of the identification step.
"""
def __init__(
self,
identifier,
treatment_variable,
outcome_variable,
estimand_type=None,
estimands=None,
backdoor_variables=None,
instrumental_variables=None,
frontdoor_variables=None,
mediator_variables=None,
mediation_first_stage_confounders=None,
mediation_second_stage_confounders=None,
default_backdoor_id=None,
identifier_method=None,
no_directed_path=False,
):
self.identifier = identifier
self.treatment_variable = parse_state(treatment_variable)
self.outcome_variable = parse_state(outcome_variable)
self.backdoor_variables = backdoor_variables
self.instrumental_variables = parse_state(instrumental_variables)
self.frontdoor_variables = parse_state(frontdoor_variables)
self.mediator_variables = parse_state(mediator_variables)
self.mediation_first_stage_confounders = mediation_first_stage_confounders
self.mediation_second_stage_confounders = mediation_second_stage_confounders
self.estimand_type = estimand_type
self.estimands = estimands
self.default_backdoor_id = default_backdoor_id
self.identifier_method = identifier_method
self.no_directed_path = no_directed_path
def set_identifier_method(self, identifier_name):
self.identifier_method = identifier_name
def get_backdoor_variables(self, key=None):
""" Return a list containing the backdoor variables.
If the calling estimator method is a backdoor method, return the
backdoor variables corresponding to its target estimand.
Otherwise, return the backdoor variables for the default backdoor estimand.
"""
if key is None:
if self.identifier_method and self.identifier_method.startswith("backdoor"):
return self.backdoor_variables[self.identifier_method]
elif (
self.backdoor_variables is not None and len(self.backdoor_variables) > 0
):
return self.backdoor_variables[self.default_backdoor_id]
else:
return []
else:
return self.backdoor_variables[key]
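    # Example (illustrative): a caller can select which estimand's adjustment set to read:
    #   estimand.set_identifier_method("backdoor1")
    #   estimand.get_backdoor_variables()        # adjustment set for that estimand
    # or pass a key directly, e.g. estimand.get_backdoor_variables("backdoor2").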
def set_backdoor_variables(self, bdoor_variables_arr, key=None):
if key is None:
key = self.identifier_method
self.backdoor_variables[key] = bdoor_variables_arr
def get_frontdoor_variables(self):
"""Return a list containing the frontdoor variables (if present)
"""
return self.frontdoor_variables
def get_mediator_variables(self):
"""Return a list containing the mediator variables (if present)
"""
return self.mediator_variables
def get_instrumental_variables(self):
"""Return a list containing the instrumental variables (if present)
"""
return self.instrumental_variables
def __deepcopy__(self, memo):
return IdentifiedEstimand(
self.identifier, # not deep copied
copy.deepcopy(self.treatment_variable),
copy.deepcopy(self.outcome_variable),
estimand_type=copy.deepcopy(self.estimand_type),
estimands=copy.deepcopy(self.estimands),
backdoor_variables=copy.deepcopy(self.backdoor_variables),
instrumental_variables=copy.deepcopy(self.instrumental_variables),
frontdoor_variables=copy.deepcopy(self.frontdoor_variables),
mediator_variables=copy.deepcopy(self.mediator_variables),
default_backdoor_id=copy.deepcopy(self.default_backdoor_id),
identifier_method=copy.deepcopy(self.identifier_method),
)
def __str__(self, only_target_estimand=False, show_all_backdoor_sets=False):
if self.no_directed_path:
s = "No directed path from {0} to {1} in the causal graph.".format(
self.treatment_variable, self.outcome_variable
)
s += "\nCausal effect is zero."
return s
s = "Estimand type: {0}\n".format(self.estimand_type)
i = 1
has_valid_backdoor = sum("backdoor" in key for key in self.estimands.keys())
for k, v in self.estimands.items():
if show_all_backdoor_sets:
# Do not show backdoor key unless it is the only backdoor set.
if k == "backdoor" and has_valid_backdoor > 1:
continue
else:
# Just show the default backdoor set
if k.startswith("backdoor") and k != "backdoor":
continue
if only_target_estimand and k != self.identifier_method:
continue
s += "\n### Estimand : {0}\n".format(i)
s += "Estimand name: {0}".format(k)
if k == self.default_backdoor_id:
s += " (Default)"
s += "\n"
if v is None:
s += "No such variable(s) found!\n"
else:
sp_expr_str = sp.pretty(v["estimand"], use_unicode=True)
s += "Estimand expression:\n{0}\n".format(sp_expr_str)
j = 1
for ass_name, ass_str in v["assumptions"].items():
s += "Estimand assumption {0}, {1}: {2}\n".format(
j, ass_name, ass_str
)
j += 1
i += 1
return s
| esmucler | 77f7064c85b3de740b5beefb0ad7067ffdb024fd | 824f5532c0644812867bb018bf22585f2466960c | I'm curious---what are these conditional_node_names graphically? are they parents of the treatment only? | amit-sharma | 373 |
py-why/dowhy | 549 | Algorithms for efficient adjustment (backdoor) sets | Implements the algorithms discussed [here](https://github.com/py-why/dowhy/issues/464).
I added the new algorithms [here](https://github.com/py-why/dowhy/pull/549/files#diff-cb99b05726571b36d65d193004d3c854cbf055d86eadd94dc1ce81bc4ac7b4b4), and made a few (small I think) changes to the CausalIdentifier class. I added tests for the new identifier methods, essentially covering a number of graphs that appeared in our papers on these algorithms. This is still missing updates to the docs.
I think feedback at this point would be super helpful :)
I also have a few questions and comments:
- Do the already implemented backdoor methods support multivariate treatments and/or outcomes? I ask this because the newly added algorithms are designed to work for univariate treatment and outcome only.
- I'm currently raising a ValueError when no observable backdoor set exists; is this how you handle this situation? I also raise a ValueError when the (sufficient) conditions we need to ensure the existence of an optimal efficient backdoor set do not hold.
- Computing the minimum cost efficient adjustment set of course requires the user to pass the costs associated with nodes. This is done when calling the identify_ate_effect method. If no costs are passed, they are assumed to be constant and equal to one, and hence the optimal minimum cost backdoor set is the optimal backdoor set of minimum cardinality. I wonder if the user should be warned about this, or whether putting it in the docs is enough.
- The algorithms admit a set of conditional variables that could be used to decide treatment (individualised treatment rules). This is also passed when calling the identify_ate_effect method; does that choice seem right? A rough sketch of the intended call is included after this list.
- I ran the new files I created through black so that they pass flake8, but I didn't do this with causal_identifier.py because there were pre-existing flake8 problems and I didn't want to make so many changes to the file at once.
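
A rough sketch of the intended call, using the names in this PR's diff (the graph construction and node names here are just placeholders):

```python
identifier = CausalIdentifier(
    graph,
    estimand_type="nonparametric-ate",
    method_name="efficient-mincost-adjustment",
)
estimand = identifier.identify_effect(
    costs=[(node, {"cost": 1}) for node in node_names],
    conditional_node_names=["X1"],  # variables used to decide treatment
)
```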
| null | 2022-07-16 14:24:08+00:00 | 2022-08-08 04:09:13+00:00 | dowhy/causal_identifier.py | import copy
import itertools
import logging
import sympy as sp
import sympy.stats as spstats
import dowhy.utils.cli_helpers as cli
from dowhy.utils.api import parse_state
class CausalIdentifier:
"""Class that implements different identification methods.
Currently supports backdoor and instrumental variable identification methods. The identification is based on the causal graph provided.
"""
NONPARAMETRIC_ATE="nonparametric-ate"
NONPARAMETRIC_NDE="nonparametric-nde"
NONPARAMETRIC_NIE="nonparametric-nie"
MAX_BACKDOOR_ITERATIONS = 100000
# Backdoor method names
BACKDOOR_DEFAULT="default"
BACKDOOR_EXHAUSTIVE="exhaustive-search"
BACKDOOR_MIN="minimal-adjustment"
BACKDOOR_MAX="maximal-adjustment"
METHOD_NAMES = {BACKDOOR_DEFAULT, BACKDOOR_EXHAUSTIVE, BACKDOOR_MIN, BACKDOOR_MAX}
DEFAULT_BACKDOOR_METHOD = BACKDOOR_DEFAULT
def __init__(self, graph, estimand_type,
method_name = "default",
proceed_when_unidentifiable=False):
self._graph = graph
self.estimand_type = estimand_type
self.treatment_name = graph.treatment_name
self.outcome_name = graph.outcome_name
self.method_name = method_name
self._proceed_when_unidentifiable = proceed_when_unidentifiable
self.logger = logging.getLogger(__name__)
def identify_effect(self, optimize_backdoor=False):
"""Main method that returns an identified estimand (if one exists).
If estimand_type is non-parametric ATE, then uses backdoor, instrumental variable and frontdoor identification methods, to check if an identified estimand exists, based on the causal graph.
:param self: instance of the CausalIdentifier class (or its subclass)
:returns: target estimand, an instance of the IdentifiedEstimand class
"""
# First, check if there is a directed path from action to outcome
if not self._graph.has_directed_path(self.treatment_name, self.outcome_name):
self.logger.warn("No directed path from treatment to outcome. Causal Effect is zero.")
return IdentifiedEstimand(self,
treatment_variable=self.treatment_name,
outcome_variable=self.outcome_name,
no_directed_path=True)
if self.estimand_type == CausalIdentifier.NONPARAMETRIC_ATE:
return self.identify_ate_effect(optimize_backdoor=optimize_backdoor)
elif self.estimand_type == CausalIdentifier.NONPARAMETRIC_NDE:
return self.identify_nde_effect()
elif self.estimand_type == CausalIdentifier.NONPARAMETRIC_NIE:
return self.identify_nie_effect()
else:
raise ValueError("Estimand type is not supported. Use either {0}, {1}, or {2}.".format(
CausalIdentifier.NONPARAMETRIC_ATE,
CausalIdentifier.NONPARAMETRIC_NDE,
CausalIdentifier.NONPARAMETRIC_NIE))
def identify_ate_effect(self, optimize_backdoor):
estimands_dict = {}
mediation_first_stage_confounders = None
mediation_second_stage_confounders = None
### 1. BACKDOOR IDENTIFICATION
# First, checking if there are any valid backdoor adjustment sets
if optimize_backdoor == False:
backdoor_sets = self.identify_backdoor(self.treatment_name, self.outcome_name)
else:
from dowhy.causal_identifiers.backdoor import Backdoor
path = Backdoor(self._graph._graph, self.treatment_name, self.outcome_name)
backdoor_sets = path.get_backdoor_vars()
estimands_dict, backdoor_variables_dict = self.build_backdoor_estimands_dict(
self.treatment_name,
self.outcome_name,
backdoor_sets,
estimands_dict)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = self.get_default_backdoor_set_id(backdoor_variables_dict)
if len(backdoor_variables_dict) > 0:
estimands_dict["backdoor"] = estimands_dict.get(str(default_backdoor_id), None)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(str(default_backdoor_id), None)
else:
estimands_dict["backdoor"] = None
### 2. INSTRUMENTAL VARIABLE IDENTIFICATION
# Now checking if there is also a valid iv estimand
instrument_names = self._graph.get_instruments(self.treatment_name,
self.outcome_name)
self.logger.info("Instrumental variables for treatment and outcome:" +
str(instrument_names))
if len(instrument_names) > 0:
iv_estimand_expr = self.construct_iv_estimand(
self.estimand_type,
self._graph.treatment_name,
self._graph.outcome_name,
instrument_names
)
self.logger.debug("Identified expression = " + str(iv_estimand_expr))
estimands_dict["iv"] = iv_estimand_expr
else:
estimands_dict["iv"] = None
### 3. FRONTDOOR IDENTIFICATION
# Now checking if there is a valid frontdoor variable
frontdoor_variables_names = self.identify_frontdoor()
self.logger.info("Frontdoor variables for treatment and outcome:" +
str(frontdoor_variables_names))
if len(frontdoor_variables_names) >0:
frontdoor_estimand_expr = self.construct_frontdoor_estimand(
self.estimand_type,
self._graph.treatment_name,
self._graph.outcome_name,
frontdoor_variables_names
)
self.logger.debug("Identified expression = " + str(frontdoor_estimand_expr))
estimands_dict["frontdoor"] = frontdoor_estimand_expr
mediation_first_stage_confounders = self.identify_mediation_first_stage_confounders(self.treatment_name, frontdoor_variables_names)
mediation_second_stage_confounders = self.identify_mediation_second_stage_confounders(frontdoor_variables_names, self.outcome_name)
else:
estimands_dict["frontdoor"] = None
# Finally returning the estimand object
estimand = IdentifiedEstimand(
self,
treatment_variable=self._graph.treatment_name,
outcome_variable=self._graph.outcome_name,
estimand_type=self.estimand_type,
estimands=estimands_dict,
backdoor_variables=backdoor_variables_dict,
instrumental_variables=instrument_names,
frontdoor_variables=frontdoor_variables_names,
mediation_first_stage_confounders=mediation_first_stage_confounders,
mediation_second_stage_confounders=mediation_second_stage_confounders,
default_backdoor_id = default_backdoor_id
)
return estimand
def identify_nie_effect(self):
estimands_dict = {}
### 1. FIRST DOING BACKDOOR IDENTIFICATION
# First, checking if there are any valid backdoor adjustment sets
backdoor_sets = self.identify_backdoor(self.treatment_name, self.outcome_name)
estimands_dict, backdoor_variables_dict = self.build_backdoor_estimands_dict(
self.treatment_name,
self.outcome_name,
backdoor_sets,
estimands_dict)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = self.get_default_backdoor_set_id(backdoor_variables_dict)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(str(default_backdoor_id), None)
### 2. SECOND, CHECKING FOR MEDIATORS
# Now checking if there are valid mediator variables
estimands_dict = {} # Need to reinitialize this dictionary to avoid including the backdoor sets
mediation_first_stage_confounders = None
mediation_second_stage_confounders = None
mediators_names = self.identify_mediation()
self.logger.info("Mediators for treatment and outcome:" +
str(mediators_names))
if len(mediators_names) >0:
mediation_estimand_expr = self.construct_mediation_estimand(
self.estimand_type,
self._graph.treatment_name,
self._graph.outcome_name,
mediators_names
)
self.logger.debug("Identified expression = " + str(mediation_estimand_expr))
estimands_dict["mediation"] = mediation_estimand_expr
mediation_first_stage_confounders = self.identify_mediation_first_stage_confounders(self.treatment_name, mediators_names)
mediation_second_stage_confounders = self.identify_mediation_second_stage_confounders(mediators_names, self.outcome_name)
else:
estimands_dict["mediation"] = None
# Finally returning the estimand object
estimand = IdentifiedEstimand(
self,
treatment_variable=self._graph.treatment_name,
outcome_variable=self._graph.outcome_name,
estimand_type=self.estimand_type,
estimands=estimands_dict,
backdoor_variables=backdoor_variables_dict,
instrumental_variables=None,
frontdoor_variables=None,
mediator_variables=mediators_names,
mediation_first_stage_confounders=mediation_first_stage_confounders,
mediation_second_stage_confounders=mediation_second_stage_confounders,
default_backdoor_id = None
)
return estimand
def identify_nde_effect(self):
estimands_dict = {}
### 1. FIRST DOING BACKDOOR IDENTIFICATION
# First, checking if there are any valid backdoor adjustment sets
backdoor_sets = self.identify_backdoor(self.treatment_name, self.outcome_name)
estimands_dict, backdoor_variables_dict = self.build_backdoor_estimands_dict(
self.treatment_name,
self.outcome_name,
backdoor_sets,
estimands_dict)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = self.get_default_backdoor_set_id(backdoor_variables_dict)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(str(default_backdoor_id), None)
### 2. SECOND, CHECKING FOR MEDIATORS
# Now checking if there are valid mediator variables
estimands_dict = {}
mediation_first_stage_confounders = None
mediation_second_stage_confounders = None
mediators_names = self.identify_mediation()
self.logger.info("Mediators for treatment and outcome:" +
str(mediators_names))
if len(mediators_names) >0:
mediation_estimand_expr = self.construct_mediation_estimand(
self.estimand_type,
self._graph.treatment_name,
self._graph.outcome_name,
mediators_names
)
self.logger.debug("Identified expression = " + str(mediation_estimand_expr))
estimands_dict["mediation"] = mediation_estimand_expr
mediation_first_stage_confounders = self.identify_mediation_first_stage_confounders(self.treatment_name, mediators_names)
mediation_second_stage_confounders = self.identify_mediation_second_stage_confounders(mediators_names, self.outcome_name)
else:
estimands_dict["mediation"] = None
# Finally returning the estimand object
estimand = IdentifiedEstimand(
self,
treatment_variable=self._graph.treatment_name,
outcome_variable=self._graph.outcome_name,
estimand_type=self.estimand_type,
estimands=estimands_dict,
backdoor_variables=backdoor_variables_dict,
instrumental_variables=None,
frontdoor_variables=None,
mediator_variables=mediators_names,
mediation_first_stage_confounders=mediation_first_stage_confounders,
mediation_second_stage_confounders=mediation_second_stage_confounders,
default_backdoor_id = None
)
return estimand
def identify_backdoor(self, treatment_name, outcome_name,
include_unobserved=False, dseparation_algo="default"):
backdoor_sets = []
backdoor_paths = None
bdoor_graph = None
if dseparation_algo == "naive":
backdoor_paths = self._graph.get_backdoor_paths(treatment_name, outcome_name)
elif dseparation_algo == "default":
bdoor_graph = self._graph.do_surgery(treatment_name,
remove_outgoing_edges=True)
else:
raise ValueError(f"d-separation algorithm {dseparation_algo} is not supported")
method_name = self.method_name if self.method_name != CausalIdentifier.BACKDOOR_DEFAULT else CausalIdentifier.DEFAULT_BACKDOOR_METHOD
# First, checking if empty set is a valid backdoor set
empty_set = set()
check = self._graph.check_valid_backdoor_set(treatment_name,
outcome_name, empty_set,
backdoor_paths=backdoor_paths, new_graph=bdoor_graph,
dseparation_algo=dseparation_algo)
if check["is_dseparated"]:
backdoor_sets.append({'backdoor_set':empty_set})
# If the method is `minimal-adjustment`, return the empty set right away.
if method_name == CausalIdentifier.BACKDOOR_MIN:
return backdoor_sets
# Second, checking for all other sets of variables. If include_unobserved is false, then only observed variables are eligible.
eligible_variables = self._graph.get_all_nodes(include_unobserved=include_unobserved) \
- set(treatment_name) \
- set(outcome_name)
eligible_variables -= self._graph.get_descendants(treatment_name)
# If var is d-separated from both treatment or outcome, it cannot
# be a part of the backdoor set
filt_eligible_variables = set()
for var in eligible_variables:
dsep_treat_var = self._graph.check_dseparation(
treatment_name, parse_state(var),
set())
dsep_outcome_var = self._graph.check_dseparation(
outcome_name, parse_state(var), set())
if not dsep_outcome_var or not dsep_treat_var:
filt_eligible_variables.add(var)
if method_name in CausalIdentifier.METHOD_NAMES:
backdoor_sets, found_valid_adjustment_set = self.find_valid_adjustment_sets(
treatment_name, outcome_name,
backdoor_paths, bdoor_graph,
dseparation_algo,
backdoor_sets, filt_eligible_variables,
method_name=method_name,
max_iterations= CausalIdentifier.MAX_BACKDOOR_ITERATIONS)
if method_name == CausalIdentifier.BACKDOOR_DEFAULT and found_valid_adjustment_set:
# repeat the above search with BACKDOOR_MIN
backdoor_sets, _ = self.find_valid_adjustment_sets(
treatment_name, outcome_name,
backdoor_paths, bdoor_graph,
dseparation_algo,
backdoor_sets, filt_eligible_variables,
method_name=CausalIdentifier.BACKDOOR_MIN,
max_iterations= CausalIdentifier.MAX_BACKDOOR_ITERATIONS)
else:
raise ValueError(f"Identifier method {method_name} not supported. Try one of the following: {CausalIdentifier.METHOD_NAMES}")
return backdoor_sets
def find_valid_adjustment_sets(self, treatment_name, outcome_name,
backdoor_paths, bdoor_graph, dseparation_algo,
backdoor_sets, filt_eligible_variables,
method_name, max_iterations):
num_iterations = 0
found_valid_adjustment_set = False
all_nodes_observed = self._graph.all_observed(self._graph.get_all_nodes())
# If `minimal-adjustment` method is specified, start the search from the set with minimum size. Otherwise, start from the largest.
set_sizes = range(1, len(filt_eligible_variables) + 1, 1) if method_name == CausalIdentifier.BACKDOOR_MIN else range(len(filt_eligible_variables), 0, -1)
for size_candidate_set in set_sizes:
for candidate_set in itertools.combinations(filt_eligible_variables, size_candidate_set):
check = self._graph.check_valid_backdoor_set(treatment_name,
outcome_name, candidate_set,
backdoor_paths=backdoor_paths,
new_graph = bdoor_graph,
dseparation_algo = dseparation_algo)
self.logger.debug("Candidate backdoor set: {0}, is_dseparated: {1}".format(candidate_set, check["is_dseparated"]))
if check["is_dseparated"]:
backdoor_sets.append({'backdoor_set': candidate_set})
found_valid_adjustment_set = True
num_iterations += 1
if method_name == CausalIdentifier.BACKDOOR_EXHAUSTIVE and num_iterations > max_iterations:
self.logger.warning(f"Max number of iterations {max_iterations} reached.")
break
# If the backdoor method is `maximal-adjustment` or `minimal-adjustment`, return the first found adjustment set.
if method_name in {CausalIdentifier.BACKDOOR_DEFAULT, CausalIdentifier.BACKDOOR_MAX, CausalIdentifier.BACKDOOR_MIN} and found_valid_adjustment_set:
break
# If all variables are observed, and the biggest eligible set
# does not satisfy backdoor, then none of its subsets will.
if method_name in {CausalIdentifier.BACKDOOR_DEFAULT, CausalIdentifier.BACKDOOR_MAX} and all_nodes_observed:
break
if num_iterations > max_iterations:
self.logger.warning(f"Max number of iterations {max_iterations} reached. Could not find a valid backdoor set.")
break
return backdoor_sets, found_valid_adjustment_set
def get_default_backdoor_set_id(self, backdoor_sets_dict):
# Adding a None estimand if no backdoor set found
if len(backdoor_sets_dict) == 0:
return None
# Default set contains minimum possible number of instrumental variables, to prevent lowering variance in the treatment variable.
instrument_names = set(self._graph.get_instruments(self.treatment_name, self.outcome_name))
iv_count_dict = {key: len(set(bdoor_set).intersection(instrument_names)) for key, bdoor_set in backdoor_sets_dict.items()}
min_iv_count = min(iv_count_dict.values())
min_iv_keys = {key for key, iv_count in iv_count_dict.items() if iv_count == min_iv_count}
min_iv_backdoor_sets_dict = {key: backdoor_sets_dict[key] for key in min_iv_keys}
# Default set is the one with the least number of adjustment variables (optimizing for efficiency)
min_set_length = 1000000
default_key = None
for key, bdoor_set in min_iv_backdoor_sets_dict.items():
if len(bdoor_set) < min_set_length:
min_set_length = len(bdoor_set)
default_key = key
return default_key
def build_backdoor_estimands_dict(self, treatment_name, outcome_name,
backdoor_sets, estimands_dict, proceed_when_unidentifiable=None):
"""Build the final dict for backdoor sets by filtering unobserved variables if needed.
"""
backdoor_variables_dict = {}
if proceed_when_unidentifiable is None:
proceed_when_unidentifiable = self._proceed_when_unidentifiable
is_identified = [ self._graph.all_observed(bset["backdoor_set"]) for bset in backdoor_sets ]
if any(is_identified):
self.logger.info("Causal effect can be identified.")
backdoor_sets_arr = [list(
bset["backdoor_set"])
for bset in backdoor_sets
if self._graph.all_observed(bset["backdoor_set"]) ]
else: # there is unobserved confounding
self.logger.warning("Backdoor identification failed.")
backdoor_sets_arr = []
for i in range(len(backdoor_sets_arr)):
backdoor_estimand_expr = self.construct_backdoor_estimand(
self.estimand_type, treatment_name,
outcome_name, backdoor_sets_arr[i])
self.logger.debug("Identified expression = " + str(backdoor_estimand_expr))
estimands_dict["backdoor"+str(i+1)] = backdoor_estimand_expr
backdoor_variables_dict["backdoor"+str(i+1)] = backdoor_sets_arr[i]
return estimands_dict, backdoor_variables_dict
def identify_frontdoor(self, dseparation_algo="default"):
""" Find a valid frontdoor variable if it exists.
Currently only supports a single variable frontdoor set.
"""
frontdoor_var = None
frontdoor_paths = None
fdoor_graph = None
if dseparation_algo == "default":
cond1_graph = self._graph.do_surgery(self.treatment_name,
remove_incoming_edges=True)
bdoor_graph1 = self._graph.do_surgery(self.treatment_name,
remove_outgoing_edges=True)
elif dseparation_algo == "naive":
frontdoor_paths = self._graph.get_all_directed_paths(self.treatment_name, self.outcome_name)
else:
raise ValueError(f"d-separation algorithm {dseparation_algo} is not supported")
eligible_variables = self._graph.get_descendants(self.treatment_name) \
- set(self.outcome_name) \
- set(self._graph.get_descendants(self.outcome_name))
# For simplicity, assuming a one-variable frontdoor set
for candidate_var in eligible_variables:
# Cond 1: All directed paths intercepted by candidate_var
cond1 = self._graph.check_valid_frontdoor_set(
self.treatment_name, self.outcome_name,
parse_state(candidate_var),
frontdoor_paths=frontdoor_paths,
new_graph=cond1_graph,
dseparation_algo=dseparation_algo)
self.logger.debug("Candidate frontdoor set: {0}, is_dseparated: {1}".format(candidate_var, cond1))
if not cond1:
continue
# Cond 2: No confounding between treatment and candidate var
cond2 = self._graph.check_valid_backdoor_set(
self.treatment_name, parse_state(candidate_var),
set(),
backdoor_paths=None,
new_graph= bdoor_graph1,
dseparation_algo=dseparation_algo)
if not cond2:
continue
# Cond 3: treatment blocks all confounding between candidate_var and outcome
bdoor_graph2 = self._graph.do_surgery(candidate_var,
remove_outgoing_edges=True)
cond3 = self._graph.check_valid_backdoor_set(
parse_state(candidate_var), self.outcome_name,
self.treatment_name,
backdoor_paths=None,
new_graph= bdoor_graph2,
dseparation_algo=dseparation_algo)
is_valid_frontdoor = cond1 and cond2 and cond3
if is_valid_frontdoor:
frontdoor_var = candidate_var
break
return parse_state(frontdoor_var)
def identify_mediation(self):
""" Find a valid mediator if it exists.
Currently only supports a single variable mediator set.
"""
mediation_var = None
mediation_paths = self._graph.get_all_directed_paths(self.treatment_name, self.outcome_name)
eligible_variables = self._graph.get_descendants(self.treatment_name) \
- set(self.outcome_name)
# For simplicity, assuming a one-variable mediation set
for candidate_var in eligible_variables:
is_valid_mediation = self._graph.check_valid_mediation_set(self.treatment_name,
self.outcome_name, parse_state(candidate_var), mediation_paths=mediation_paths)
self.logger.debug("Candidate mediation set: {0}, on_mediating_path: {1}".format(candidate_var, is_valid_mediation))
if is_valid_mediation:
mediation_var = candidate_var
break
return parse_state(mediation_var)
return None
def identify_mediation_first_stage_confounders(self, treatment_name, mediators_names):
# Create estimands dict as per the API for backdoor, but do not return it
estimands_dict = {}
backdoor_sets = self.identify_backdoor(treatment_name, mediators_names)
estimands_dict, backdoor_variables_dict = self.build_backdoor_estimands_dict(
treatment_name,
mediators_names,
backdoor_sets,
estimands_dict,
proceed_when_unidentifiable=True)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = self.get_default_backdoor_set_id(backdoor_variables_dict)
estimands_dict["backdoor"] = estimands_dict.get(str(default_backdoor_id), None)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(str(default_backdoor_id), None)
return backdoor_variables_dict
def identify_mediation_second_stage_confounders(self, mediators_names, outcome_name):
# Create estimands dict as per the API for backdoor, but do not return it
estimands_dict = {}
backdoor_sets = self.identify_backdoor(mediators_names, outcome_name)
estimands_dict, backdoor_variables_dict = self.build_backdoor_estimands_dict(
mediators_names,
outcome_name,
backdoor_sets,
estimands_dict,
proceed_when_unidentifiable=True)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = self.get_default_backdoor_set_id(backdoor_variables_dict)
estimands_dict["backdoor"] = estimands_dict.get(str(default_backdoor_id), None)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(str(default_backdoor_id), None)
return backdoor_variables_dict
def construct_backdoor_estimand(self, estimand_type, treatment_name,
outcome_name, common_causes):
# TODO: outputs string for now, but ideally should do symbolic
# expressions Mon 19 Feb 2018 04:54:17 PM DST
# TODO Better support for multivariate treatments
expr = None
outcome_name = outcome_name[0]
num_expr_str = outcome_name
if len(common_causes)>0:
num_expr_str += "|" + ",".join(common_causes)
expr = "d(" + num_expr_str + ")/d" + ",".join(treatment_name)
sym_mu = sp.Symbol("mu")
sym_sigma = sp.Symbol("sigma", positive=True)
sym_outcome = spstats.Normal(num_expr_str, sym_mu, sym_sigma)
sym_treatment_symbols = [sp.Symbol(t) for t in treatment_name]
sym_treatment = sp.Array(sym_treatment_symbols)
sym_conditional_outcome = spstats.Expectation(sym_outcome)
sym_effect = sp.Derivative(sym_conditional_outcome, sym_treatment)
sym_assumptions = {
'Unconfoundedness': (
u"If U\N{RIGHTWARDS ARROW}{{{0}}} and U\N{RIGHTWARDS ARROW}{1}"
" then P({1}|{0},{2},U) = P({1}|{0},{2})"
).format(",".join(treatment_name), outcome_name, ",".join(common_causes))
}
estimand = {
'estimand': sym_effect,
'assumptions': sym_assumptions
}
return estimand
def construct_iv_estimand(self, estimand_type, treatment_name,
outcome_name, instrument_names):
# TODO: support multivariate treatments better.
expr = None
outcome_name = outcome_name[0]
sym_outcome = spstats.Normal(outcome_name, 0, 1)
sym_treatment_symbols = [spstats.Normal(t, 0, 1) for t in treatment_name]
sym_treatment = sp.Array(sym_treatment_symbols)
sym_instrument_symbols = [sp.Symbol(inst) for inst in instrument_names]
sym_instrument = sp.Array(sym_instrument_symbols) # ",".join(instrument_names))
sym_outcome_derivative = sp.Derivative(sym_outcome, sym_instrument)
sym_treatment_derivative = sp.Derivative(sym_treatment, sym_instrument)
sym_effect = spstats.Expectation(sym_outcome_derivative / sym_treatment_derivative)
sym_assumptions = {
"As-if-random": (
"If U\N{RIGHTWARDS ARROW}\N{RIGHTWARDS ARROW}{0} then "
"\N{NOT SIGN}(U \N{RIGHTWARDS ARROW}\N{RIGHTWARDS ARROW}{{{1}}})"
).format(outcome_name, ",".join(instrument_names)),
"Exclusion": (
u"If we remove {{{0}}}\N{RIGHTWARDS ARROW}{{{1}}}, then "
u"\N{NOT SIGN}({{{0}}}\N{RIGHTWARDS ARROW}{2})"
).format(",".join(instrument_names), ",".join(treatment_name),
outcome_name)
}
estimand = {
'estimand': sym_effect,
'assumptions': sym_assumptions
}
return estimand
def construct_frontdoor_estimand(self, estimand_type, treatment_name,
outcome_name, frontdoor_variables_names):
# TODO: support multivariate treatments better.
expr = None
outcome_name = outcome_name[0]
sym_outcome = spstats.Normal(outcome_name, 0, 1)
sym_treatment_symbols = [spstats.Normal(t, 0, 1) for t in treatment_name]
sym_treatment = sp.Array(sym_treatment_symbols)
sym_frontdoor_symbols = [sp.Symbol(inst) for inst in frontdoor_variables_names]
sym_frontdoor = sp.Array(sym_frontdoor_symbols) # ",".join(instrument_names))
sym_outcome_derivative = sp.Derivative(sym_outcome, sym_frontdoor)
sym_treatment_derivative = sp.Derivative(sym_frontdoor, sym_treatment)
sym_effect = spstats.Expectation(sym_treatment_derivative * sym_outcome_derivative)
sym_assumptions = {
"Full-mediation": (
"{2} intercepts (blocks) all directed paths from {0} to {1}."
).format(",".join(treatment_name), ",".join(outcome_name), ",".join(frontdoor_variables_names)),
"First-stage-unconfoundedness": (
u"If U\N{RIGHTWARDS ARROW}{{{0}}} and U\N{RIGHTWARDS ARROW}{{{1}}}"
" then P({1}|{0},U) = P({1}|{0})"
).format(",".join(treatment_name), ",".join(frontdoor_variables_names)),
"Second-stage-unconfoundedness": (
u"If U\N{RIGHTWARDS ARROW}{{{2}}} and U\N{RIGHTWARDS ARROW}{1}"
" then P({1}|{2}, {0}, U) = P({1}|{2}, {0})"
).format(",".join(treatment_name), outcome_name, ",".join(frontdoor_variables_names))
}
estimand = {
'estimand': sym_effect,
'assumptions': sym_assumptions
}
return estimand
def construct_mediation_estimand(self, estimand_type, treatment_name,
outcome_name, mediators_names):
# TODO: support multivariate treatments better.
expr = None
if estimand_type in (CausalIdentifier.NONPARAMETRIC_NDE, CausalIdentifier.NONPARAMETRIC_NIE):
outcome_name = outcome_name[0]
sym_outcome = spstats.Normal(outcome_name, 0, 1)
sym_treatment_symbols = [spstats.Normal(t, 0, 1) for t in treatment_name]
sym_treatment = sp.Array(sym_treatment_symbols)
sym_mediators_symbols = [sp.Symbol(inst) for inst in mediators_names]
sym_mediators = sp.Array(sym_mediators_symbols)
sym_outcome_derivative = sp.Derivative(sym_outcome, sym_mediators)
sym_treatment_derivative = sp.Derivative(sym_mediators, sym_treatment)
# For direct effect
num_expr_str = outcome_name
if len(mediators_names)>0:
num_expr_str += "|" + ",".join(mediators_names)
sym_mu = sp.Symbol("mu")
sym_sigma = sp.Symbol("sigma", positive=True)
sym_conditional_outcome = spstats.Normal(num_expr_str, sym_mu, sym_sigma)
sym_directeffect_derivative = sp.Derivative(sym_conditional_outcome, sym_treatment)
if estimand_type == CausalIdentifier.NONPARAMETRIC_NIE:
sym_effect = spstats.Expectation(sym_treatment_derivative * sym_outcome_derivative)
elif estimand_type == CausalIdentifier.NONPARAMETRIC_NDE:
sym_effect = spstats.Expectation(sym_directeffect_derivative)
sym_assumptions = {
"Mediation": (
"{2} intercepts (blocks) all directed paths from {0} to {1} except the path {{{0}}}\N{RIGHTWARDS ARROW}{{{1}}}."
).format(",".join(treatment_name), ",".join(outcome_name), ",".join(mediators_names)),
"First-stage-unconfoundedness": (
u"If U\N{RIGHTWARDS ARROW}{{{0}}} and U\N{RIGHTWARDS ARROW}{{{1}}}"
" then P({1}|{0},U) = P({1}|{0})"
).format(",".join(treatment_name), ",".join(mediators_names)),
"Second-stage-unconfoundedness": (
u"If U\N{RIGHTWARDS ARROW}{{{2}}} and U\N{RIGHTWARDS ARROW}{1}"
" then P({1}|{2}, {0}, U) = P({1}|{2}, {0})"
).format(",".join(treatment_name), outcome_name, ",".join(mediators_names))
}
else:
raise ValueError("Estimand type not supported. Supported estimand types are {0} or {1}'.".format(
CausalIdentifier.NONPARAMETRIC_NDE,
CausalIdentifier.NONPARAMETRIC_NIE))
estimand = {
'estimand': sym_effect,
'assumptions': sym_assumptions
}
return estimand
class IdentifiedEstimand:
"""Class for storing a causal estimand, typically as a result of the identification step.
"""
def __init__(self, identifier, treatment_variable, outcome_variable,
estimand_type=None, estimands=None,
backdoor_variables=None, instrumental_variables=None,
frontdoor_variables=None,
mediator_variables=None,
mediation_first_stage_confounders=None,
mediation_second_stage_confounders=None,
default_backdoor_id=None, identifier_method=None,
no_directed_path=False):
self.identifier = identifier
self.treatment_variable = parse_state(treatment_variable)
self.outcome_variable = parse_state(outcome_variable)
self.backdoor_variables = backdoor_variables
self.instrumental_variables = parse_state(instrumental_variables)
self.frontdoor_variables = parse_state(frontdoor_variables)
self.mediator_variables = parse_state(mediator_variables)
self.mediation_first_stage_confounders=mediation_first_stage_confounders
self.mediation_second_stage_confounders=mediation_second_stage_confounders
self.estimand_type = estimand_type
self.estimands = estimands
self.default_backdoor_id = default_backdoor_id
self.identifier_method = identifier_method
self.no_directed_path = no_directed_path
def set_identifier_method(self, identifier_name):
self.identifier_method = identifier_name
def get_backdoor_variables(self, key=None):
""" Return a list containing the backdoor variables.
If the calling estimator method is a backdoor method, return the
backdoor variables corresponding to its target estimand.
Otherwise, return the backdoor variables for the default backdoor estimand.
"""
if key is None:
if self.identifier_method and self.identifier_method.startswith("backdoor"):
return self.backdoor_variables[self.identifier_method]
elif self.backdoor_variables is not None and len(self.backdoor_variables) > 0:
return self.backdoor_variables[self.default_backdoor_id]
else:
return []
else:
return self.backdoor_variables[key]
def set_backdoor_variables(self, bdoor_variables_arr, key=None):
if key is None:
key = self.identifier_method
self.backdoor_variables[key] = bdoor_variables_arr
def get_frontdoor_variables(self):
"""Return a list containing the frontdoor variables (if present)
"""
return self.frontdoor_variables
def get_mediator_variables(self):
"""Return a list containing the mediator variables (if present)
"""
return self.mediator_variables
def get_instrumental_variables(self):
"""Return a list containing the instrumental variables (if present)
"""
return self.instrumental_variables
def __deepcopy__(self, memo):
return IdentifiedEstimand(
self.identifier, # not deep copied
copy.deepcopy(self.treatment_variable),
copy.deepcopy(self.outcome_variable),
estimand_type=copy.deepcopy(self.estimand_type),
estimands=copy.deepcopy(self.estimands),
backdoor_variables=copy.deepcopy(self.backdoor_variables),
instrumental_variables=copy.deepcopy(self.instrumental_variables),
frontdoor_variables=copy.deepcopy(self.frontdoor_variables),
mediator_variables=copy.deepcopy(self.mediator_variables),
default_backdoor_id=copy.deepcopy(self.default_backdoor_id),
identifier_method=copy.deepcopy(self.identifier_method)
)
def __str__(self, only_target_estimand=False, show_all_backdoor_sets=False):
if self.no_directed_path:
s = "No directed path from {0} to {1} in the causal graph.".format(
self.treatment_variable,
self.outcome_variable)
s += "\nCausal effect is zero."
return s
s = "Estimand type: {0}\n".format(self.estimand_type)
i = 1
has_valid_backdoor = sum("backdoor" in key for key in self.estimands.keys())
for k, v in self.estimands.items():
if show_all_backdoor_sets:
# Do not show backdoor key unless it is the only backdoor set.
if k == "backdoor" and has_valid_backdoor > 1:
continue
else:
# Just show the default backdoor set
if k.startswith("backdoor") and k != "backdoor":
continue
if only_target_estimand and k != self.identifier_method:
continue
s += "\n### Estimand : {0}\n".format(i)
s += "Estimand name: {0}".format(k)
if k == self.default_backdoor_id:
s += " (Default)"
s += "\n"
if v is None:
s += "No such variable(s) found!\n"
else:
sp_expr_str = sp.pretty(v["estimand"], use_unicode=True)
s += "Estimand expression:\n{0}\n".format(sp_expr_str)
j = 1
for ass_name, ass_str in v["assumptions"].items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
i += 1
return s
| import copy
import itertools
import logging
import sympy as sp
import sympy.stats as spstats
import dowhy.utils.cli_helpers as cli
from dowhy.causal_identifiers.efficient_backdoor import EfficientBackdoor
from dowhy.utils.api import parse_state
class CausalIdentifier:
"""Class that implements different identification methods.
Currently supports backdoor and instrumental variable identification methods. The identification is based on the causal graph provided.
"""
NONPARAMETRIC_ATE = "nonparametric-ate"
NONPARAMETRIC_NDE = "nonparametric-nde"
NONPARAMETRIC_NIE = "nonparametric-nie"
MAX_BACKDOOR_ITERATIONS = 100000
# Backdoor method names
BACKDOOR_DEFAULT = "default"
BACKDOOR_EXHAUSTIVE = "exhaustive-search"
BACKDOOR_MIN = "minimal-adjustment"
BACKDOOR_MAX = "maximal-adjustment"
BACKDOOR_EFFICIENT = "efficient-adjustment"
BACKDOOR_MIN_EFFICIENT = "efficient-minimal-adjustment"
BACKDOOR_MINCOST_EFFICIENT = "efficient-mincost-adjustment"
METHOD_NAMES = {
BACKDOOR_DEFAULT,
BACKDOOR_EXHAUSTIVE,
BACKDOOR_MIN,
BACKDOOR_MAX,
BACKDOOR_EFFICIENT,
BACKDOOR_MIN_EFFICIENT,
BACKDOOR_MINCOST_EFFICIENT,
}
EFFICIENT_METHODS = {
BACKDOOR_EFFICIENT,
BACKDOOR_MIN_EFFICIENT,
BACKDOOR_MINCOST_EFFICIENT,
}
DEFAULT_BACKDOOR_METHOD = BACKDOOR_DEFAULT
def __init__(
self,
graph,
estimand_type,
method_name="default",
proceed_when_unidentifiable=False,
):
self._graph = graph
self.estimand_type = estimand_type
self.treatment_name = graph.treatment_name
self.outcome_name = graph.outcome_name
self.method_name = method_name
self._proceed_when_unidentifiable = proceed_when_unidentifiable
self.logger = logging.getLogger(__name__)
def identify_effect(
self, optimize_backdoor=False, costs=None, conditional_node_names=None
):
"""Main method that returns an identified estimand (if one exists).
If estimand_type is non-parametric ATE, then uses backdoor, instrumental variable and frontdoor identification methods, to check if an identified estimand exists, based on the causal graph.
:param self: instance of the CausalIdentifier class (or its subclass)
:param optimize_backdoor: if True, uses an optimised algorithm to compute the backdoor sets
:param costs: non-negative costs associated with variables in the graph. Only used
for estimand_type='non-parametric-ate' and method_name='efficient-mincost-adjustment'. If
no costs are provided by the user, and method_name='efficient-mincost-adjustment', costs
are assumed to be equal to one for all variables in the graph.
:param conditional_node_names: variables that are used to determine treatment. If none are
provided, it is assumed that the intervention is static.
:returns: target estimand, an instance of the IdentifiedEstimand class
"""
# First, check if there is a directed path from action to outcome
if not self._graph.has_directed_path(self.treatment_name, self.outcome_name):
            self.logger.warning(
"No directed path from treatment to outcome. Causal Effect is zero."
)
return IdentifiedEstimand(
self,
treatment_variable=self.treatment_name,
outcome_variable=self.outcome_name,
no_directed_path=True,
)
if self.estimand_type == CausalIdentifier.NONPARAMETRIC_ATE:
return self.identify_ate_effect(
optimize_backdoor=optimize_backdoor,
costs=costs,
conditional_node_names=conditional_node_names,
)
elif self.estimand_type == CausalIdentifier.NONPARAMETRIC_NDE:
return self.identify_nde_effect()
elif self.estimand_type == CausalIdentifier.NONPARAMETRIC_NIE:
return self.identify_nie_effect()
else:
raise ValueError(
"Estimand type is not supported. Use either {0}, {1}, or {2}.".format(
CausalIdentifier.NONPARAMETRIC_ATE,
CausalIdentifier.NONPARAMETRIC_NDE,
CausalIdentifier.NONPARAMETRIC_NIE,
)
)
def identify_ate_effect(
self, optimize_backdoor, costs=None, conditional_node_names=None
):
estimands_dict = {}
mediation_first_stage_confounders = None
mediation_second_stage_confounders = None
### 1. BACKDOOR IDENTIFICATION
# Pick algorithm to compute backdoor sets according to method chosen
if self.method_name not in CausalIdentifier.EFFICIENT_METHODS:
# First, checking if there are any valid backdoor adjustment sets
if optimize_backdoor == False:
backdoor_sets = self.identify_backdoor(
self.treatment_name, self.outcome_name
)
else:
from dowhy.causal_identifiers.backdoor import Backdoor
path = Backdoor(
self._graph._graph, self.treatment_name, self.outcome_name
)
backdoor_sets = path.get_backdoor_vars()
elif self.method_name in CausalIdentifier.EFFICIENT_METHODS:
backdoor_sets = self.identify_efficient_backdoor(
costs=costs, conditional_node_names=conditional_node_names
)
estimands_dict, backdoor_variables_dict = self.build_backdoor_estimands_dict(
self.treatment_name, self.outcome_name, backdoor_sets, estimands_dict
)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = self.get_default_backdoor_set_id(backdoor_variables_dict)
if len(backdoor_variables_dict) > 0:
estimands_dict["backdoor"] = estimands_dict.get(
str(default_backdoor_id), None
)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(
str(default_backdoor_id), None
)
else:
estimands_dict["backdoor"] = None
### 2. INSTRUMENTAL VARIABLE IDENTIFICATION
# Now checking if there is also a valid iv estimand
instrument_names = self._graph.get_instruments(
self.treatment_name, self.outcome_name
)
self.logger.info(
"Instrumental variables for treatment and outcome:" + str(instrument_names)
)
if len(instrument_names) > 0:
iv_estimand_expr = self.construct_iv_estimand(
self.estimand_type,
self._graph.treatment_name,
self._graph.outcome_name,
instrument_names,
)
self.logger.debug("Identified expression = " + str(iv_estimand_expr))
estimands_dict["iv"] = iv_estimand_expr
else:
estimands_dict["iv"] = None
### 3. FRONTDOOR IDENTIFICATION
# Now checking if there is a valid frontdoor variable
frontdoor_variables_names = self.identify_frontdoor()
self.logger.info(
"Frontdoor variables for treatment and outcome:"
+ str(frontdoor_variables_names)
)
if len(frontdoor_variables_names) > 0:
frontdoor_estimand_expr = self.construct_frontdoor_estimand(
self.estimand_type,
self._graph.treatment_name,
self._graph.outcome_name,
frontdoor_variables_names,
)
self.logger.debug("Identified expression = " + str(frontdoor_estimand_expr))
estimands_dict["frontdoor"] = frontdoor_estimand_expr
mediation_first_stage_confounders = self.identify_mediation_first_stage_confounders(
self.treatment_name, frontdoor_variables_names
)
mediation_second_stage_confounders = self.identify_mediation_second_stage_confounders(
frontdoor_variables_names, self.outcome_name
)
else:
estimands_dict["frontdoor"] = None
# Finally returning the estimand object
estimand = IdentifiedEstimand(
self,
treatment_variable=self._graph.treatment_name,
outcome_variable=self._graph.outcome_name,
estimand_type=self.estimand_type,
estimands=estimands_dict,
backdoor_variables=backdoor_variables_dict,
instrumental_variables=instrument_names,
frontdoor_variables=frontdoor_variables_names,
mediation_first_stage_confounders=mediation_first_stage_confounders,
mediation_second_stage_confounders=mediation_second_stage_confounders,
default_backdoor_id=default_backdoor_id,
)
return estimand
def identify_nie_effect(self):
estimands_dict = {}
### 1. FIRST DOING BACKDOOR IDENTIFICATION
# First, checking if there are any valid backdoor adjustment sets
backdoor_sets = self.identify_backdoor(self.treatment_name, self.outcome_name)
estimands_dict, backdoor_variables_dict = self.build_backdoor_estimands_dict(
self.treatment_name, self.outcome_name, backdoor_sets, estimands_dict
)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = self.get_default_backdoor_set_id(backdoor_variables_dict)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(
str(default_backdoor_id), None
)
### 2. SECOND, CHECKING FOR MEDIATORS
# Now checking if there are valid mediator variables
estimands_dict = (
{}
) # Need to reinitialize this dictionary to avoid including the backdoor sets
mediation_first_stage_confounders = None
mediation_second_stage_confounders = None
mediators_names = self.identify_mediation()
self.logger.info("Mediators for treatment and outcome:" + str(mediators_names))
if len(mediators_names) > 0:
mediation_estimand_expr = self.construct_mediation_estimand(
self.estimand_type,
self._graph.treatment_name,
self._graph.outcome_name,
mediators_names,
)
self.logger.debug("Identified expression = " + str(mediation_estimand_expr))
estimands_dict["mediation"] = mediation_estimand_expr
mediation_first_stage_confounders = self.identify_mediation_first_stage_confounders(
self.treatment_name, mediators_names
)
mediation_second_stage_confounders = self.identify_mediation_second_stage_confounders(
mediators_names, self.outcome_name
)
else:
estimands_dict["mediation"] = None
# Finally returning the estimand object
estimand = IdentifiedEstimand(
self,
treatment_variable=self._graph.treatment_name,
outcome_variable=self._graph.outcome_name,
estimand_type=self.estimand_type,
estimands=estimands_dict,
backdoor_variables=backdoor_variables_dict,
instrumental_variables=None,
frontdoor_variables=None,
mediator_variables=mediators_names,
mediation_first_stage_confounders=mediation_first_stage_confounders,
mediation_second_stage_confounders=mediation_second_stage_confounders,
default_backdoor_id=None,
)
return estimand
def identify_nde_effect(self):
estimands_dict = {}
### 1. FIRST DOING BACKDOOR IDENTIFICATION
# First, checking if there are any valid backdoor adjustment sets
backdoor_sets = self.identify_backdoor(self.treatment_name, self.outcome_name)
estimands_dict, backdoor_variables_dict = self.build_backdoor_estimands_dict(
self.treatment_name, self.outcome_name, backdoor_sets, estimands_dict
)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = self.get_default_backdoor_set_id(backdoor_variables_dict)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(
str(default_backdoor_id), None
)
### 2. SECOND, CHECKING FOR MEDIATORS
# Now checking if there are valid mediator variables
estimands_dict = {}
mediation_first_stage_confounders = None
mediation_second_stage_confounders = None
mediators_names = self.identify_mediation()
self.logger.info("Mediators for treatment and outcome:" + str(mediators_names))
if len(mediators_names) > 0:
mediation_estimand_expr = self.construct_mediation_estimand(
self.estimand_type,
self._graph.treatment_name,
self._graph.outcome_name,
mediators_names,
)
self.logger.debug("Identified expression = " + str(mediation_estimand_expr))
estimands_dict["mediation"] = mediation_estimand_expr
mediation_first_stage_confounders = self.identify_mediation_first_stage_confounders(
self.treatment_name, mediators_names
)
mediation_second_stage_confounders = self.identify_mediation_second_stage_confounders(
mediators_names, self.outcome_name
)
else:
estimands_dict["mediation"] = None
# Finally returning the estimand object
estimand = IdentifiedEstimand(
self,
treatment_variable=self._graph.treatment_name,
outcome_variable=self._graph.outcome_name,
estimand_type=self.estimand_type,
estimands=estimands_dict,
backdoor_variables=backdoor_variables_dict,
instrumental_variables=None,
frontdoor_variables=None,
mediator_variables=mediators_names,
mediation_first_stage_confounders=mediation_first_stage_confounders,
mediation_second_stage_confounders=mediation_second_stage_confounders,
default_backdoor_id=None,
)
return estimand
def identify_backdoor(
self,
treatment_name,
outcome_name,
include_unobserved=False,
dseparation_algo="default",
):
backdoor_sets = []
backdoor_paths = None
bdoor_graph = None
if dseparation_algo == "naive":
backdoor_paths = self._graph.get_backdoor_paths(
treatment_name, outcome_name
)
elif dseparation_algo == "default":
bdoor_graph = self._graph.do_surgery(
treatment_name, remove_outgoing_edges=True
)
else:
raise ValueError(
f"d-separation algorithm {dseparation_algo} is not supported"
)
method_name = (
self.method_name
if self.method_name != CausalIdentifier.BACKDOOR_DEFAULT
else CausalIdentifier.DEFAULT_BACKDOOR_METHOD
)
# First, checking if empty set is a valid backdoor set
empty_set = set()
check = self._graph.check_valid_backdoor_set(
treatment_name,
outcome_name,
empty_set,
backdoor_paths=backdoor_paths,
new_graph=bdoor_graph,
dseparation_algo=dseparation_algo,
)
if check["is_dseparated"]:
backdoor_sets.append({"backdoor_set": empty_set})
# If the method is `minimal-adjustment`, return the empty set right away.
if method_name == CausalIdentifier.BACKDOOR_MIN:
return backdoor_sets
# Second, checking for all other sets of variables. If include_unobserved is false, then only observed variables are eligible.
eligible_variables = (
self._graph.get_all_nodes(include_unobserved=include_unobserved)
- set(treatment_name)
- set(outcome_name)
)
eligible_variables -= self._graph.get_descendants(treatment_name)
# If var is d-separated from both treatment or outcome, it cannot
# be a part of the backdoor set
filt_eligible_variables = set()
for var in eligible_variables:
dsep_treat_var = self._graph.check_dseparation(
treatment_name, parse_state(var), set()
)
dsep_outcome_var = self._graph.check_dseparation(
outcome_name, parse_state(var), set()
)
if not dsep_outcome_var or not dsep_treat_var:
filt_eligible_variables.add(var)
if method_name in CausalIdentifier.METHOD_NAMES:
backdoor_sets, found_valid_adjustment_set = self.find_valid_adjustment_sets(
treatment_name,
outcome_name,
backdoor_paths,
bdoor_graph,
dseparation_algo,
backdoor_sets,
filt_eligible_variables,
method_name=method_name,
max_iterations=CausalIdentifier.MAX_BACKDOOR_ITERATIONS,
)
if (
method_name == CausalIdentifier.BACKDOOR_DEFAULT
and found_valid_adjustment_set
):
# repeat the above search with BACKDOOR_MIN
backdoor_sets, _ = self.find_valid_adjustment_sets(
treatment_name,
outcome_name,
backdoor_paths,
bdoor_graph,
dseparation_algo,
backdoor_sets,
filt_eligible_variables,
method_name=CausalIdentifier.BACKDOOR_MIN,
max_iterations=CausalIdentifier.MAX_BACKDOOR_ITERATIONS,
)
else:
raise ValueError(
f"Identifier method {method_name} not supported. Try one of the following: {CausalIdentifier.METHOD_NAMES}"
)
return backdoor_sets
def identify_efficient_backdoor(self, costs=None, conditional_node_names=None):
"""Method implementing algorithms to compute efficient backdoor sets, as
described in Rotnitzky and Smucler (2020), Smucler, Sapienza and Rotnitzky (2021)
and Smucler and Rotnitzky (2022).
For method_name='efficient-adjustment', computes an optimal backdoor set,
that is, a backdoor set comprised of observable variables that yields non-parametric
estimators of the interventional mean with the smallest asymptotic variance
among those that are based on observable backdoor sets. This optimal backdoor
set always exists when no variables are latent, and the algorithm is guaranteed to compute
it in this case. Under a non-parametric graphical model with latent variables,
such a backdoor set can fail to exist. When certain sufficient conditions under which it is
known that such a backdoor set exists are not satisfied, an error is raised.
For method_name='efficient-minimal-adjustment', computes an optimal minimal backdoor set,
that is, a minimal backdoor set comprised of observable variables that yields non-parametric
estimators of the interventional mean with the smallest asymptotic variance
among those that are based on observable minimal backdoor sets.
For method_name='efficient-mincost-adjustment', computes an optimal minimum cost backdoor set,
that is, a minimum cost backdoor set comprised of observable variables that yields non-parametric
estimators of the interventional mean with the smallest asymptotic variance
among those that are based on observable minimum cost backdoor sets. The cost
of a backdoor set is defined as the sum of the costs of the variables that comprise it.
The various optimal backdoor sets computed by this method are not only optimal under
non-parametric graphical models and non-parametric estimators of interventional mean,
but also under linear graphical models and OLS estimators, per results in Henckel, Perkovic
and Maathuis (2020).
:param costs: a list with non-negative costs associated with variables in the graph. Only used
        for estimand_type='non-parametric-ate' and method_name='efficient-mincost-adjustment'. If
        no costs are provided by the user, and method_name='efficient-mincost-adjustment', costs
are assumed to be equal to one for all variables in the graph. The structure of the list should
be of the form [(node, {"cost": x}) for node in nodes].
:param conditional_node_names: variables that are used to determine treatment. If none are
provided, it is assumed that the intervention sets the treatment to a constant.
:returns: backdoor_sets, a list of dictionaries, with each dictionary
having as values a backdoor set.
"""
if costs is None and self.method_name == "efficient-mincost-adjustment":
self.logger.warning(
"No costs were passed, so they will be assumed to be constant and equal to 1."
)
efficient_bd = EfficientBackdoor(
graph=self._graph,
conditional_node_names=conditional_node_names,
costs=costs,
)
if self.method_name == "efficient-adjustment":
backdoor_set = efficient_bd.optimal_adj_set()
backdoor_sets = [{"backdoor_set": tuple(backdoor_set)}]
elif self.method_name == "efficient-minimal-adjustment":
backdoor_set = efficient_bd.optimal_minimal_adj_set()
backdoor_sets = [{"backdoor_set": tuple(backdoor_set)}]
elif self.method_name == "efficient-mincost-adjustment":
backdoor_set = efficient_bd.optimal_mincost_adj_set()
backdoor_sets = [{"backdoor_set": tuple(backdoor_set)}]
return backdoor_sets
def find_valid_adjustment_sets(
self,
treatment_name,
outcome_name,
backdoor_paths,
bdoor_graph,
dseparation_algo,
backdoor_sets,
filt_eligible_variables,
method_name,
max_iterations,
):
num_iterations = 0
found_valid_adjustment_set = False
all_nodes_observed = self._graph.all_observed(self._graph.get_all_nodes())
# If `minimal-adjustment` method is specified, start the search from the set with minimum size. Otherwise, start from the largest.
set_sizes = (
range(1, len(filt_eligible_variables) + 1, 1)
if method_name == CausalIdentifier.BACKDOOR_MIN
else range(len(filt_eligible_variables), 0, -1)
)
for size_candidate_set in set_sizes:
for candidate_set in itertools.combinations(
filt_eligible_variables, size_candidate_set
):
check = self._graph.check_valid_backdoor_set(
treatment_name,
outcome_name,
candidate_set,
backdoor_paths=backdoor_paths,
new_graph=bdoor_graph,
dseparation_algo=dseparation_algo,
)
self.logger.debug(
"Candidate backdoor set: {0}, is_dseparated: {1}".format(
candidate_set, check["is_dseparated"]
)
)
if check["is_dseparated"]:
backdoor_sets.append({"backdoor_set": candidate_set})
found_valid_adjustment_set = True
num_iterations += 1
if (
method_name == CausalIdentifier.BACKDOOR_EXHAUSTIVE
and num_iterations > max_iterations
):
self.logger.warning(
f"Max number of iterations {max_iterations} reached."
)
break
# If the backdoor method is `maximal-adjustment` or `minimal-adjustment`, return the first found adjustment set.
if (
method_name
in {
CausalIdentifier.BACKDOOR_DEFAULT,
CausalIdentifier.BACKDOOR_MAX,
CausalIdentifier.BACKDOOR_MIN,
}
and found_valid_adjustment_set
):
break
# If all variables are observed, and the biggest eligible set
# does not satisfy backdoor, then none of its subsets will.
if (
method_name
in {CausalIdentifier.BACKDOOR_DEFAULT, CausalIdentifier.BACKDOOR_MAX}
and all_nodes_observed
):
break
if num_iterations > max_iterations:
self.logger.warning(
f"Max number of iterations {max_iterations} reached. Could not find a valid backdoor set."
)
break
return backdoor_sets, found_valid_adjustment_set
def get_default_backdoor_set_id(self, backdoor_sets_dict):
# Adding a None estimand if no backdoor set found
if len(backdoor_sets_dict) == 0:
return None
# Default set contains minimum possible number of instrumental variables, to prevent lowering variance in the treatment variable.
instrument_names = set(
self._graph.get_instruments(self.treatment_name, self.outcome_name)
)
iv_count_dict = {
key: len(set(bdoor_set).intersection(instrument_names))
for key, bdoor_set in backdoor_sets_dict.items()
}
min_iv_count = min(iv_count_dict.values())
min_iv_keys = {
key for key, iv_count in iv_count_dict.items() if iv_count == min_iv_count
}
min_iv_backdoor_sets_dict = {
key: backdoor_sets_dict[key] for key in min_iv_keys
}
# Default set is the one with the least number of adjustment variables (optimizing for efficiency)
min_set_length = 1000000
default_key = None
for key, bdoor_set in min_iv_backdoor_sets_dict.items():
if len(bdoor_set) < min_set_length:
min_set_length = len(bdoor_set)
default_key = key
return default_key
def build_backdoor_estimands_dict(
self,
treatment_name,
outcome_name,
backdoor_sets,
estimands_dict,
proceed_when_unidentifiable=None,
):
"""Build the final dict for backdoor sets by filtering unobserved variables if needed.
"""
backdoor_variables_dict = {}
if proceed_when_unidentifiable is None:
proceed_when_unidentifiable = self._proceed_when_unidentifiable
is_identified = [
self._graph.all_observed(bset["backdoor_set"]) for bset in backdoor_sets
]
if any(is_identified):
self.logger.info("Causal effect can be identified.")
backdoor_sets_arr = [
list(bset["backdoor_set"])
for bset in backdoor_sets
if self._graph.all_observed(bset["backdoor_set"])
]
else: # there is unobserved confounding
self.logger.warning("Backdoor identification failed.")
backdoor_sets_arr = []
for i in range(len(backdoor_sets_arr)):
backdoor_estimand_expr = self.construct_backdoor_estimand(
self.estimand_type, treatment_name, outcome_name, backdoor_sets_arr[i]
)
self.logger.debug("Identified expression = " + str(backdoor_estimand_expr))
estimands_dict["backdoor" + str(i + 1)] = backdoor_estimand_expr
backdoor_variables_dict["backdoor" + str(i + 1)] = backdoor_sets_arr[i]
return estimands_dict, backdoor_variables_dict
def identify_frontdoor(self, dseparation_algo="default"):
""" Find a valid frontdoor variable if it exists.
Currently only supports a single variable frontdoor set.
"""
frontdoor_var = None
frontdoor_paths = None
fdoor_graph = None
if dseparation_algo == "default":
cond1_graph = self._graph.do_surgery(
self.treatment_name, remove_incoming_edges=True
)
bdoor_graph1 = self._graph.do_surgery(
self.treatment_name, remove_outgoing_edges=True
)
elif dseparation_algo == "naive":
frontdoor_paths = self._graph.get_all_directed_paths(
self.treatment_name, self.outcome_name
)
else:
raise ValueError(
f"d-separation algorithm {dseparation_algo} is not supported"
)
eligible_variables = (
self._graph.get_descendants(self.treatment_name)
- set(self.outcome_name)
- set(self._graph.get_descendants(self.outcome_name))
)
# For simplicity, assuming a one-variable frontdoor set
for candidate_var in eligible_variables:
# Cond 1: All directed paths intercepted by candidate_var
cond1 = self._graph.check_valid_frontdoor_set(
self.treatment_name,
self.outcome_name,
parse_state(candidate_var),
frontdoor_paths=frontdoor_paths,
new_graph=cond1_graph,
dseparation_algo=dseparation_algo,
)
self.logger.debug(
"Candidate frontdoor set: {0}, is_dseparated: {1}".format(
candidate_var, cond1
)
)
if not cond1:
continue
# Cond 2: No confounding between treatment and candidate var
cond2 = self._graph.check_valid_backdoor_set(
self.treatment_name,
parse_state(candidate_var),
set(),
backdoor_paths=None,
new_graph=bdoor_graph1,
dseparation_algo=dseparation_algo,
)
if not cond2:
continue
# Cond 3: treatment blocks all confounding between candidate_var and outcome
bdoor_graph2 = self._graph.do_surgery(
candidate_var, remove_outgoing_edges=True
)
cond3 = self._graph.check_valid_backdoor_set(
parse_state(candidate_var),
self.outcome_name,
self.treatment_name,
backdoor_paths=None,
new_graph=bdoor_graph2,
dseparation_algo=dseparation_algo,
)
is_valid_frontdoor = cond1 and cond2 and cond3
if is_valid_frontdoor:
frontdoor_var = candidate_var
break
return parse_state(frontdoor_var)
def identify_mediation(self):
""" Find a valid mediator if it exists.
Currently only supports a single variable mediator set.
"""
mediation_var = None
mediation_paths = self._graph.get_all_directed_paths(
self.treatment_name, self.outcome_name
)
eligible_variables = self._graph.get_descendants(self.treatment_name) - set(
self.outcome_name
)
# For simplicity, assuming a one-variable mediation set
for candidate_var in eligible_variables:
is_valid_mediation = self._graph.check_valid_mediation_set(
self.treatment_name,
self.outcome_name,
parse_state(candidate_var),
mediation_paths=mediation_paths,
)
self.logger.debug(
"Candidate mediation set: {0}, on_mediating_path: {1}".format(
candidate_var, is_valid_mediation
)
)
if is_valid_mediation:
mediation_var = candidate_var
break
return parse_state(mediation_var)
return None
def identify_mediation_first_stage_confounders(
self, treatment_name, mediators_names
):
# Create estimands dict as per the API for backdoor, but do not return it
estimands_dict = {}
backdoor_sets = self.identify_backdoor(treatment_name, mediators_names)
estimands_dict, backdoor_variables_dict = self.build_backdoor_estimands_dict(
treatment_name,
mediators_names,
backdoor_sets,
estimands_dict,
proceed_when_unidentifiable=True,
)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = self.get_default_backdoor_set_id(backdoor_variables_dict)
estimands_dict["backdoor"] = estimands_dict.get(str(default_backdoor_id), None)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(
str(default_backdoor_id), None
)
return backdoor_variables_dict
def identify_mediation_second_stage_confounders(
self, mediators_names, outcome_name
):
# Create estimands dict as per the API for backdoor, but do not return it
estimands_dict = {}
backdoor_sets = self.identify_backdoor(mediators_names, outcome_name)
estimands_dict, backdoor_variables_dict = self.build_backdoor_estimands_dict(
mediators_names,
outcome_name,
backdoor_sets,
estimands_dict,
proceed_when_unidentifiable=True,
)
# Setting default "backdoor" identification adjustment set
default_backdoor_id = self.get_default_backdoor_set_id(backdoor_variables_dict)
estimands_dict["backdoor"] = estimands_dict.get(str(default_backdoor_id), None)
backdoor_variables_dict["backdoor"] = backdoor_variables_dict.get(
str(default_backdoor_id), None
)
return backdoor_variables_dict
def construct_backdoor_estimand(
self, estimand_type, treatment_name, outcome_name, common_causes
):
# TODO: outputs string for now, but ideally should do symbolic
# expressions Mon 19 Feb 2018 04:54:17 PM DST
# TODO Better support for multivariate treatments
expr = None
outcome_name = outcome_name[0]
num_expr_str = outcome_name
if len(common_causes) > 0:
num_expr_str += "|" + ",".join(common_causes)
expr = "d(" + num_expr_str + ")/d" + ",".join(treatment_name)
sym_mu = sp.Symbol("mu")
sym_sigma = sp.Symbol("sigma", positive=True)
sym_outcome = spstats.Normal(num_expr_str, sym_mu, sym_sigma)
sym_treatment_symbols = [sp.Symbol(t) for t in treatment_name]
sym_treatment = sp.Array(sym_treatment_symbols)
sym_conditional_outcome = spstats.Expectation(sym_outcome)
sym_effect = sp.Derivative(sym_conditional_outcome, sym_treatment)
sym_assumptions = {
"Unconfoundedness": (
"If U\N{RIGHTWARDS ARROW}{{{0}}} and U\N{RIGHTWARDS ARROW}{1}"
" then P({1}|{0},{2},U) = P({1}|{0},{2})"
).format(",".join(treatment_name), outcome_name, ",".join(common_causes))
}
estimand = {"estimand": sym_effect, "assumptions": sym_assumptions}
return estimand
def construct_iv_estimand(
self, estimand_type, treatment_name, outcome_name, instrument_names
):
# TODO: support multivariate treatments better.
expr = None
outcome_name = outcome_name[0]
sym_outcome = spstats.Normal(outcome_name, 0, 1)
sym_treatment_symbols = [spstats.Normal(t, 0, 1) for t in treatment_name]
sym_treatment = sp.Array(sym_treatment_symbols)
sym_instrument_symbols = [sp.Symbol(inst) for inst in instrument_names]
sym_instrument = sp.Array(sym_instrument_symbols) # ",".join(instrument_names))
sym_outcome_derivative = sp.Derivative(sym_outcome, sym_instrument)
sym_treatment_derivative = sp.Derivative(sym_treatment, sym_instrument)
sym_effect = spstats.Expectation(
sym_outcome_derivative / sym_treatment_derivative
)
sym_assumptions = {
"As-if-random": (
"If U\N{RIGHTWARDS ARROW}\N{RIGHTWARDS ARROW}{0} then "
"\N{NOT SIGN}(U \N{RIGHTWARDS ARROW}\N{RIGHTWARDS ARROW}{{{1}}})"
).format(outcome_name, ",".join(instrument_names)),
"Exclusion": (
"If we remove {{{0}}}\N{RIGHTWARDS ARROW}{{{1}}}, then "
"\N{NOT SIGN}({{{0}}}\N{RIGHTWARDS ARROW}{2})"
).format(
",".join(instrument_names), ",".join(treatment_name), outcome_name
),
}
estimand = {"estimand": sym_effect, "assumptions": sym_assumptions}
return estimand
def construct_frontdoor_estimand(
self, estimand_type, treatment_name, outcome_name, frontdoor_variables_names
):
# TODO: support multivariate treatments better.
expr = None
outcome_name = outcome_name[0]
sym_outcome = spstats.Normal(outcome_name, 0, 1)
sym_treatment_symbols = [spstats.Normal(t, 0, 1) for t in treatment_name]
sym_treatment = sp.Array(sym_treatment_symbols)
sym_frontdoor_symbols = [sp.Symbol(inst) for inst in frontdoor_variables_names]
sym_frontdoor = sp.Array(sym_frontdoor_symbols) # ",".join(instrument_names))
sym_outcome_derivative = sp.Derivative(sym_outcome, sym_frontdoor)
sym_treatment_derivative = sp.Derivative(sym_frontdoor, sym_treatment)
sym_effect = spstats.Expectation(
sym_treatment_derivative * sym_outcome_derivative
)
sym_assumptions = {
"Full-mediation": (
"{2} intercepts (blocks) all directed paths from {0} to {1}."
).format(
",".join(treatment_name),
",".join(outcome_name),
",".join(frontdoor_variables_names),
),
"First-stage-unconfoundedness": (
"If U\N{RIGHTWARDS ARROW}{{{0}}} and U\N{RIGHTWARDS ARROW}{{{1}}}"
" then P({1}|{0},U) = P({1}|{0})"
).format(",".join(treatment_name), ",".join(frontdoor_variables_names)),
"Second-stage-unconfoundedness": (
"If U\N{RIGHTWARDS ARROW}{{{2}}} and U\N{RIGHTWARDS ARROW}{1}"
" then P({1}|{2}, {0}, U) = P({1}|{2}, {0})"
).format(
",".join(treatment_name),
outcome_name,
",".join(frontdoor_variables_names),
),
}
estimand = {"estimand": sym_effect, "assumptions": sym_assumptions}
return estimand
def construct_mediation_estimand(
self, estimand_type, treatment_name, outcome_name, mediators_names
):
# TODO: support multivariate treatments better.
expr = None
if estimand_type in (
CausalIdentifier.NONPARAMETRIC_NDE,
CausalIdentifier.NONPARAMETRIC_NIE,
):
outcome_name = outcome_name[0]
sym_outcome = spstats.Normal(outcome_name, 0, 1)
sym_treatment_symbols = [spstats.Normal(t, 0, 1) for t in treatment_name]
sym_treatment = sp.Array(sym_treatment_symbols)
sym_mediators_symbols = [sp.Symbol(inst) for inst in mediators_names]
sym_mediators = sp.Array(sym_mediators_symbols)
sym_outcome_derivative = sp.Derivative(sym_outcome, sym_mediators)
sym_treatment_derivative = sp.Derivative(sym_mediators, sym_treatment)
# For direct effect
num_expr_str = outcome_name
if len(mediators_names) > 0:
num_expr_str += "|" + ",".join(mediators_names)
sym_mu = sp.Symbol("mu")
sym_sigma = sp.Symbol("sigma", positive=True)
sym_conditional_outcome = spstats.Normal(num_expr_str, sym_mu, sym_sigma)
sym_directeffect_derivative = sp.Derivative(
sym_conditional_outcome, sym_treatment
)
if estimand_type == CausalIdentifier.NONPARAMETRIC_NIE:
sym_effect = spstats.Expectation(
sym_treatment_derivative * sym_outcome_derivative
)
elif estimand_type == CausalIdentifier.NONPARAMETRIC_NDE:
sym_effect = spstats.Expectation(sym_directeffect_derivative)
sym_assumptions = {
"Mediation": (
"{2} intercepts (blocks) all directed paths from {0} to {1} except the path {{{0}}}\N{RIGHTWARDS ARROW}{{{1}}}."
).format(
",".join(treatment_name),
",".join(outcome_name),
",".join(mediators_names),
),
"First-stage-unconfoundedness": (
"If U\N{RIGHTWARDS ARROW}{{{0}}} and U\N{RIGHTWARDS ARROW}{{{1}}}"
" then P({1}|{0},U) = P({1}|{0})"
).format(",".join(treatment_name), ",".join(mediators_names)),
"Second-stage-unconfoundedness": (
"If U\N{RIGHTWARDS ARROW}{{{2}}} and U\N{RIGHTWARDS ARROW}{1}"
" then P({1}|{2}, {0}, U) = P({1}|{2}, {0})"
).format(
",".join(treatment_name), outcome_name, ",".join(mediators_names)
),
}
else:
raise ValueError(
"Estimand type not supported. Supported estimand types are {0} or {1}'.".format(
CausalIdentifier.NONPARAMETRIC_NDE,
CausalIdentifier.NONPARAMETRIC_NIE,
)
)
estimand = {"estimand": sym_effect, "assumptions": sym_assumptions}
return estimand
class IdentifiedEstimand:
"""Class for storing a causal estimand, typically as a result of the identification step.
"""
def __init__(
self,
identifier,
treatment_variable,
outcome_variable,
estimand_type=None,
estimands=None,
backdoor_variables=None,
instrumental_variables=None,
frontdoor_variables=None,
mediator_variables=None,
mediation_first_stage_confounders=None,
mediation_second_stage_confounders=None,
default_backdoor_id=None,
identifier_method=None,
no_directed_path=False,
):
self.identifier = identifier
self.treatment_variable = parse_state(treatment_variable)
self.outcome_variable = parse_state(outcome_variable)
self.backdoor_variables = backdoor_variables
self.instrumental_variables = parse_state(instrumental_variables)
self.frontdoor_variables = parse_state(frontdoor_variables)
self.mediator_variables = parse_state(mediator_variables)
self.mediation_first_stage_confounders = mediation_first_stage_confounders
self.mediation_second_stage_confounders = mediation_second_stage_confounders
self.estimand_type = estimand_type
self.estimands = estimands
self.default_backdoor_id = default_backdoor_id
self.identifier_method = identifier_method
self.no_directed_path = no_directed_path
def set_identifier_method(self, identifier_name):
self.identifier_method = identifier_name
def get_backdoor_variables(self, key=None):
""" Return a list containing the backdoor variables.
If the calling estimator method is a backdoor method, return the
backdoor variables corresponding to its target estimand.
Otherwise, return the backdoor variables for the default backdoor estimand.
"""
if key is None:
if self.identifier_method and self.identifier_method.startswith("backdoor"):
return self.backdoor_variables[self.identifier_method]
elif (
self.backdoor_variables is not None and len(self.backdoor_variables) > 0
):
return self.backdoor_variables[self.default_backdoor_id]
else:
return []
else:
return self.backdoor_variables[key]
def set_backdoor_variables(self, bdoor_variables_arr, key=None):
if key is None:
key = self.identifier_method
self.backdoor_variables[key] = bdoor_variables_arr
def get_frontdoor_variables(self):
"""Return a list containing the frontdoor variables (if present)
"""
return self.frontdoor_variables
def get_mediator_variables(self):
"""Return a list containing the mediator variables (if present)
"""
return self.mediator_variables
def get_instrumental_variables(self):
"""Return a list containing the instrumental variables (if present)
"""
return self.instrumental_variables
def __deepcopy__(self, memo):
return IdentifiedEstimand(
self.identifier, # not deep copied
copy.deepcopy(self.treatment_variable),
copy.deepcopy(self.outcome_variable),
estimand_type=copy.deepcopy(self.estimand_type),
estimands=copy.deepcopy(self.estimands),
backdoor_variables=copy.deepcopy(self.backdoor_variables),
instrumental_variables=copy.deepcopy(self.instrumental_variables),
frontdoor_variables=copy.deepcopy(self.frontdoor_variables),
mediator_variables=copy.deepcopy(self.mediator_variables),
default_backdoor_id=copy.deepcopy(self.default_backdoor_id),
identifier_method=copy.deepcopy(self.identifier_method),
)
def __str__(self, only_target_estimand=False, show_all_backdoor_sets=False):
if self.no_directed_path:
s = "No directed path from {0} to {1} in the causal graph.".format(
self.treatment_variable, self.outcome_variable
)
s += "\nCausal effect is zero."
return s
s = "Estimand type: {0}\n".format(self.estimand_type)
i = 1
has_valid_backdoor = sum("backdoor" in key for key in self.estimands.keys())
for k, v in self.estimands.items():
if show_all_backdoor_sets:
# Do not show backdoor key unless it is the only backdoor set.
if k == "backdoor" and has_valid_backdoor > 1:
continue
else:
# Just show the default backdoor set
if k.startswith("backdoor") and k != "backdoor":
continue
if only_target_estimand and k != self.identifier_method:
continue
s += "\n### Estimand : {0}\n".format(i)
s += "Estimand name: {0}".format(k)
if k == self.default_backdoor_id:
s += " (Default)"
s += "\n"
if v is None:
s += "No such variable(s) found!\n"
else:
sp_expr_str = sp.pretty(v["estimand"], use_unicode=True)
s += "Estimand expression:\n{0}\n".format(sp_expr_str)
j = 1
for ass_name, ass_str in v["assumptions"].items():
s += "Estimand assumption {0}, {1}: {2}\n".format(
j, ass_name, ass_str
)
j += 1
i += 1
return s
| esmucler | 77f7064c85b3de740b5beefb0ad7067ffdb024fd | 824f5532c0644812867bb018bf22585f2466960c | See my answer in the [comment](https://github.com/py-why/dowhy/pull/549#issuecomment-1188730376) below | esmucler | 374 |
py-why/dowhy | 478 | Adding Non Linear Sensitivity Analysis | This PR implements the non-parametric sensitivity analysis from Chernozhukov et al. https://arxiv.org/abs/2112.13398
It implements two sensitivity analyzers:
1. For Partial Linear DGPs and estimators like LinearDML
2. For general non-parametric DGPs and estimators like KernelDML.
The notebook in this PR provides an introduction to how the sensitivity bounds are calculated for the partial linear case. For general non-parametric DGPs, we need to estimate a special function called the Riesz representer. For binary treatment, it is exactly the difference in outcome weighted by the propensity score. So we provide two options to learn the Riesz representer: 1) plugin_reisz, which uses the propensity score; and 2) a general estimator that uses a custom loss function. These two are in the file reisz.py.
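For concreteness (this is the standard form of the representer for the ATE functional, written with generic symbols rather than names from the code): for a binary treatment T with propensity score e(W) = P(T=1|W), the plug-in Riesz representer is alpha(T, W) = T/e(W) - (1-T)/(1-e(W)), so that E[Y * alpha(T, W)] equals the average treatment effect.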
Briefly, the sensitivity bounds depend on two parameters that denote the effect of the unobserved confounder on treatment and outcome. That is why we use the same API as the `add_unobserved_common_cause` method and add this sensitivity analysis as a possible simulation method="non-parametric-partial-R2". The format of the plots is identical to that of the already implemented "linear-partial-r2" simulation method.
We provide two modes for the user (a usage sketch follows the list below).
1) User specifies the effect strength parameters themselves, as a range of values.
2) User benchmarks the effect strength parameters as a multiple of the same parameters for the observed common causes.
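As a rough usage sketch of the benchmarking mode (illustrative only: it assumes a fitted CausalModel named `model`, an `identified_estimand`, an `estimate`, and an observed common cause named "W0", and it reuses keyword names already documented in this refuter, which may differ slightly in the final API):

refutation = model.refute_estimate(
    identified_estimand,
    estimate,
    method_name="add_unobserved_common_cause",
    simulated_method_name="non-parametric-partial-R2",  # simulation method added by this PR
    benchmark_common_causes=["W0"],   # observed variable(s) used for benchmarking
    effect_fraction_on_treatment=3,   # confounder strength as a multiple of W0's strength on treatment
    effect_fraction_on_outcome=3,     # confounder strength as a multiple of W0's strength on outcome
)
print(refutation)

In the first mode, the user would instead pass the effect strength (partial R^2) parameters directly as a range of values.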
Signed-off-by: anusha <anushaagarwal2000.com> | null | 2022-06-20 14:37:11+00:00 | 2022-09-16 03:57:26+00:00 | dowhy/causal_refuters/add_unobserved_common_cause.py | import copy
import logging
import math
import numpy as np
import pandas as pd
import scipy.stats
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from tqdm.auto import tqdm
from dowhy.causal_estimator import CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_refuter import CausalRefutation, CausalRefuter
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer
class AddUnobservedCommonCause(CausalRefuter):
"""Add an unobserved confounder for refutation.
Supports additional parameters that can be specified in the refute_estimate() method.
- 'confounders_effect_on_treatment': how the simulated confounder affects the value of treatment. This can be linear (for continuous treatment) or binary_flip (for binary treatment)
- 'confounders_effect_on_outcome': how the simulated confounder affects the value of outcome. This can be linear (for continuous outcome) or binary_flip (for binary outcome)
    - 'effect_strength_on_treatment': parameter for the strength of the effect of the simulated confounder on treatment. For a linear effect, it is the regression coefficient. For binary_flip, it is the probability that the simulated confounder's effect flips the value of treatment from 0 to 1 (or vice-versa).
    - 'effect_strength_on_outcome': parameter for the strength of the effect of the simulated confounder on outcome. For a linear effect, it is the regression coefficient. For binary_flip, it is the probability that the simulated confounder's effect flips the value of outcome from 0 to 1 (or vice-versa).
TODO: Needs an interpretation module
"""
def __init__(self, *args, **kwargs):
"""
Initialize the parameters required for the refuter.
If effect_strength_on_treatment or effect_strength_on_outcome is not
given, it is calculated automatically as a range between the
minimum and maximum effect strength of observed confounders on treatment
and outcome respectively.
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param effect_strength_on_treatment: float, numpy.ndarray: This refers to the strength of the confounder on treatment. For a linear effect, it behaves like the regression coefficient. For a binary flip, it is the probability with which it can invert the value of the treatment.
:param effect_strength_on_outcome: float, numpy.ndarray: This refers to the strength of the confounder on outcome. For a linear effect, it behaves like the regression coefficient. For a binary flip, it is the probability with which it can invert the value of the outcome.
:param effect_fraction_on_treatment: float: If effect_strength_on_treatment is not provided, this parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param effect_fraction_on_outcome: float: If effect_strength_on_outcome is not provided, this parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one value is provided for the treatment confounder effect or the outcome confounder effect. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:param simulated_method_name: method type to add unobserved common cause. "linear-partial-R2" for linear sensitivity analysis
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1)
if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome in order to reduce the estimate by 100% i.e bring it down to 0.
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = False)
:param benchmark_common_causes: names of variables for bounding strength of confounders
:param significance_level: confidence interval for statistical inference(default = 0.05)
:param null_hypothesis_effect: assumed effect under the null hypothesis
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
To override the setting, set plot_estimate = False.
"""
super().__init__(*args, **kwargs)
self.effect_on_t = (
kwargs["confounders_effect_on_treatment"] if "confounders_effect_on_treatment" in kwargs else "binary_flip"
)
self.effect_on_y = (
kwargs["confounders_effect_on_outcome"] if "confounders_effect_on_outcome" in kwargs else "linear"
)
self.kappa_t = kwargs["effect_strength_on_treatment"] if "effect_strength_on_treatment" in kwargs else None
self.kappa_y = kwargs["effect_strength_on_outcome"] if "effect_strength_on_outcome" in kwargs else None
self.frac_strength_treatment = (
kwargs["effect_fraction_on_treatment"] if "effect_fraction_on_treatment" in kwargs else 1
)
self.frac_strength_outcome = (
kwargs["effect_fraction_on_outcome"] if "effect_fraction_on_outcome" in kwargs else 1
)
self.simulated_method_name = (
kwargs["simulated_method_name"] if "simulated_method_name" in kwargs else "linear_based"
)
self.plotmethod = kwargs["plotmethod"] if "plotmethod" in kwargs else "colormesh"
self.percent_change_estimate = kwargs["percent_change_estimate"] if "percent_change_estimate" in kwargs else 1.0
self.significance_level = kwargs["significance_level"] if "significance_level" in kwargs else 0.05
self.confounder_increases_estimate = (
kwargs["confounder_increases_estimate"] if "confounder_increases_estimate" in kwargs else False
)
self.benchmark_common_causes = (
kwargs["benchmark_common_causes"] if "benchmark_common_causes" in kwargs else None
)
self.null_hypothesis_effect = kwargs["null_hypothesis_effect"] if "null_hypothesis_effect" in kwargs else 0
self.plot_estimate = kwargs["plot_estimate"] if "plot_estimate" in kwargs else True
self.logger = logging.getLogger(__name__)
def infer_default_kappa_t(self, len_kappa_t=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_t."
+ " Provide a value for kappa_t"
)
t = self._data[self._treatment_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_t == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
tmodel = LogisticRegression().fit(observed_common_causes, t)
tpred = tmodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
tcap = tmodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(tcap - tpred)) / tpred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_t == "linear":
# Estimating the regression coefficient from standardized features to t
corrcoef_var_t = np.corrcoef(observed_common_causes, t, rowvar=False)[-1, :-1]
std_dev_t = np.std(t)[0]
max_coeff = max(corrcoef_var_t) * std_dev_t
min_coeff = min(corrcoef_var_t) * std_dev_t
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_treatment)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_t
self.logger.info("(Min, Max) kappa_t for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _compute_min_max_coeff(self, min_coeff, max_coeff, effect_strength_fraction):
max_coeff = effect_strength_fraction * max_coeff
min_coeff = effect_strength_fraction * min_coeff
return min_coeff, max_coeff
def infer_default_kappa_y(self, len_kappa_y=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_y."
+ " Provide a value for kappa_y"
)
y = self._data[self._outcome_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_y == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
ymodel = LogisticRegression().fit(observed_common_causes, y)
ypred = ymodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
ycap = ymodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(ycap - ypred)) / ypred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_y == "linear":
corrcoef_var_y = np.corrcoef(observed_common_causes, y, rowvar=False)[-1, :-1]
std_dev_y = np.std(y)[0]
max_coeff = max(corrcoef_var_y) * std_dev_y
min_coeff = min(corrcoef_var_y) * std_dev_y
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_outcome)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_y
self.logger.info("(Min, Max) kappa_y for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def refute_estimate(self, show_progress_bar=False):
"""
This function attempts to add an unobserved common cause to the outcome and the treatment. At present, the behavior is implemented for one-dimensional continuous
and binary variables. This function can take either single-valued inputs or a range of inputs. The function then looks at the data type of the input and decides on the course of
action.
:return: CausalRefuter: An object that contains the estimated effect and a new effect and the name of the refutation used.
"""
if self.simulated_method_name == "linear-partial-R2":
if not (isinstance(self._estimate.estimator, LinearRegressionEstimator)):
raise NotImplementedError(
"Currently only LinearRegressionEstimator is supported for Sensitivity Analysis"
)
if len(self._estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
if self.frac_strength_outcome == 1:
self.frac_strength_outcome = self.frac_strength_treatment
analyzer = LinearSensitivityAnalyzer(
estimator=self._estimate.estimator,
data=self._data,
treatment_name=self._treatment_name,
percent_change_estimate=self.percent_change_estimate,
significance_level=self.significance_level,
benchmark_common_causes=self.benchmark_common_causes,
null_hypothesis_effect=self.null_hypothesis_effect,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
common_causes_order=self._estimate.estimator._observed_common_causes.columns,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.kappa_t is None:
self.kappa_t = self.infer_default_kappa_t()
if self.kappa_y is None:
self.kappa_y = self.infer_default_kappa_y()
if not isinstance(self.kappa_t, (list, np.ndarray)) and not isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with single value inputs
new_data = copy.deepcopy(self._data)
new_data = self.include_confounders_effect(new_data, self.kappa_t, self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(new_data, self._target_estimand, self._estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
refute.new_effect_array = np.array(new_effect.value)
refute.new_effect = new_effect.value
refute.add_refuter(self)
return refute
else: # Deal with multiple value inputs
if isinstance(self.kappa_t, (list, np.ndarray)) and isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with range inputs
# Get a 2D matrix of values
# x,y = np.meshgrid(self.kappa_t, self.kappa_y) # x,y are both MxN
results_matrix = np.random.rand(
len(self.kappa_t), len(self.kappa_y)
) # Matrix to hold all the results of NxM
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
for j in range(len(self.kappa_y)):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y[j])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value,
new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause",
)
results_matrix[i][j] = refute.new_effect # Populate the results
refute.new_effect_array = results_matrix
refute.new_effect = (np.min(results_matrix), np.max(results_matrix))
# Store the values into the refute object
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
oe = self._estimate.value
contour_levels = [oe / 4.0, oe / 2.0, (3.0 / 4) * oe, oe]
contour_levels.extend([0, np.min(results_matrix), np.max(results_matrix)])
if self.plotmethod == "contour":
cp = plt.contourf(self.kappa_y, self.kappa_t, results_matrix, levels=sorted(contour_levels))
# Adding a label on the contour line for the original estimate
fmt = {}
trueeffect_index = np.where(cp.levels == oe)[0][0]
fmt[cp.levels[trueeffect_index]] = "Estimated Effect"
# Label every other level using strings
plt.clabel(cp, [cp.levels[trueeffect_index]], inline=True, fmt=fmt)
plt.colorbar(cp)
elif self.plotmethod == "colormesh":
cp = plt.pcolormesh(self.kappa_y, self.kappa_t, results_matrix, shading="nearest")
plt.colorbar(cp, ticks=contour_levels)
ax.yaxis.set_ticks(self.kappa_t)
ax.xaxis.set_ticks(self.kappa_y)
plt.xticks(rotation=45)
ax.set_title("Effect of Unobserved Common Cause")
ax.set_ylabel("Value of Linear Constant on Treatment")
ax.set_xlabel("Value of Linear Constant on Outcome")
plt.show()
return refute
elif isinstance(self.kappa_t, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_t))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_t, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Treatment")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
elif isinstance(self.kappa_y, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_y))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_y)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t, self.kappa_y[i])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_y, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Outcome")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
def include_confounders_effect(self, new_data, kappa_t, kappa_y):
"""
This function deals with the change in the value of the data due to the effect of the unobserved confounder.
In the case of a binary flip, we flip only if the random number is greater than the threshold set.
In the case of a linear effect, we use the variable as the linear regression constant.
:param new_data: pandas.DataFrame: The data to be changed due to the effects of the unobserved confounder.
:param kappa_t: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:param kappa_y: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:return: pandas.DataFrame: The DataFrame that includes the effects of the unobserved confounder.
"""
num_rows = self._data.shape[0]
stdnorm = scipy.stats.norm()
w_random = stdnorm.rvs(num_rows)
if self.effect_on_t == "binary_flip":
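# kappa_t is interpreted as a flip probability: the cutoff below is chosen so that
# P(w_random >= cutoff) = kappa_t under the standard normal, and exactly those rows
# have their treatment value flipped.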
alpha = 2 * kappa_t - 1 if kappa_t >= 0.5 else 1 - 2 * kappa_t
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_t >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._treatment_name] = (
1 - new_data.loc[rel_interval <= w_random, self._treatment_name]
)
for tname in self._treatment_name:
if pd.api.types.is_bool_dtype(self._data[tname]):
new_data = new_data.astype({tname: "bool"}, copy=False)
elif self.effect_on_t == "linear":
confounder_t_effect = kappa_t * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._treatment_name] = new_data[self._treatment_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_t_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
if self.effect_on_y == "binary_flip":
alpha = 2 * kappa_y - 1 if kappa_y >= 0.5 else 1 - 2 * kappa_y
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_y >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._outcome_name] = (
1 - new_data.loc[rel_interval <= w_random, self._outcome_name]
)
for yname in self._outcome_name:
if pd.api.types.is_bool_dtype(self._data[yname]):
new_data = new_data.astype({yname: "bool"}, copy=False)
elif self.effect_on_y == "linear":
confounder_y_effect = (-1) * kappa_y * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._outcome_name] = new_data[self._outcome_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_y_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
return new_data
def include_simulated_confounder(self, convergence_threshold=0.1, c_star_max=1000):
"""
This function simulates an unobserved confounder based on the data using the following steps:
1. It calculates the "residuals" from the treatment and outcome model
i.) The outcome model has outcome as the dependent variable and all the observed variables including treatment as independent variables
ii.) The treatment model has treatment as the dependent variable and all the observed variables as independent variables.
2. U is an intermediate random variable drawn from the normal distribution with the weighted average of residuals as mean and a unit variance
U ~ N(c1*d_y + c2*d_t, 1)
where
*d_y and d_t are residuals from the treatment and outcome model
*c1 and c2 are coefficients to the residuals
3. The final U, which is the simulated unobserved confounder is obtained by debiasing the intermediate variable U by residualising it with X
Choosing the coefficients c1 and c2:
The coefficients are chosen based on these basic assumptions:
1. There is a hyperbolic relationship satisfying c1*c2 = c_star
2. c_star is chosen from a range of possible values based on the correlation of the obtained simulated variable with outcome and treatment.
3. The product of correlations with treatment and outcome should be at a minimum distance to the maximum correlations with treatment and outcome in any of the observed confounders
4. The ratio of the weights should be such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
:param c_star_max: The maximum possible value for the hyperbolic curve on which the coefficients to the residuals lie. It defaults to 1000 in the code if not specified by the user.
:type int
:param convergence_threshold: The threshold to check the plateauing of the correlation while selecting a c_star. It defaults to 0.1 in the code if not specified by the user
:type float
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
# Obtaining the list of observed variables
required_variables = True
observed_variables = self.choose_variables(required_variables)
observed_variables_with_treatment_and_outcome = observed_variables + self._treatment_name + self._outcome_name
# Taking a subset of the dataframe that has only observed variables
self._data = self._data[observed_variables_with_treatment_and_outcome]
# Residuals from the outcome model obtained by fitting a linear model
y = self._data[self._outcome_name[0]]
observed_variables_with_treatment = observed_variables + self._treatment_name
X = self._data[observed_variables_with_treatment]
model = sm.OLS(y, X.astype("float"))
results = model.fit()
residuals_y = y - results.fittedvalues
d_y = list(pd.Series(residuals_y))
# Residuals from the treatment model obtained by fitting a linear model
t = self._data[self._treatment_name[0]].astype("int64")
X = self._data[observed_variables]
model = sm.OLS(t, X)
results = model.fit()
residuals_t = t - results.fittedvalues
d_t = list(pd.Series(residuals_t))
# Initialising product_cor_metric_observed with a really low value as finding maximum
product_cor_metric_observed = -10000000000
for i in observed_variables:
current_obs_confounder = self._data[i]
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_obs_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_obs_confounder.corr(treatment_values)
product_cor_metric_current = correlation_y * correlation_t
if product_cor_metric_current >= product_cor_metric_observed:
product_cor_metric_observed = product_cor_metric_current
correlation_t_observed = correlation_t
correlation_y_observed = correlation_y
# The user has an option to give the effect_strength_on_y and effect_strength_on_t, which can then be used instead of the maximum correlation with treatment and outcome in the observed variables, as it specifies the desired effect.
if self.kappa_t is not None:
correlation_t_observed = self.kappa_t
if self.kappa_y is not None:
correlation_y_observed = self.kappa_y
# Choosing a c_star based on the data.
# The correlations stop increasing upon increasing c_star after a certain value, that is it plateaus and we choose the value of c_star to be the value it plateaus.
correlation_y_list = []
correlation_t_list = []
product_cor_metric_simulated_list = []
x_list = []
step = int(c_star_max / 10)
for i in range(0, int(c_star_max), step):
c1 = math.sqrt(i)
c2 = c1
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
correlation_y_list.append(correlation_y)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
correlation_t_list.append(correlation_t)
product_cor_metric_simulated = correlation_y * correlation_t
product_cor_metric_simulated_list.append(product_cor_metric_simulated)
x_list.append(i)
index = 1
while index < len(correlation_y_list):
if (correlation_y_list[index] - correlation_y_list[index - 1]) <= convergence_threshold:
c_star = x_list[index]
break
index = index + 1
# Choosing c1 and c2 based on the hyperbolic relationship once c_star is chosen, by going over various combinations of c1 and c2 values and choosing the combination
# which maintains the minimum distance between the product of correlations of the simulated variable and the product of maximum correlations of one of the observed variables
# and additionally checks if the ratio of the weights is such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
# c1_final and c2_final are initialised to the values on the hyperbolic curve such that c1_final = c2_final and c1_final*c2_final = c_star
c1_final = math.sqrt(c_star)
c2_final = math.sqrt(c_star)
# initialising min_distance_between_product_cor_metrics to be a value greater than 1
min_distance_between_product_cor_metrics = 1.5
i = 0.05
threshold = c_star / 0.05
while i <= threshold:
c2 = i
c1 = c_star / c2
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
product_cor_metric_simulated = correlation_y * correlation_t
if min_distance_between_product_cor_metrics >= abs(
product_cor_metric_simulated - product_cor_metric_observed
):
min_distance_between_product_cor_metrics = abs(
product_cor_metric_simulated - product_cor_metric_observed
)
additional_condition = correlation_y_observed / correlation_t_observed
if ((c1 / c2) <= (additional_condition + 0.3 * additional_condition)) and (
(c1 / c2) >= (additional_condition - 0.3 * additional_condition)
): # choose minimum positive value
c1_final = c1
c2_final = c2
i = i * 1.5
"""#closed form solution
print("c_star_max before closed form", c_star_max)
if max_correlation_with_t == -1000:
c2 = 0
c1 = c_star_max
else:
additional_condition = abs(max_correlation_with_y/max_correlation_with_t)
print("additional_condition", additional_condition)
c2 = math.sqrt(c_star_max/additional_condition)
c1 = c_star_max/c2"""
final_U = self.generate_confounder_from_residuals(c1_final, c2_final, d_y, d_t, X)
return final_U
def generate_confounder_from_residuals(self, c1, c2, d_y, d_t, X):
"""
This function takes the residuals from the treatment and outcome model and their coefficients and simulates the intermediate random variable U by taking
the row wise normal distribution corresponding to each residual value and then debiasing the intermediate variable to get the final variable.
:param c1: coefficient to the residual from the outcome model
:type float
:param c2: coefficient to the residual from the treatment model
:type float
:param d_y: residuals from the outcome model
:type list
:param d_t: residuals from the treatment model
:type list
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
U = []
for j in range(len(d_t)):
simulated_variable_mean = c1 * d_y[j] + c2 * d_t[j]
simulated_variable_stddev = 1
U.append(np.random.normal(simulated_variable_mean, simulated_variable_stddev, 1))
U = np.array(U)
model = sm.OLS(U, X)
results = model.fit()
U = U.reshape(
-1,
)
final_U = U - results.fittedvalues.values
final_U = pd.Series(final_U)
return final_U
| import copy
import logging
import math
import numpy as np
import pandas as pd
import scipy.stats
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from tqdm.auto import tqdm
import dowhy.causal_estimators.econml
from dowhy.causal_estimator import CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_refuter import CausalRefutation, CausalRefuter
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer
from dowhy.causal_refuters.non_parametric_sensitivity_analyzer import NonParametricSensitivityAnalyzer
from dowhy.causal_refuters.partial_linear_sensitivity_analyzer import PartialLinearSensitivityAnalyzer
class AddUnobservedCommonCause(CausalRefuter):
"""Add an unobserved confounder for refutation.
AddUnobservedCommonCause class supports three methods:
1) Simulation of an unobserved confounder
2) Linear partial R2 : Sensitivity Analysis for linear models.
3) Non-Parametric partial R2 based : Sensitivity Analysis for non-parametric models.
Supports additional parameters that can be specified in the refute_estimate() method.
"""
def __init__(self, *args, **kwargs):
"""
Initialize the parameters required for the refuter.
For direct_simulation, if effect_strength_on_treatment or effect_strength_on_outcome is not
given, it is calculated automatically as a range between the
minimum and maximum effect strength of observed confounders on treatment
and outcome respectively.
:param simulation_method: The method to use for simulating effect of unobserved confounder. Possible values are ["direct-simulation", "linear-partial-R2", "non-parametric-partial-R2"].
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param effect_strength_on_treatment: float, numpy.ndarray: [Used when simulation_method="direct-simulation"] Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When confounders_effect_on_treatment is binary_flip, it is the probability with which the effect of the unobserved confounder can invert the value of the treatment.
:param effect_strength_on_outcome: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param partial_r2_confounder_treatment: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param partial_r2_confounder_outcome: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one value is provided for the treatment confounder effect or the outcome confounder effect. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome in order to reduce the estimate by 100% i.e bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = False). (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param significance_level: confidence interval for statistical inference(default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param num_splits: number of splits for cross validation. (default = 5). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_data : shuffle data or not before splitting into folds (default = False). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_random_seed: seed for randomly shuffling data. (relevant only for non-parametric-partial-R2 simulation method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
"""
super().__init__(*args, **kwargs)
self.simulation_method = kwargs["simulation_method"] if "simulation_method" in kwargs else "direct-simulation"
self.effect_on_t = (
kwargs["confounders_effect_on_treatment"] if "confounders_effect_on_treatment" in kwargs else "binary_flip"
)
self.effect_on_y = (
kwargs["confounders_effect_on_outcome"] if "confounders_effect_on_outcome" in kwargs else "linear"
)
if self.simulation_method == "direct-simulation":
self.kappa_t = kwargs["effect_strength_on_treatment"] if "effect_strength_on_treatment" in kwargs else None
self.kappa_y = kwargs["effect_strength_on_outcome"] if "effect_strength_on_outcome" in kwargs else None
elif self.simulation_method in ["linear-partial-R2", "non-parametric-partial-R2"]:
self.kappa_t = (
kwargs["partial_r2_confounder_treatment"] if "partial_r2_confounder_treatment" in kwargs else None
)
self.kappa_y = (
kwargs["partial_r2_confounder_outcome"] if "partial_r2_confounder_outcome" in kwargs else None
)
else:
raise ValueError(
"simulation method is not supported. Try direct-simulation, linear-partial-R2 or non-parametric-partial-R2"
)
self.frac_strength_treatment = (
kwargs["effect_fraction_on_treatment"] if "effect_fraction_on_treatment" in kwargs else 1
)
self.frac_strength_outcome = (
kwargs["effect_fraction_on_outcome"] if "effect_fraction_on_outcome" in kwargs else 1
)
self.plotmethod = kwargs["plotmethod"] if "plotmethod" in kwargs else "colormesh"
self.percent_change_estimate = kwargs["percent_change_estimate"] if "percent_change_estimate" in kwargs else 1.0
self.significance_level = kwargs["significance_level"] if "significance_level" in kwargs else 0.05
self.confounder_increases_estimate = (
kwargs["confounder_increases_estimate"] if "confounder_increases_estimate" in kwargs else False
)
self.benchmark_common_causes = (
kwargs["benchmark_common_causes"] if "benchmark_common_causes" in kwargs else None
)
self.null_hypothesis_effect = kwargs["null_hypothesis_effect"] if "null_hypothesis_effect" in kwargs else 0
self.plot_estimate = kwargs["plot_estimate"] if "plot_estimate" in kwargs else True
self.num_splits = kwargs["num_splits"] if "num_splits" in kwargs else 5
self.shuffle_data = kwargs["shuffle_data"] if "shuffle_data" in kwargs else False
self.shuffle_random_seed = kwargs["shuffle_random_seed"] if "shuffle_random_seed" in kwargs else None
self.alpha_s_estimator_param_list = (
kwargs["alpha_s_estimator_param_list"] if "alpha_s_estimator_param_list" in kwargs else None
)
self.alpha_s_estimator_list = kwargs["alpha_s_estimator_list"] if "alpha_s_estimator_list" in kwargs else None
self.g_s_estimator_list = kwargs["g_s_estimator_list"] if "g_s_estimator_list" in kwargs else None
self.g_s_estimator_param_list = (
kwargs["g_s_estimator_param_list"] if "g_s_estimator_param_list" in kwargs else None
)
self.plugin_reisz = kwargs["plugin_reisz"] if "plugin_reisz" in kwargs else False
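# plugin_reisz=True uses the propensity-score-based (plugin) estimate of the Riesz
# representer for binary treatment; otherwise a general estimator trained with a
# custom loss is used (see reisz.py).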
self.logger = logging.getLogger(__name__)
def infer_default_kappa_t(self, len_kappa_t=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_t."
+ " Provide a value for kappa_t"
)
t = self._data[self._treatment_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_t == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
tmodel = LogisticRegression().fit(observed_common_causes, t)
tpred = tmodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
tcap = tmodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(tcap - tpred)) / tpred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_t == "linear":
# Estimating the regression coefficient from standardized features to t
corrcoef_var_t = np.corrcoef(observed_common_causes, t, rowvar=False)[-1, :-1]
std_dev_t = np.std(t)[0]
max_coeff = max(corrcoef_var_t) * std_dev_t
min_coeff = min(corrcoef_var_t) * std_dev_t
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_treatment)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_t
self.logger.info("(Min, Max) kappa_t for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _compute_min_max_coeff(self, min_coeff, max_coeff, effect_strength_fraction):
max_coeff = effect_strength_fraction * max_coeff
min_coeff = effect_strength_fraction * min_coeff
return min_coeff, max_coeff
def infer_default_kappa_y(self, len_kappa_y=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_y."
+ " Provide a value for kappa_y"
)
y = self._data[self._outcome_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_y == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
ymodel = LogisticRegression().fit(observed_common_causes, y)
ypred = ymodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
ycap = ymodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(ycap - ypred)) / ypred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_y == "linear":
corrcoef_var_y = np.corrcoef(observed_common_causes, y, rowvar=False)[-1, :-1]
std_dev_y = np.std(y)[0]
max_coeff = max(corrcoef_var_y) * std_dev_y
min_coeff = min(corrcoef_var_y) * std_dev_y
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_outcome)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_y
self.logger.info("(Min, Max) kappa_y for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def refute_estimate(self, show_progress_bar=False):
"""
This function attempts to add an unobserved common cause to the outcome and the treatment. At present, the behavior is implemented for one-dimensional continuous
and binary variables. This function can take either single-valued inputs or a range of inputs. The function then looks at the data type of the input and decides on the course of
action.
:return: CausalRefuter: An object that contains the estimated effect and a new effect and the name of the refutation used.
"""
if self.simulation_method == "linear-partial-R2":
if not (isinstance(self._estimate.estimator, LinearRegressionEstimator)):
raise NotImplementedError(
"Currently only LinearRegressionEstimator is supported for Sensitivity Analysis"
)
if len(self._estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
if self.frac_strength_outcome == 1:
self.frac_strength_outcome = self.frac_strength_treatment
analyzer = LinearSensitivityAnalyzer(
estimator=self._estimate.estimator,
data=self._data,
treatment_name=self._treatment_name,
percent_change_estimate=self.percent_change_estimate,
significance_level=self.significance_level,
benchmark_common_causes=self.benchmark_common_causes,
null_hypothesis_effect=self.null_hypothesis_effect,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
common_causes_order=self._estimate.estimator._observed_common_causes.columns,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.simulation_method == "non-parametric-partial-R2":
# If the estimator used is LinearDML, partially linear sensitivity analysis will be automatically chosen
if isinstance(self._estimate.estimator, dowhy.causal_estimators.econml.Econml):
if self._estimate.estimator._econml_methodname == "econml.dml.LinearDML":
analyzer = PartialLinearSensitivityAnalyzer(
estimator=self._estimate._estimator_object,
observed_common_causes=self._estimate.estimator._observed_common_causes,
treatment=self._estimate.estimator._treatment,
outcome=self._estimate.estimator._outcome,
alpha_s_estimator_param_list=self.alpha_s_estimator_param_list,
g_s_estimator_list=self.g_s_estimator_list,
g_s_estimator_param_list=self.g_s_estimator_param_list,
effect_strength_treatment=self.kappa_t,
effect_strength_outcome=self.kappa_y,
benchmark_common_causes=self.benchmark_common_causes,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
analyzer = NonParametricSensitivityAnalyzer(
estimator=self._estimate.estimator,
observed_common_causes=self._estimate.estimator._observed_common_causes,
treatment=self._estimate.estimator._treatment,
outcome=self._estimate.estimator._outcome,
alpha_s_estimator_list=self.alpha_s_estimator_list,
alpha_s_estimator_param_list=self.alpha_s_estimator_param_list,
g_s_estimator_list=self.g_s_estimator_list,
g_s_estimator_param_list=self.g_s_estimator_param_list,
effect_strength_treatment=self.kappa_t,
effect_strength_outcome=self.kappa_y,
benchmark_common_causes=self.benchmark_common_causes,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
theta_s=self._estimate.value,
plugin_reisz=self.plugin_reisz,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.kappa_t is None:
self.kappa_t = self.infer_default_kappa_t()
if self.kappa_y is None:
self.kappa_y = self.infer_default_kappa_y()
if not isinstance(self.kappa_t, (list, np.ndarray)) and not isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with single value inputs
new_data = copy.deepcopy(self._data)
new_data = self.include_confounders_effect(new_data, self.kappa_t, self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(new_data, self._target_estimand, self._estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
refute.new_effect_array = np.array(new_effect.value)
refute.new_effect = new_effect.value
refute.add_refuter(self)
return refute
else: # Deal with multiple value inputs
if isinstance(self.kappa_t, (list, np.ndarray)) and isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with range inputs
# Get a 2D matrix of values
# x,y = np.meshgrid(self.kappa_t, self.kappa_y) # x,y are both MxN
results_matrix = np.random.rand(
len(self.kappa_t), len(self.kappa_y)
) # Matrix to hold all the results of NxM
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
for j in range(len(self.kappa_y)):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y[j])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value,
new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause",
)
results_matrix[i][j] = refute.new_effect # Populate the results
refute.new_effect_array = results_matrix
refute.new_effect = (np.min(results_matrix), np.max(results_matrix))
# Store the values into the refute object
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
oe = self._estimate.value
contour_levels = [oe / 4.0, oe / 2.0, (3.0 / 4) * oe, oe]
contour_levels.extend([0, np.min(results_matrix), np.max(results_matrix)])
if self.plotmethod == "contour":
cp = plt.contourf(self.kappa_y, self.kappa_t, results_matrix, levels=sorted(contour_levels))
# Adding a label on the contour line for the original estimate
fmt = {}
trueeffect_index = np.where(cp.levels == oe)[0][0]
fmt[cp.levels[trueeffect_index]] = "Estimated Effect"
# Label every other level using strings
plt.clabel(cp, [cp.levels[trueeffect_index]], inline=True, fmt=fmt)
plt.colorbar(cp)
elif self.plotmethod == "colormesh":
cp = plt.pcolormesh(self.kappa_y, self.kappa_t, results_matrix, shading="nearest")
plt.colorbar(cp, ticks=contour_levels)
ax.yaxis.set_ticks(self.kappa_t)
ax.xaxis.set_ticks(self.kappa_y)
plt.xticks(rotation=45)
ax.set_title("Effect of Unobserved Common Cause")
ax.set_ylabel("Value of Linear Constant on Treatment")
ax.set_xlabel("Value of Linear Constant on Outcome")
plt.show()
return refute
elif isinstance(self.kappa_t, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_t))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_t, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Treatment")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
elif isinstance(self.kappa_y, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_y))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_y)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t, self.kappa_y[i])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_y, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Outcome")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
def include_confounders_effect(self, new_data, kappa_t, kappa_y):
"""
This function deals with the change in the value of the data due to the effect of the unobserved confounder.
In the case of a binary flip, we flip only if the random number is greater than the threshold set.
In the case of a linear effect, we use the variable as the linear regression constant.
:param new_data: pandas.DataFrame: The data to be changed due to the effects of the unobserved confounder.
:param kappa_t: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:param kappa_y: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:return: pandas.DataFrame: The DataFrame that includes the effects of the unobserved confounder.
"""
num_rows = self._data.shape[0]
stdnorm = scipy.stats.norm()
w_random = stdnorm.rvs(num_rows)
if self.effect_on_t == "binary_flip":
alpha = 2 * kappa_t - 1 if kappa_t >= 0.5 else 1 - 2 * kappa_t
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_t >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._treatment_name] = (
1 - new_data.loc[rel_interval <= w_random, self._treatment_name]
)
for tname in self._treatment_name:
if pd.api.types.is_bool_dtype(self._data[tname]):
new_data = new_data.astype({tname: "bool"}, copy=False)
elif self.effect_on_t == "linear":
confounder_t_effect = kappa_t * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._treatment_name] = new_data[self._treatment_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_t_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
if self.effect_on_y == "binary_flip":
alpha = 2 * kappa_y - 1 if kappa_y >= 0.5 else 1 - 2 * kappa_y
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_y >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._outcome_name] = (
1 - new_data.loc[rel_interval <= w_random, self._outcome_name]
)
for yname in self._outcome_name:
if pd.api.types.is_bool_dtype(self._data[yname]):
new_data = new_data.astype({yname: "bool"}, copy=False)
elif self.effect_on_y == "linear":
confounder_y_effect = (-1) * kappa_y * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._outcome_name] = new_data[self._outcome_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_y_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
return new_data
def include_simulated_confounder(self, convergence_threshold=0.1, c_star_max=1000):
"""
This function simulates an unobserved confounder based on the data using the following steps:
1. It calculates the "residuals" from the treatment and outcome model
i.) The outcome model has outcome as the dependent variable and all the observed variables including treatment as independent variables
ii.) The treatment model has treatment as the dependent variable and all the observed variables as independent variables.
2. U is an intermediate random variable drawn from the normal distribution with the weighted average of residuals as mean and a unit variance
U ~ N(c1*d_y + c2*d_t, 1)
where
*d_y and d_t are residuals from the treatment and outcome model
*c1 and c2 are coefficients to the residuals
3. The final U, which is the simulated unobserved confounder is obtained by debiasing the intermediate variable U by residualising it with X
Choosing the coefficients c1 and c2:
The coefficients are chosen based on these basic assumptions:
1. There is a hyperbolic relationship satisfying c1*c2 = c_star
2. c_star is chosen from a range of possible values based on the correlation of the obtained simulated variable with outcome and treatment.
3. The product of correlations with treatment and outcome should be at a minimum distance to the maximum correlations with treatment and outcome in any of the observed confounders
4. The ratio of the weights should be such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
:param c_star_max: The maximum possible value for the hyperbolic curve on which the coefficients to the residuals lie. It defaults to 1000 in the code if not specified by the user.
:type int
:param convergence_threshold: The threshold to check the plateauing of the correlation while selecting a c_star. It defaults to 0.1 in the code if not specified by the user
:type float
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
# Obtaining the list of observed variables
required_variables = True
observed_variables = self.choose_variables(required_variables)
observed_variables_with_treatment_and_outcome = observed_variables + self._treatment_name + self._outcome_name
# Taking a subset of the dataframe that has only observed variables
self._data = self._data[observed_variables_with_treatment_and_outcome]
# Residuals from the outcome model obtained by fitting a linear model
y = self._data[self._outcome_name[0]]
observed_variables_with_treatment = observed_variables + self._treatment_name
X = self._data[observed_variables_with_treatment]
model = sm.OLS(y, X.astype("float"))
results = model.fit()
residuals_y = y - results.fittedvalues
d_y = list(pd.Series(residuals_y))
# Residuals from the treatment model obtained by fitting a linear model
t = self._data[self._treatment_name[0]].astype("int64")
X = self._data[observed_variables]
model = sm.OLS(t, X)
results = model.fit()
residuals_t = t - results.fittedvalues
d_t = list(pd.Series(residuals_t))
# Initialising product_cor_metric_observed with a really low value as finding maximum
product_cor_metric_observed = -10000000000
for i in observed_variables:
current_obs_confounder = self._data[i]
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_obs_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_obs_confounder.corr(treatment_values)
product_cor_metric_current = correlation_y * correlation_t
if product_cor_metric_current >= product_cor_metric_observed:
product_cor_metric_observed = product_cor_metric_current
correlation_t_observed = correlation_t
correlation_y_observed = correlation_y
# The user has an option to give the effect_strength_on_y and effect_strength_on_t, which can then be used instead of the maximum correlation with treatment and outcome in the observed variables, as they specify the desired effect.
if self.kappa_t is not None:
correlation_t_observed = self.kappa_t
if self.kappa_y is not None:
correlation_y_observed = self.kappa_y
# Choosing a c_star based on the data.
# The correlations stop increasing upon increasing c_star after a certain value, that is it plateaus and we choose the value of c_star to be the value it plateaus.
correlation_y_list = []
correlation_t_list = []
product_cor_metric_simulated_list = []
x_list = []
step = int(c_star_max / 10)
for i in range(0, int(c_star_max), step):
c1 = math.sqrt(i)
c2 = c1
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
correlation_y_list.append(correlation_y)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
correlation_t_list.append(correlation_t)
product_cor_metric_simulated = correlation_y * correlation_t
product_cor_metric_simulated_list.append(product_cor_metric_simulated)
x_list.append(i)
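# Pick c_star at the first step where the gain in correlation_y over the previous step
# drops to convergence_threshold or below, i.e. where the correlation curve plateaus.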
index = 1
while index < len(correlation_y_list):
if (correlation_y_list[index] - correlation_y_list[index - 1]) <= convergence_threshold:
c_star = x_list[index]
break
index = index + 1
# Choosing c1 and c2 based on the hyperbolic relationship once c_star is chosen, by going over various combinations of c1 and c2 values and choosing the combination
# which maintains the minimum distance between the product of correlations of the simulated variable and the product of maximum correlations of one of the observed variables,
# and which additionally checks that the ratio of the weights maintains the ratio of the maximum possible observed coefficients within some confidence interval
# c1_final and c2_final are initialised to the values on the hyperbolic curve such that c1_final = c2_final and c1_final*c2_final = c_star
c1_final = math.sqrt(c_star)
c2_final = math.sqrt(c_star)
# initialising min_distance_between_product_cor_metrics to be a value greater than 1
min_distance_between_product_cor_metrics = 1.5
i = 0.05
threshold = c_star / 0.05
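# Sweep candidate (c1, c2) pairs along the hyperbola c1 * c2 = c_star, starting at
# c2 = 0.05 and growing c2 by a factor of 1.5 each iteration, up to c_star / 0.05.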
while i <= threshold:
c2 = i
c1 = c_star / c2
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
product_cor_metric_simulated = correlation_y * correlation_t
if min_distance_between_product_cor_metrics >= abs(
product_cor_metric_simulated - product_cor_metric_observed
):
min_distance_between_product_cor_metrics = abs(
product_cor_metric_simulated - product_cor_metric_observed
)
additional_condition = correlation_y_observed / correlation_t_observed
if ((c1 / c2) <= (additional_condition + 0.3 * additional_condition)) and (
(c1 / c2) >= (additional_condition - 0.3 * additional_condition)
): # choose minimum positive value
c1_final = c1
c2_final = c2
i = i * 1.5
"""#closed form solution
print("c_star_max before closed form", c_star_max)
if max_correlation_with_t == -1000:
c2 = 0
c1 = c_star_max
else:
additional_condition = abs(max_correlation_with_y/max_correlation_with_t)
print("additional_condition", additional_condition)
c2 = math.sqrt(c_star_max/additional_condition)
c1 = c_star_max/c2"""
final_U = self.generate_confounder_from_residuals(c1_final, c2_final, d_y, d_t, X)
return final_U
def generate_confounder_from_residuals(self, c1, c2, d_y, d_t, X):
"""
This function takes the residuals from the treatment and outcome models together with their coefficients, simulates the intermediate random variable U by drawing, for each row, from a normal distribution whose mean is the weighted sum of that row's residuals, and then debiases the intermediate variable to obtain the final variable.
:param c1: coefficient to the residual from the outcome model
:type c1: float
:param c2: coefficient to the residual from the treatment model
:type c2: float
:param d_y: residuals from the outcome model
:type d_y: list
:param d_t: residuals from the treatment model
:type d_t: list
:param X: observed variables used to residualise (debias) the intermediate variable U
:type X: pandas.DataFrame
:returns: The simulated values of the unobserved confounder based on the data.
:rtype: pandas.core.series.Series
"""
U = []
for j in range(len(d_t)):
simulated_variable_mean = c1 * d_y[j] + c2 * d_t[j]
simulated_variable_stddev = 1
U.append(np.random.normal(simulated_variable_mean, simulated_variable_stddev, 1))
U = np.array(U)
model = sm.OLS(U, X)
results = model.fit()
U = U.reshape(
-1,
)
final_U = U - results.fittedvalues.values
final_U = pd.Series(final_U)
return final_U
| anusha0409 | 81841c697bd5e80ecf9e731432305f6186666f1f | bb446c333f2256074304b0dec9cb5628d284b542 | this is very nice, you can add the description of g, g_s, alpha etc to the notebook also | amit-sharma | 375 |
py-why/dowhy | 478 | Adding Non Linear Sensitivity Analysis | This PR implements the non-parametric sensitivity analysis from Chernozhukov et al. https://arxiv.org/abs/2112.13398
It implements two sensitivity analyzers:
1. For Partial Linear DGPs and estimators like LinearDML
2. For general non-parametric DGPs and estimators like KernelDML.
The notebook in this PR provides an introduction to how the sensitivity bounds are calculated for the partial linear case. For general non-parametric DGPs, we need to estimate a special function called the Reisz representer. For binary treatment, it is exactly the difference in outcome weighted by the propensity score. So we provide two options to learn the Reisz representer: 1) plugin_reisz, which uses the propensity score; and 2) a general estimator, which uses a custom loss function. These two are in the file reisz.py.
Briefly, the sensitivity bounds depend on two parameters that denote the effect of the unobserved confounder on treatment and outcome. That's why we use the same API as for the `add_unobserved_common_cause` method and add this sensitivity analysis as a new option, simulation_method="non-parametric-partial-R2". The format of the plots is identical to that of the "linear-partial-R2" simulation method that is already implemented.
We provide two modes for the user.
1) User specifies the effect strength parameters themselves, as a range of values.
2) User benchmarks the effect strength parameters as a multiple of the same parameters for the observed common causes.
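For example, the second (benchmarking) mode can be invoked through the usual refutation API roughly as follows; this is a sketch, not code from this PR, and the column name "W0" as well as the model/estimand/estimate objects are assumed to come from a standard DoWhy workflow:
refute = model.refute_estimate(
    identified_estimand,
    estimate,
    method_name="add_unobserved_common_cause",
    simulation_method="non-parametric-partial-R2",
    benchmark_common_causes=["W0"],
    effect_fraction_on_treatment=[1, 2, 3],
)
The first mode instead passes partial_r2_confounder_treatment / partial_r2_confounder_outcome directly.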
Signed-off-by: anusha <anushaagarwal2000.com> | null | 2022-06-20 14:37:11+00:00 | 2022-09-16 03:57:26+00:00 | dowhy/causal_refuters/add_unobserved_common_cause.py | import copy
import logging
import math
import numpy as np
import pandas as pd
import scipy.stats
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from tqdm.auto import tqdm
from dowhy.causal_estimator import CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_refuter import CausalRefutation, CausalRefuter
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer
class AddUnobservedCommonCause(CausalRefuter):
"""Add an unobserved confounder for refutation.
Supports additional parameters that can be specified in the refute_estimate() method.
- 'confounders_effect_on_treatment': how the simulated confounder affects the value of treatment. This can be linear (for continuous treatment) or binary_flip (for binary treatment)
- 'confounders_effect_on_outcome': how the simulated confounder affects the value of outcome. This can be linear (for continuous outcome) or binary_flip (for binary outcome)
- 'effect_strength_on_treatment': parameter for the strength of the effect of the simulated confounder on treatment. For a linear effect, it is the regression coefficient. For binary_flip, it is the probability that the simulated confounder's effect flips the value of treatment from 0 to 1 (or vice-versa).
- 'effect_strength_on_outcome': parameter for the strength of the effect of the simulated confounder on outcome. For a linear effect, it is the regression coefficient. For binary_flip, it is the probability that the simulated confounder's effect flips the value of outcome from 0 to 1 (or vice-versa).
TODO: Needs an interpretation module
"""
def __init__(self, *args, **kwargs):
"""
Initialize the parameters required for the refuter.
If effect_strength_on_treatment or effect_strength_on_outcome is not
given, it is calculated automatically as a range between the
minimum and maximum effect strength of observed confounders on treatment
and outcome respectively.
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param effect_strength_on_treatment: float, numpy.ndarray: This refers to the strength of the confounder on treatment. For a linear effect, it behaves like the regression coefficient. For a binary flip, it is the probability with which it can invert the value of the treatment.
:param effect_strength_on_outcome: float, numpy.ndarray: This refers to the strength of the confounder on outcome. For a linear effect, it behaves like the regression coefficient. For a binary flip, it is the probability with which it can invert the value of the outcome.
:param effect_fraction_on_treatment: float: If effect_strength_on_treatment is not provided, this parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param effect_fraction_on_outcome: float: If effect_strength_on_outcome is not provided, this parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:param simulated_method_name: method used to add the unobserved common cause. Use "linear-partial-R2" for linear sensitivity analysis.
:param percent_change_estimate: It is the percentage of reduction of the treatment estimate that could alter the results (default = 1).
If percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome needed to reduce the estimate by 100%, i.e., bring it down to 0.
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = False)
:param benchmark_common_causes: names of variables for bounding strength of confounders
:param significance_level: confidence interval for statistical inference(default = 0.05)
:param null_hypothesis_effect: assumed effect under the null hypothesis
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
To override the setting, set plot_estimate = False.
"""
super().__init__(*args, **kwargs)
self.effect_on_t = (
kwargs["confounders_effect_on_treatment"] if "confounders_effect_on_treatment" in kwargs else "binary_flip"
)
self.effect_on_y = (
kwargs["confounders_effect_on_outcome"] if "confounders_effect_on_outcome" in kwargs else "linear"
)
self.kappa_t = kwargs["effect_strength_on_treatment"] if "effect_strength_on_treatment" in kwargs else None
self.kappa_y = kwargs["effect_strength_on_outcome"] if "effect_strength_on_outcome" in kwargs else None
self.frac_strength_treatment = (
kwargs["effect_fraction_on_treatment"] if "effect_fraction_on_treatment" in kwargs else 1
)
self.frac_strength_outcome = (
kwargs["effect_fraction_on_outcome"] if "effect_fraction_on_outcome" in kwargs else 1
)
self.simulated_method_name = (
kwargs["simulated_method_name"] if "simulated_method_name" in kwargs else "linear_based"
)
self.plotmethod = kwargs["plotmethod"] if "plotmethod" in kwargs else "colormesh"
self.percent_change_estimate = kwargs["percent_change_estimate"] if "percent_change_estimate" in kwargs else 1.0
self.significance_level = kwargs["significance_level"] if "significance_level" in kwargs else 0.05
self.confounder_increases_estimate = (
kwargs["confounder_increases_estimate"] if "confounder_increases_estimate" in kwargs else False
)
self.benchmark_common_causes = (
kwargs["benchmark_common_causes"] if "benchmark_common_causes" in kwargs else None
)
self.null_hypothesis_effect = kwargs["null_hypothesis_effect"] if "null_hypothesis_effect" in kwargs else 0
self.plot_estimate = kwargs["plot_estimate"] if "plot_estimate" in kwargs else True
self.logger = logging.getLogger(__name__)
def infer_default_kappa_t(self, len_kappa_t=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_t."
+ " Provide a value for kappa_t"
)
t = self._data[self._treatment_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_t == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
tmodel = LogisticRegression().fit(observed_common_causes, t)
tpred = tmodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
tcap = tmodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(tcap - tpred)) / tpred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_t == "linear":
# Estimating the regression coefficient from standardized features to t
corrcoef_var_t = np.corrcoef(observed_common_causes, t, rowvar=False)[-1, :-1]
std_dev_t = np.std(t)[0]
max_coeff = max(corrcoef_var_t) * std_dev_t
min_coeff = min(corrcoef_var_t) * std_dev_t
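# With standardized covariates (unit variance), the univariate regression coefficient of t on a
# covariate equals corr(covariate, t) * std(t), so max_coeff / min_coeff bound the observed
# linear effect strengths on treatment.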
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_treatment)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_t
self.logger.info("(Min, Max) kappa_t for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _compute_min_max_coeff(self, min_coeff, max_coeff, effect_strength_fraction):
max_coeff = effect_strength_fraction * max_coeff
min_coeff = effect_strength_fraction * min_coeff
return min_coeff, max_coeff
def infer_default_kappa_y(self, len_kappa_y=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_y."
+ " Provide a value for kappa_y"
)
y = self._data[self._outcome_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_y == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
ymodel = LogisticRegression().fit(observed_common_causes, y)
ypred = ymodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
ycap = ymodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(ycap - ypred)) / ypred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_y == "linear":
corrcoef_var_y = np.corrcoef(observed_common_causes, y, rowvar=False)[-1, :-1]
std_dev_y = np.std(y)[0]
max_coeff = max(corrcoef_var_y) * std_dev_y
min_coeff = min(corrcoef_var_y) * std_dev_y
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_outcome)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_y
self.logger.info("(Min, Max) kappa_y for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def refute_estimate(self, show_progress_bar=False):
"""
This function attempts to add an unobserved common cause to the outcome and the treatment. At present, we have implemented the behavior for one dimensional behaviors for continuous
and binary variables. This function can either take single valued inputs or a range of inputs. The function then looks at the data type of the input and then decides on the course of
action.
:return: CausalRefuter: An object that contains the estimated effect and a new effect and the name of the refutation used.
"""
if self.simulated_method_name == "linear-partial-R2":
if not (isinstance(self._estimate.estimator, LinearRegressionEstimator)):
raise NotImplementedError(
"Currently only LinearRegressionEstimator is supported for Sensitivity Analysis"
)
if len(self._estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
if self.frac_strength_outcome == 1:
self.frac_strength_outcome = self.frac_strength_treatment
analyzer = LinearSensitivityAnalyzer(
estimator=self._estimate.estimator,
data=self._data,
treatment_name=self._treatment_name,
percent_change_estimate=self.percent_change_estimate,
significance_level=self.significance_level,
benchmark_common_causes=self.benchmark_common_causes,
null_hypothesis_effect=self.null_hypothesis_effect,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
common_causes_order=self._estimate.estimator._observed_common_causes.columns,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.kappa_t is None:
self.kappa_t = self.infer_default_kappa_t()
if self.kappa_y is None:
self.kappa_y = self.infer_default_kappa_y()
if not isinstance(self.kappa_t, (list, np.ndarray)) and not isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with single value inputs
new_data = copy.deepcopy(self._data)
new_data = self.include_confounders_effect(new_data, self.kappa_t, self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(new_data, self._target_estimand, self._estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
refute.new_effect_array = np.array(new_effect.value)
refute.new_effect = new_effect.value
refute.add_refuter(self)
return refute
else: # Deal with multiple value inputs
if isinstance(self.kappa_t, (list, np.ndarray)) and isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with range inputs
# Get a 2D matrix of values
# x,y = np.meshgrid(self.kappa_t, self.kappa_y) # x,y are both MxN
results_matrix = np.random.rand(
len(self.kappa_t), len(self.kappa_y)
) # Matrix to hold all the results of NxM
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
for j in range(len(self.kappa_y)):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y[j])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value,
new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause",
)
results_matrix[i][j] = refute.new_effect # Populate the results
refute.new_effect_array = results_matrix
refute.new_effect = (np.min(results_matrix), np.max(results_matrix))
# Store the values into the refute object
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
oe = self._estimate.value
contour_levels = [oe / 4.0, oe / 2.0, (3.0 / 4) * oe, oe]
contour_levels.extend([0, np.min(results_matrix), np.max(results_matrix)])
if self.plotmethod == "contour":
cp = plt.contourf(self.kappa_y, self.kappa_t, results_matrix, levels=sorted(contour_levels))
# Adding a label on the contour line for the original estimate
fmt = {}
trueeffect_index = np.where(cp.levels == oe)[0][0]
fmt[cp.levels[trueeffect_index]] = "Estimated Effect"
# Label every other level using strings
plt.clabel(cp, [cp.levels[trueeffect_index]], inline=True, fmt=fmt)
plt.colorbar(cp)
elif self.plotmethod == "colormesh":
cp = plt.pcolormesh(self.kappa_y, self.kappa_t, results_matrix, shading="nearest")
plt.colorbar(cp, ticks=contour_levels)
ax.yaxis.set_ticks(self.kappa_t)
ax.xaxis.set_ticks(self.kappa_y)
plt.xticks(rotation=45)
ax.set_title("Effect of Unobserved Common Cause")
ax.set_ylabel("Value of Linear Constant on Treatment")
ax.set_xlabel("Value of Linear Constant on Outcome")
plt.show()
return refute
elif isinstance(self.kappa_t, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_t))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_t, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Treatment")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
elif isinstance(self.kappa_y, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_y))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_y)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t, self.kappa_y[i])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_y, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Outcome")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
def include_confounders_effect(self, new_data, kappa_t, kappa_y):
"""
This function deals with the change in the value of the data due to the effect of the unobserved confounder.
In the case of a binary flip, we flip only if the random number is greater than the threshold set.
In the case of a linear effect, we use the variable as the linear regression constant.
:param new_data: pandas.DataFrame: The data to be changed due to the effects of the unobserved confounder.
:param kappa_t: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:param kappa_y: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:return: pandas.DataFrame: The DataFrame that includes the effects of the unobserved confounder.
"""
num_rows = self._data.shape[0]
stdnorm = scipy.stats.norm()
w_random = stdnorm.rvs(num_rows)
if self.effect_on_t == "binary_flip":
alpha = 2 * kappa_t - 1 if kappa_t >= 0.5 else 1 - 2 * kappa_t
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_t >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._treatment_name] = (
1 - new_data.loc[rel_interval <= w_random, self._treatment_name]
)
for tname in self._treatment_name:
if pd.api.types.is_bool_dtype(self._data[tname]):
new_data = new_data.astype({tname: "bool"}, copy=False)
elif self.effect_on_t == "linear":
confounder_t_effect = kappa_t * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._treatment_name] = new_data[self._treatment_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_t_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
if self.effect_on_y == "binary_flip":
alpha = 2 * kappa_y - 1 if kappa_y >= 0.5 else 1 - 2 * kappa_y
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_y >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._outcome_name] = (
1 - new_data.loc[rel_interval <= w_random, self._outcome_name]
)
for yname in self._outcome_name:
if pd.api.types.is_bool_dtype(self._data[yname]):
new_data = new_data.astype({yname: "bool"}, copy=False)
elif self.effect_on_y == "linear":
confounder_y_effect = (-1) * kappa_y * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._outcome_name] = new_data[self._outcome_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_y_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
return new_data
def include_simulated_confounder(self, convergence_threshold=0.1, c_star_max=1000):
"""
This function simulates an unobserved confounder based on the data using the following steps:
1. It calculates the "residuals" from the treatment and outcome model
i.) The outcome model has outcome as the dependent variable and all the observed variables including treatment as independent variables
ii.) The treatment model has treatment as the dependent variable and all the observed variables as independent variables.
2. U is an intermediate random variable drawn from the normal distribution with the weighted average of residuals as mean and a unit variance
U ~ N(c1*d_y + c2*d_t, 1)
where
*d_y and d_t are residuals from the treatment and outcome model
*c1 and c2 are coefficients to the residuals
3. The final U, which is the simulated unobserved confounder is obtained by debiasing the intermediate variable U by residualising it with X
Choosing the coefficients c1 and c2:
The coefficients are chosen based on these basic assumptions:
1. There is a hyperbolic relationship satisfying c1*c2 = c_star
2. c_star is chosen from a range of possible values based on the correlation of the obtained simulated variable with outcome and treatment.
3. The product of correlations with treatment and outcome should be at a minimum distance to the maximum correlations with treatment and outcome in any of the observed confounders
4. The ratio of the weights should be such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
:param c_star_max: The maximum possible value for the hyperbolic curve on which the coefficients to the residuals lie. It defaults to 1000 in the code if not specified by the user.
:type c_star_max: int
:param convergence_threshold: The threshold to check the plateauing of the correlation while selecting a c_star. It defaults to 0.1 in the code if not specified by the user.
:type convergence_threshold: float
:returns: The simulated values of the unobserved confounder based on the data.
:rtype: pandas.core.series.Series
"""
# Obtaining the list of observed variables
required_variables = True
observed_variables = self.choose_variables(required_variables)
observed_variables_with_treatment_and_outcome = observed_variables + self._treatment_name + self._outcome_name
# Taking a subset of the dataframe that has only observed variables
self._data = self._data[observed_variables_with_treatment_and_outcome]
# Residuals from the outcome model obtained by fitting a linear model
y = self._data[self._outcome_name[0]]
observed_variables_with_treatment = observed_variables + self._treatment_name
X = self._data[observed_variables_with_treatment]
model = sm.OLS(y, X.astype("float"))
results = model.fit()
residuals_y = y - results.fittedvalues
d_y = list(pd.Series(residuals_y))
# Residuals from the treatment model obtained by fitting a linear model
t = self._data[self._treatment_name[0]].astype("int64")
X = self._data[observed_variables]
model = sm.OLS(t, X)
results = model.fit()
residuals_t = t - results.fittedvalues
d_t = list(pd.Series(residuals_t))
# Initialising product_cor_metric_observed with a really low value as finding maximum
product_cor_metric_observed = -10000000000
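# Track, over the observed variables, the largest product corr(variable, outcome) * corr(variable, treatment);
# it serves as the target strength that the simulated confounder should match.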
for i in observed_variables:
current_obs_confounder = self._data[i]
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_obs_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_obs_confounder.corr(treatment_values)
product_cor_metric_current = correlation_y * correlation_t
if product_cor_metric_current >= product_cor_metric_observed:
product_cor_metric_observed = product_cor_metric_current
correlation_t_observed = correlation_t
correlation_y_observed = correlation_y
# The user has an option to give the effect_strength_on_y and effect_strength_on_t, which can then be used instead of the maximum correlation with treatment and outcome in the observed variables, as they specify the desired effect.
if self.kappa_t is not None:
correlation_t_observed = self.kappa_t
if self.kappa_y is not None:
correlation_y_observed = self.kappa_y
# Choosing a c_star based on the data.
# The correlations stop increasing upon increasing c_star after a certain value, that is it plateaus and we choose the value of c_star to be the value it plateaus.
correlation_y_list = []
correlation_t_list = []
product_cor_metric_simulated_list = []
x_list = []
step = int(c_star_max / 10)
for i in range(0, int(c_star_max), step):
c1 = math.sqrt(i)
c2 = c1
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
correlation_y_list.append(correlation_y)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
correlation_t_list.append(correlation_t)
product_cor_metric_simulated = correlation_y * correlation_t
product_cor_metric_simulated_list.append(product_cor_metric_simulated)
x_list.append(i)
index = 1
while index < len(correlation_y_list):
if (correlation_y_list[index] - correlation_y_list[index - 1]) <= convergence_threshold:
c_star = x_list[index]
break
index = index + 1
# Choosing c1 and c2 based on the hyperbolic relationship once c_star is chosen, by going over various combinations of c1 and c2 values and choosing the combination
# which maintains the minimum distance between the product of correlations of the simulated variable and the product of maximum correlations of one of the observed variables,
# and which additionally checks that the ratio of the weights maintains the ratio of the maximum possible observed coefficients within some confidence interval
# c1_final and c2_final are initialised to the values on the hyperbolic curve such that c1_final = c2_final and c1_final*c2_final = c_star
c1_final = math.sqrt(c_star)
c2_final = math.sqrt(c_star)
# initialising min_distance_between_product_cor_metrics to be a value greater than 1
min_distance_between_product_cor_metrics = 1.5
i = 0.05
threshold = c_star / 0.05
while i <= threshold:
c2 = i
c1 = c_star / c2
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
product_cor_metric_simulated = correlation_y * correlation_t
if min_distance_between_product_cor_metrics >= abs(
product_cor_metric_simulated - product_cor_metric_observed
):
min_distance_between_product_cor_metrics = abs(
product_cor_metric_simulated - product_cor_metric_observed
)
additional_condition = correlation_y_observed / correlation_t_observed
if ((c1 / c2) <= (additional_condition + 0.3 * additional_condition)) and (
(c1 / c2) >= (additional_condition - 0.3 * additional_condition)
): # choose minimum positive value
c1_final = c1
c2_final = c2
i = i * 1.5
"""#closed form solution
print("c_star_max before closed form", c_star_max)
if max_correlation_with_t == -1000:
c2 = 0
c1 = c_star_max
else:
additional_condition = abs(max_correlation_with_y/max_correlation_with_t)
print("additional_condition", additional_condition)
c2 = math.sqrt(c_star_max/additional_condition)
c1 = c_star_max/c2"""
final_U = self.generate_confounder_from_residuals(c1_final, c2_final, d_y, d_t, X)
return final_U
def generate_confounder_from_residuals(self, c1, c2, d_y, d_t, X):
"""
This function takes the residuals from the treatment and outcome models together with their coefficients, simulates the intermediate random variable U by drawing, for each row, from a normal distribution whose mean is the weighted sum of that row's residuals, and then debiases the intermediate variable to obtain the final variable.
:param c1: coefficient to the residual from the outcome model
:type c1: float
:param c2: coefficient to the residual from the treatment model
:type c2: float
:param d_y: residuals from the outcome model
:type d_y: list
:param d_t: residuals from the treatment model
:type d_t: list
:param X: observed variables used to residualise (debias) the intermediate variable U
:type X: pandas.DataFrame
:returns: The simulated values of the unobserved confounder based on the data.
:rtype: pandas.core.series.Series
"""
U = []
for j in range(len(d_t)):
simulated_variable_mean = c1 * d_y[j] + c2 * d_t[j]
simulated_variable_stddev = 1
U.append(np.random.normal(simulated_variable_mean, simulated_variable_stddev, 1))
U = np.array(U)
model = sm.OLS(U, X)
results = model.fit()
U = U.reshape(
-1,
)
final_U = U - results.fittedvalues.values
final_U = pd.Series(final_U)
return final_U
| import copy
import logging
import math
import numpy as np
import pandas as pd
import scipy.stats
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from tqdm.auto import tqdm
import dowhy.causal_estimators.econml
from dowhy.causal_estimator import CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_refuter import CausalRefutation, CausalRefuter
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer
from dowhy.causal_refuters.non_parametric_sensitivity_analyzer import NonParametricSensitivityAnalyzer
from dowhy.causal_refuters.partial_linear_sensitivity_analyzer import PartialLinearSensitivityAnalyzer
class AddUnobservedCommonCause(CausalRefuter):
"""Add an unobserved confounder for refutation.
AddUnobservedCommonCause class supports three methods:
1) Simulation of an unobserved confounder
2) Linear partial R2: Sensitivity analysis for linear models.
3) Non-parametric partial R2-based: Sensitivity analysis for non-parametric models.
Supports additional parameters that can be specified in the refute_estimate() method.
"""
def __init__(self, *args, **kwargs):
"""
Initialize the parameters required for the refuter.
For direct_simulation, if effect_strength_on_treatment or effect_strength_on_outcome is not
given, it is calculated automatically as a range between the
minimum and maximum effect strength of observed confounders on treatment
and outcome respectively.
:param simulation_method: The method to use for simulating the effect of the unobserved confounder. Possible values are ["direct-simulation", "linear-partial-R2", "non-parametric-partial-R2"].
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param effect_strength_on_treatment: float, numpy.ndarray: [Used when simulation_method="direct-simulation"] Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When confounders_effect_on_treatment is binary flip, it is the probability with which the effect of the unobserved confounder can invert the value of the treatment.
:param effect_strength_on_outcome: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param partial_r2_confounder_treatment: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param partial_r2_confounder_outcome: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome in order to reduce the estimate by 100% i.e bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = False). (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param significance_level: confidence interval for statistical inference(default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param num_splits: number of splits for cross validation. (default = 5). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_data : shuffle data or not before splitting into folds (default = False). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_random_seed: seed for randomly shuffling data. (relevant only for non-parametric-partial-R2 simulation method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
"""
super().__init__(*args, **kwargs)
self.simulation_method = kwargs["simulation_method"] if "simulation_method" in kwargs else "direct-simulation"
self.effect_on_t = (
kwargs["confounders_effect_on_treatment"] if "confounders_effect_on_treatment" in kwargs else "binary_flip"
)
self.effect_on_y = (
kwargs["confounders_effect_on_outcome"] if "confounders_effect_on_outcome" in kwargs else "linear"
)
if self.simulation_method == "direct-simulation":
self.kappa_t = kwargs["effect_strength_on_treatment"] if "effect_strength_on_treatment" in kwargs else None
self.kappa_y = kwargs["effect_strength_on_outcome"] if "effect_strength_on_outcome" in kwargs else None
elif self.simulation_method in ["linear-partial-R2", "non-parametric-partial-R2"]:
self.kappa_t = (
kwargs["partial_r2_confounder_treatment"] if "partial_r2_confounder_treatment" in kwargs else None
)
self.kappa_y = (
kwargs["partial_r2_confounder_outcome"] if "partial_r2_confounder_outcome" in kwargs else None
)
else:
raise ValueError(
"simulation method is not supported. Try direct-simulation, linear-partial-R2 or non-parametric-partial-R2"
)
self.frac_strength_treatment = (
kwargs["effect_fraction_on_treatment"] if "effect_fraction_on_treatment" in kwargs else 1
)
self.frac_strength_outcome = (
kwargs["effect_fraction_on_outcome"] if "effect_fraction_on_outcome" in kwargs else 1
)
self.plotmethod = kwargs["plotmethod"] if "plotmethod" in kwargs else "colormesh"
self.percent_change_estimate = kwargs["percent_change_estimate"] if "percent_change_estimate" in kwargs else 1.0
self.significance_level = kwargs["significance_level"] if "significance_level" in kwargs else 0.05
self.confounder_increases_estimate = (
kwargs["confounder_increases_estimate"] if "confounder_increases_estimate" in kwargs else False
)
self.benchmark_common_causes = (
kwargs["benchmark_common_causes"] if "benchmark_common_causes" in kwargs else None
)
self.null_hypothesis_effect = kwargs["null_hypothesis_effect"] if "null_hypothesis_effect" in kwargs else 0
self.plot_estimate = kwargs["plot_estimate"] if "plot_estimate" in kwargs else True
self.num_splits = kwargs["num_splits"] if "num_splits" in kwargs else 5
self.shuffle_data = kwargs["shuffle_data"] if "shuffle_data" in kwargs else False
self.shuffle_random_seed = kwargs["shuffle_random_seed"] if "shuffle_random_seed" in kwargs else None
self.alpha_s_estimator_param_list = (
kwargs["alpha_s_estimator_param_list"] if "alpha_s_estimator_param_list" in kwargs else None
)
self.alpha_s_estimator_list = kwargs["alpha_s_estimator_list"] if "alpha_s_estimator_list" in kwargs else None
self.g_s_estimator_list = kwargs["g_s_estimator_list"] if "g_s_estimator_list" in kwargs else None
self.g_s_estimator_param_list = (
kwargs["g_s_estimator_param_list"] if "g_s_estimator_param_list" in kwargs else None
)
self.plugin_reisz = kwargs["plugin_reisz"] if "plugin_reisz" in kwargs else False
self.logger = logging.getLogger(__name__)
def infer_default_kappa_t(self, len_kappa_t=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_t."
+ " Provide a value for kappa_t"
)
t = self._data[self._treatment_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_t == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
tmodel = LogisticRegression().fit(observed_common_causes, t)
tpred = tmodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
tcap = tmodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(tcap - tpred)) / tpred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_t == "linear":
# Estimating the regression coefficient from standardized features to t
corrcoef_var_t = np.corrcoef(observed_common_causes, t, rowvar=False)[-1, :-1]
std_dev_t = np.std(t)[0]
max_coeff = max(corrcoef_var_t) * std_dev_t
min_coeff = min(corrcoef_var_t) * std_dev_t
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_treatment)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_t
self.logger.info("(Min, Max) kappa_t for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _compute_min_max_coeff(self, min_coeff, max_coeff, effect_strength_fraction):
max_coeff = effect_strength_fraction * max_coeff
min_coeff = effect_strength_fraction * min_coeff
return min_coeff, max_coeff
def infer_default_kappa_y(self, len_kappa_y=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_y."
+ " Provide a value for kappa_y"
)
y = self._data[self._outcome_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_y == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
ymodel = LogisticRegression().fit(observed_common_causes, y)
ypred = ymodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
ycap = ymodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(ycap - ypred)) / ypred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_y == "linear":
corrcoef_var_y = np.corrcoef(observed_common_causes, y, rowvar=False)[-1, :-1]
std_dev_y = np.std(y)[0]
max_coeff = max(corrcoef_var_y) * std_dev_y
min_coeff = min(corrcoef_var_y) * std_dev_y
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_outcome)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_y
self.logger.info("(Min, Max) kappa_y for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def refute_estimate(self, show_progress_bar=False):
"""
This function attempts to add an unobserved common cause to the outcome and the treatment. At present, we have implemented the behavior for one dimensional behaviors for continuous
and binary variables. This function can either take single valued inputs or a range of inputs. The function then looks at the data type of the input and then decides on the course of
action.
:return: CausalRefuter: An object that contains the estimated effect and a new effect and the name of the refutation used.
"""
if self.simulation_method == "linear-partial-R2":
if not (isinstance(self._estimate.estimator, LinearRegressionEstimator)):
raise NotImplementedError(
"Currently only LinearRegressionEstimator is supported for Sensitivity Analysis"
)
if len(self._estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
if self.frac_strength_outcome == 1:
self.frac_strength_outcome = self.frac_strength_treatment
analyzer = LinearSensitivityAnalyzer(
estimator=self._estimate.estimator,
data=self._data,
treatment_name=self._treatment_name,
percent_change_estimate=self.percent_change_estimate,
significance_level=self.significance_level,
benchmark_common_causes=self.benchmark_common_causes,
null_hypothesis_effect=self.null_hypothesis_effect,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
common_causes_order=self._estimate.estimator._observed_common_causes.columns,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.simulation_method == "non-parametric-partial-R2":
# If the estimator used is LinearDML, partially linear sensitivity analysis will be automatically chosen
if isinstance(self._estimate.estimator, dowhy.causal_estimators.econml.Econml):
if self._estimate.estimator._econml_methodname == "econml.dml.LinearDML":
analyzer = PartialLinearSensitivityAnalyzer(
estimator=self._estimate._estimator_object,
observed_common_causes=self._estimate.estimator._observed_common_causes,
treatment=self._estimate.estimator._treatment,
outcome=self._estimate.estimator._outcome,
alpha_s_estimator_param_list=self.alpha_s_estimator_param_list,
g_s_estimator_list=self.g_s_estimator_list,
g_s_estimator_param_list=self.g_s_estimator_param_list,
effect_strength_treatment=self.kappa_t,
effect_strength_outcome=self.kappa_y,
benchmark_common_causes=self.benchmark_common_causes,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
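# For any other estimator, fall back to the general non-parametric analyzer, which also learns
# the reisz representer (alpha_s) using the estimators supplied via alpha_s_estimator_list /
# alpha_s_estimator_param_list.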
analyzer = NonParametricSensitivityAnalyzer(
estimator=self._estimate.estimator,
observed_common_causes=self._estimate.estimator._observed_common_causes,
treatment=self._estimate.estimator._treatment,
outcome=self._estimate.estimator._outcome,
alpha_s_estimator_list=self.alpha_s_estimator_list,
alpha_s_estimator_param_list=self.alpha_s_estimator_param_list,
g_s_estimator_list=self.g_s_estimator_list,
g_s_estimator_param_list=self.g_s_estimator_param_list,
effect_strength_treatment=self.kappa_t,
effect_strength_outcome=self.kappa_y,
benchmark_common_causes=self.benchmark_common_causes,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
theta_s=self._estimate.value,
plugin_reisz=self.plugin_reisz,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.kappa_t is None:
self.kappa_t = self.infer_default_kappa_t()
if self.kappa_y is None:
self.kappa_y = self.infer_default_kappa_y()
if not isinstance(self.kappa_t, (list, np.ndarray)) and not isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with single value inputs
new_data = copy.deepcopy(self._data)
new_data = self.include_confounders_effect(new_data, self.kappa_t, self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(new_data, self._target_estimand, self._estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
refute.new_effect_array = np.array(new_effect.value)
refute.new_effect = new_effect.value
refute.add_refuter(self)
return refute
else: # Deal with multiple value inputs
if isinstance(self.kappa_t, (list, np.ndarray)) and isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with range inputs
# Get a 2D matrix of values
# x,y = np.meshgrid(self.kappa_t, self.kappa_y) # x,y are both MxN
results_matrix = np.random.rand(
len(self.kappa_t), len(self.kappa_y)
) # Matrix to hold all the results of NxM
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
for j in range(len(self.kappa_y)):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y[j])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value,
new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause",
)
results_matrix[i][j] = refute.new_effect # Populate the results
refute.new_effect_array = results_matrix
refute.new_effect = (np.min(results_matrix), np.max(results_matrix))
# Store the values into the refute object
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
oe = self._estimate.value
contour_levels = [oe / 4.0, oe / 2.0, (3.0 / 4) * oe, oe]
contour_levels.extend([0, np.min(results_matrix), np.max(results_matrix)])
if self.plotmethod == "contour":
cp = plt.contourf(self.kappa_y, self.kappa_t, results_matrix, levels=sorted(contour_levels))
# Adding a label on the contour line for the original estimate
fmt = {}
trueeffect_index = np.where(cp.levels == oe)[0][0]
fmt[cp.levels[trueeffect_index]] = "Estimated Effect"
# Label every other level using strings
plt.clabel(cp, [cp.levels[trueeffect_index]], inline=True, fmt=fmt)
plt.colorbar(cp)
elif self.plotmethod == "colormesh":
cp = plt.pcolormesh(self.kappa_y, self.kappa_t, results_matrix, shading="nearest")
plt.colorbar(cp, ticks=contour_levels)
ax.yaxis.set_ticks(self.kappa_t)
ax.xaxis.set_ticks(self.kappa_y)
plt.xticks(rotation=45)
ax.set_title("Effect of Unobserved Common Cause")
ax.set_ylabel("Value of Linear Constant on Treatment")
ax.set_xlabel("Value of Linear Constant on Outcome")
plt.show()
return refute
elif isinstance(self.kappa_t, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_t))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_t, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Treatment")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
elif isinstance(self.kappa_y, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_y))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_y)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t, self.kappa_y[i])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_y, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Outcome")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
def include_confounders_effect(self, new_data, kappa_t, kappa_y):
"""
This function deals with the change in the value of the data due to the effect of the unobserved confounder.
In the case of a binary flip, we flip only if the random number is greater than the threshold set.
In the case of a linear effect, we use the variable as the linear regression constant.
:param new_data: pandas.DataFrame: The data to be changed due to the effects of the unobserved confounder.
:param kappa_t: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:param kappa_y: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:return: pandas.DataFrame: The DataFrame that includes the effects of the unobserved confounder.
"""
num_rows = self._data.shape[0]
stdnorm = scipy.stats.norm()
w_random = stdnorm.rvs(num_rows)
if self.effect_on_t == "binary_flip":
alpha = 2 * kappa_t - 1 if kappa_t >= 0.5 else 1 - 2 * kappa_t
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_t >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._treatment_name] = (
1 - new_data.loc[rel_interval <= w_random, self._treatment_name]
)
for tname in self._treatment_name:
if pd.api.types.is_bool_dtype(self._data[tname]):
new_data = new_data.astype({tname: "bool"}, copy=False)
elif self.effect_on_t == "linear":
confounder_t_effect = kappa_t * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._treatment_name] = new_data[self._treatment_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_t_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
if self.effect_on_y == "binary_flip":
alpha = 2 * kappa_y - 1 if kappa_y >= 0.5 else 1 - 2 * kappa_y
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_y >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._outcome_name] = (
1 - new_data.loc[rel_interval <= w_random, self._outcome_name]
)
for yname in self._outcome_name:
if pd.api.types.is_bool_dtype(self._data[yname]):
new_data = new_data.astype({yname: "bool"}, copy=False)
elif self.effect_on_y == "linear":
confounder_y_effect = (-1) * kappa_y * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._outcome_name] = new_data[self._outcome_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_y_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
return new_data
def include_simulated_confounder(self, convergence_threshold=0.1, c_star_max=1000):
"""
This function simulates an unobserved confounder based on the data using the following steps:
1. It calculates the "residuals" from the treatment and outcome model
i.) The outcome model has outcome as the dependent variable and all the observed variables including treatment as independent variables
ii.) The treatment model has treatment as the dependent variable and all the observed variables as independent variables.
2. U is an intermediate random variable drawn from the normal distribution with the weighted average of residuals as mean and a unit variance
U ~ N(c1*d_y + c2*d_t, 1)
where
*d_y and d_t are residuals from the treatment and outcome model
*c1 and c2 are coefficients to the residuals
3. The final U, which is the simulated unobserved confounder is obtained by debiasing the intermediate variable U by residualising it with X
Choosing the coefficients c1 and c2:
The coefficients are chosen based on these basic assumptions:
1. There is a hyperbolic relationship satisfying c1*c2 = c_star
2. c_star is chosen from a range of possible values based on the correlation of the obtained simulated variable with outcome and treatment.
3. The product of correlations with treatment and outcome should be at a minimum distance to the maximum correlations with treatment and outcome in any of the observed confounders
4. The ratio of the weights should be such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
:param c_star_max: The maximum possible value for the hyperbolic curve on which the coefficients to the residuals lie. It defaults to 1000 in the code if not specified by the user.
:type int
:param convergence_threshold: The threshold to check the plateauing of the correlation while selecting a c_star. It defaults to 0.1 in the code if not specified by the user
:type float
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
# Obtaining the list of observed variables
required_variables = True
observed_variables = self.choose_variables(required_variables)
observed_variables_with_treatment_and_outcome = observed_variables + self._treatment_name + self._outcome_name
# Taking a subset of the dataframe that has only observed variables
self._data = self._data[observed_variables_with_treatment_and_outcome]
# Residuals from the outcome model obtained by fitting a linear model
y = self._data[self._outcome_name[0]]
observed_variables_with_treatment = observed_variables + self._treatment_name
X = self._data[observed_variables_with_treatment]
model = sm.OLS(y, X.astype("float"))
results = model.fit()
residuals_y = y - results.fittedvalues
d_y = list(pd.Series(residuals_y))
# Residuals from the treatment model obtained by fitting a linear model
t = self._data[self._treatment_name[0]].astype("int64")
X = self._data[observed_variables]
model = sm.OLS(t, X)
results = model.fit()
residuals_t = t - results.fittedvalues
d_t = list(pd.Series(residuals_t))
# Initialising product_cor_metric_observed with a very low value, since we are searching for the maximum
product_cor_metric_observed = -10000000000
for i in observed_variables:
current_obs_confounder = self._data[i]
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_obs_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_obs_confounder.corr(treatment_values)
product_cor_metric_current = correlation_y * correlation_t
if product_cor_metric_current >= product_cor_metric_observed:
product_cor_metric_observed = product_cor_metric_current
correlation_t_observed = correlation_t
correlation_y_observed = correlation_y
# The user has an option to give the effect_strength_on_y and effect_strength_on_t, which are then used instead of the maximum correlation with treatment and outcome among the observed variables, as they specify the desired effect.
if self.kappa_t is not None:
correlation_t_observed = self.kappa_t
if self.kappa_y is not None:
correlation_y_observed = self.kappa_y
# Choosing a c_star based on the data.
# The correlations stop increasing after a certain value of c_star, i.e., they plateau, and we choose c_star to be the value at which they plateau.
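# For example (derived from the loop below): with c_star_max=1000 the candidate values of c_star are
# 0, 100, ..., 900, and we pick the first candidate at which the gain in correlation_y over the
# previous candidate drops to convergence_threshold or below.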
correlation_y_list = []
correlation_t_list = []
product_cor_metric_simulated_list = []
x_list = []
step = int(c_star_max / 10)
for i in range(0, int(c_star_max), step):
c1 = math.sqrt(i)
c2 = c1
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
correlation_y_list.append(correlation_y)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
correlation_t_list.append(correlation_t)
product_cor_metric_simulated = correlation_y * correlation_t
product_cor_metric_simulated_list.append(product_cor_metric_simulated)
x_list.append(i)
index = 1
while index < len(correlation_y_list):
if (correlation_y_list[index] - correlation_y_list[index - 1]) <= convergence_threshold:
c_star = x_list[index]
break
index = index + 1
# Choosing c1 and c2 based on the hyperbolic relationship once c_star is chosen, by going over various combinations of c1 and c2 values and choosing the combination
# which maintains the minimum distance between the product of correlations of the simulated variable and the product of maximum correlations of one of the observed variables
# and additionally checks if the ratio of the weights are such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
# c1_final and c2_final are initialised to the values on the hyperbolic curve such that c1_final = c2_final and c1_final*c2_final = c_star
c1_final = math.sqrt(c_star)
c2_final = math.sqrt(c_star)
# initialising min_distance_between_product_cor_metrics to be a value greater than 1
min_distance_between_product_cor_metrics = 1.5
i = 0.05
threshold = c_star / 0.05
while i <= threshold:
c2 = i
c1 = c_star / c2
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
product_cor_metric_simulated = correlation_y * correlation_t
if min_distance_between_product_cor_metrics >= abs(
product_cor_metric_simulated - product_cor_metric_observed
):
min_distance_between_product_cor_metrics = abs(
product_cor_metric_simulated - product_cor_metric_observed
)
additional_condition = correlation_y_observed / correlation_t_observed
if ((c1 / c2) <= (additional_condition + 0.3 * additional_condition)) and (
(c1 / c2) >= (additional_condition - 0.3 * additional_condition)
): # choose minimum positive value
c1_final = c1
c2_final = c2
i = i * 1.5
"""#closed form solution
print("c_star_max before closed form", c_star_max)
if max_correlation_with_t == -1000:
c2 = 0
c1 = c_star_max
else:
additional_condition = abs(max_correlation_with_y/max_correlation_with_t)
print("additional_condition", additional_condition)
c2 = math.sqrt(c_star_max/additional_condition)
c1 = c_star_max/c2"""
final_U = self.generate_confounder_from_residuals(c1_final, c2_final, d_y, d_t, X)
return final_U
def generate_confounder_from_residuals(self, c1, c2, d_y, d_t, X):
"""
This function takes the residuals from the treatment and outcome model and their coefficients and simulates the intermediate random variable U by taking
the row wise normal distribution corresponding to each residual value and then debiasing the intermediate variable to get the final variable.
:param c1: coefficient to the residual from the outcome model
:type float
:param c2: coefficient to the residual from the treatment model
:type float
:param d_y: residuals from the outcome model
:type list
:param d_t: residuals from the treatment model
:type list
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
U = []
for j in range(len(d_t)):
simulated_variable_mean = c1 * d_y[j] + c2 * d_t[j]
simulated_variable_stddev = 1
U.append(np.random.normal(simulated_variable_mean, simulated_variable_stddev, 1))
U = np.array(U)
model = sm.OLS(U, X)
results = model.fit()
U = U.reshape(
-1,
)
final_U = U - results.fittedvalues.values
final_U = pd.Series(final_U)
return final_U
| anusha0409 | 81841c697bd5e80ecf9e731432305f6186666f1f | bb446c333f2256074304b0dec9cb5628d284b542 | can be performed --> will be automatically chosen | amit-sharma | 376 |
py-why/dowhy | 478 | Adding Non Linear Sensitivity Analysis | This PR implements the non-parametric sensitivity analysis from Chernozhukov et al. https://arxiv.org/abs/2112.13398
It implements two sensitivity analyzers:
1. For Partial Linear DGPs and estimators like LinearDML
2. For general non-parametric DGPs and estimators like KernelDML.
The notebook in this PR provides an introduction on how the sensitivity bounds are calculated for the partial linear case. For the general nonparametric DGPs, we need to estimate a special function called the Reisz representer. For binary treatment, it is exactly the difference in outcome weighted by propensity score. So we provide two options to learn the Reisz representer, 1) plugin_reisz that uses the propensity score; and 2) general estimator that uses a custom loss function. These two are in the file reisz.py.
Briefly, the sensitivity bounds depend on two parameters that denote the effect of the unobserved confounder on treatment and outcome. That's why we use the same API as for the `add_unobserved_common_cause` method and add this sensitivity analysis as a possible simulation method="non-parametric-partial-R2". The format of the plots is identical to those from the "linear-partial-r2" simulation method that is already implemented.
We provide two modes for the user.
1) User specifies the effect strength parameters themselves, as a range of values.
2) User benchmarks the effect strength parameters as a multiple of the same parameters for the observed common causes.
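For illustration, here is a minimal usage sketch of mode 2 (benchmarking) with the non-parametric method; the dataset helper, estimator choice, and parameter values below are only assumptions for the example, not prescriptions from this PR:

```python
import dowhy.datasets
from dowhy import CausalModel

# Toy dataset with five observed common causes W0..W4 (illustrative only)
data = dowhy.datasets.linear_dataset(
    beta=10, num_common_causes=5, num_samples=1000, treatment_is_binary=True
)
model = CausalModel(
    data=data["df"],
    treatment=data["treatment_name"],
    outcome=data["outcome_name"],
    graph=data["gml_graph"],
)
identified_estimand = model.identify_effect(proceed_when_unidentifiable=True)
# Any DML-style estimator can be used here; KernelDML is just an example
estimate = model.estimate_effect(
    identified_estimand,
    method_name="backdoor.econml.dml.KernelDML",
    method_params={"init_params": {}, "fit_params": {}},
)
# Sensitivity analysis: bound the unobserved confounder by a multiple of W1's strength
refutation = model.refute_estimate(
    identified_estimand,
    estimate,
    method_name="add_unobserved_common_cause",
    simulation_method="non-parametric-partial-R2",
    benchmark_common_causes=["W1"],
    effect_fraction_on_treatment=[1, 2, 3],
    effect_fraction_on_outcome=[1, 2, 3],
)
```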
Signed-off-by: anusha <anushaagarwal2000.com> | null | 2022-06-20 14:37:11+00:00 | 2022-09-16 03:57:26+00:00 | dowhy/causal_refuters/add_unobserved_common_cause.py | import copy
import logging
import math
import numpy as np
import pandas as pd
import scipy.stats
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from tqdm.auto import tqdm
from dowhy.causal_estimator import CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_refuter import CausalRefutation, CausalRefuter
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer
class AddUnobservedCommonCause(CausalRefuter):
"""Add an unobserved confounder for refutation.
Supports additional parameters that can be specified in the refute_estimate() method.
- 'confounders_effect_on_treatment': how the simulated confounder affects the value of treatment. This can be linear (for continuous treatment) or binary_flip (for binary treatment)
- 'confounders_effect_on_outcome': how the simulated confounder affects the value of outcome. This can be linear (for continuous outcome) or binary_flip (for binary outcome)
- 'effect_strength_on_treatment': parameter for the strength of the effect of simulated confounder on treatment. For linear effect, it is the regression coefficient. For binary_flip, it is the probability that simulated confounder's effect flips the value of treatment from 0 to 1 (or vice-versa).
- 'effect_strength_on_outcome': parameter for the strength of the effect of simulated confounder on outcome. For linear effect, it is the regression coefficient. For binary_flip, it is the probability that simulated confounder's effect flips the value of outcome from 0 to 1 (or vice-versa).
TODO: Needs an interpretation module
"""
def __init__(self, *args, **kwargs):
"""
Initialize the parameters required for the refuter.
If effect_strength_on_treatment or effect_strength_on_outcome is not
given, it is calculated automatically as a range between the
minimum and maximum effect strength of observed confounders on treatment
and outcome respectively.
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param effect_strength_on_treatment: float, numpy.ndarray: This refers to the strength of the confounder on treatment. For a linear effect, it behaves like the regression coefficient. For a binary flip, it is the probability with which it can invert the value of the treatment.
:param effect_strength_on_outcome: float, numpy.ndarray: This refers to the strength of the confounder on outcome. For a linear effect, it behaves like the regression coefficient. For a binary flip, it is the probability with which it can invert the value of the outcome.
:param effect_fraction_on_treatment: float: If effect_strength_on_treatment is not provided, this parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param effect_fraction_on_outcome: float: If effect_strength_on_outcome is not provided, this parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:param simulated_method_name: method type to add unobserved common cause. "linear-partial-R2" for linear sensitivity analysis
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1)
if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome in order to reduce the estimate by 100% i.e bring it down to 0.
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = False)
:param benchmark_common_causes: names of variables for bounding strength of confounders
:param significance_level: confidence interval for statistical inference(default = 0.05)
:param null_hypothesis_effect: assumed effect under the null hypothesis
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
To override the setting, set plot_estimate = False.
"""
super().__init__(*args, **kwargs)
self.effect_on_t = (
kwargs["confounders_effect_on_treatment"] if "confounders_effect_on_treatment" in kwargs else "binary_flip"
)
self.effect_on_y = (
kwargs["confounders_effect_on_outcome"] if "confounders_effect_on_outcome" in kwargs else "linear"
)
self.kappa_t = kwargs["effect_strength_on_treatment"] if "effect_strength_on_treatment" in kwargs else None
self.kappa_y = kwargs["effect_strength_on_outcome"] if "effect_strength_on_outcome" in kwargs else None
self.frac_strength_treatment = (
kwargs["effect_fraction_on_treatment"] if "effect_fraction_on_treatment" in kwargs else 1
)
self.frac_strength_outcome = (
kwargs["effect_fraction_on_outcome"] if "effect_fraction_on_outcome" in kwargs else 1
)
self.simulated_method_name = (
kwargs["simulated_method_name"] if "simulated_method_name" in kwargs else "linear_based"
)
self.plotmethod = kwargs["plotmethod"] if "plotmethod" in kwargs else "colormesh"
self.percent_change_estimate = kwargs["percent_change_estimate"] if "percent_change_estimate" in kwargs else 1.0
self.significance_level = kwargs["significance_level"] if "significance_level" in kwargs else 0.05
self.confounder_increases_estimate = (
kwargs["confounder_increases_estimate"] if "confounder_increases_estimate" in kwargs else False
)
self.benchmark_common_causes = (
kwargs["benchmark_common_causes"] if "benchmark_common_causes" in kwargs else None
)
self.null_hypothesis_effect = kwargs["null_hypothesis_effect"] if "null_hypothesis_effect" in kwargs else 0
self.plot_estimate = kwargs["plot_estimate"] if "plot_estimate" in kwargs else True
self.logger = logging.getLogger(__name__)
def infer_default_kappa_t(self, len_kappa_t=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_t."
+ " Provide a value for kappa_t"
)
t = self._data[self._treatment_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_t == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
tmodel = LogisticRegression().fit(observed_common_causes, t)
tpred = tmodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
tcap = tmodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(tcap - tpred)) / tpred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_t == "linear":
# Estimating the regression coefficient from standardized features to t
corrcoef_var_t = np.corrcoef(observed_common_causes, t, rowvar=False)[-1, :-1]
std_dev_t = np.std(t)[0]
max_coeff = max(corrcoef_var_t) * std_dev_t
min_coeff = min(corrcoef_var_t) * std_dev_t
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_treatment)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_t
self.logger.info("(Min, Max) kappa_t for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _compute_min_max_coeff(self, min_coeff, max_coeff, effect_strength_fraction):
max_coeff = effect_strength_fraction * max_coeff
min_coeff = effect_strength_fraction * min_coeff
return min_coeff, max_coeff
def infer_default_kappa_y(self, len_kappa_y=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_y."
+ " Provide a value for kappa_y"
)
y = self._data[self._outcome_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_y == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
ymodel = LogisticRegression().fit(observed_common_causes, y)
ypred = ymodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
ycap = ymodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(ycap - ypred)) / ypred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_y == "linear":
corrcoef_var_y = np.corrcoef(observed_common_causes, y, rowvar=False)[-1, :-1]
std_dev_y = np.std(y)[0]
max_coeff = max(corrcoef_var_y) * std_dev_y
min_coeff = min(corrcoef_var_y) * std_dev_y
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_outcome)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_y
self.logger.info("(Min, Max) kappa_y for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def refute_estimate(self, show_progress_bar=False):
"""
This function attempts to add an unobserved common cause to the outcome and the treatment. At present, the behavior is implemented only for one-dimensional continuous
and binary variables. The function accepts either single-valued inputs or a range of inputs; it then looks at the data type of the input and decides on the course of
action.
:return: CausalRefuter: An object that contains the estimated effect, the new effect, and the name of the refutation used.
"""
if self.simulated_method_name == "linear-partial-R2":
if not (isinstance(self._estimate.estimator, LinearRegressionEstimator)):
raise NotImplementedError(
"Currently only LinearRegressionEstimator is supported for Sensitivity Analysis"
)
if len(self._estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
if self.frac_strength_outcome == 1:
self.frac_strength_outcome = self.frac_strength_treatment
analyzer = LinearSensitivityAnalyzer(
estimator=self._estimate.estimator,
data=self._data,
treatment_name=self._treatment_name,
percent_change_estimate=self.percent_change_estimate,
significance_level=self.significance_level,
benchmark_common_causes=self.benchmark_common_causes,
null_hypothesis_effect=self.null_hypothesis_effect,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
common_causes_order=self._estimate.estimator._observed_common_causes.columns,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.kappa_t is None:
self.kappa_t = self.infer_default_kappa_t()
if self.kappa_y is None:
self.kappa_y = self.infer_default_kappa_y()
if not isinstance(self.kappa_t, (list, np.ndarray)) and not isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with single value inputs
new_data = copy.deepcopy(self._data)
new_data = self.include_confounders_effect(new_data, self.kappa_t, self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(new_data, self._target_estimand, self._estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
refute.new_effect_array = np.array(new_effect.value)
refute.new_effect = new_effect.value
refute.add_refuter(self)
return refute
else: # Deal with multiple value inputs
if isinstance(self.kappa_t, (list, np.ndarray)) and isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with range inputs
# Get a 2D matrix of values
# x,y = np.meshgrid(self.kappa_t, self.kappa_y) # x,y are both MxN
results_matrix = np.random.rand(
len(self.kappa_t), len(self.kappa_y)
) # Matrix to hold all the results of NxM
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
for j in range(len(self.kappa_y)):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y[j])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value,
new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause",
)
results_matrix[i][j] = refute.new_effect # Populate the results
refute.new_effect_array = results_matrix
refute.new_effect = (np.min(results_matrix), np.max(results_matrix))
# Store the values into the refute object
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
oe = self._estimate.value
contour_levels = [oe / 4.0, oe / 2.0, (3.0 / 4) * oe, oe]
contour_levels.extend([0, np.min(results_matrix), np.max(results_matrix)])
if self.plotmethod == "contour":
cp = plt.contourf(self.kappa_y, self.kappa_t, results_matrix, levels=sorted(contour_levels))
# Adding a label on the contour line for the original estimate
fmt = {}
trueeffect_index = np.where(cp.levels == oe)[0][0]
fmt[cp.levels[trueeffect_index]] = "Estimated Effect"
# Label every other level using strings
plt.clabel(cp, [cp.levels[trueeffect_index]], inline=True, fmt=fmt)
plt.colorbar(cp)
elif self.plotmethod == "colormesh":
cp = plt.pcolormesh(self.kappa_y, self.kappa_t, results_matrix, shading="nearest")
plt.colorbar(cp, ticks=contour_levels)
ax.yaxis.set_ticks(self.kappa_t)
ax.xaxis.set_ticks(self.kappa_y)
plt.xticks(rotation=45)
ax.set_title("Effect of Unobserved Common Cause")
ax.set_ylabel("Value of Linear Constant on Treatment")
ax.set_xlabel("Value of Linear Constant on Outcome")
plt.show()
return refute
elif isinstance(self.kappa_t, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_t))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_t, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Treatment")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
elif isinstance(self.kappa_y, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_y))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_y)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t, self.kappa_y[i])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_y, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Outcome")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
def include_confounders_effect(self, new_data, kappa_t, kappa_y):
"""
This function deals with the change in the value of the data due to the effect of the unobserved confounder.
In the case of a binary flip, we flip only if the random number is greater than the threshold set.
In the case of a linear effect, we use the variable as the linear regression constant.
:param new_data: pandas.DataFrame: The data to be changed due to the effects of the unobserved confounder.
:param kappa_t: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:param kappa_y: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:return: pandas.DataFrame: The DataFrame that includes the effects of the unobserved confounder.
"""
num_rows = self._data.shape[0]
stdnorm = scipy.stats.norm()
w_random = stdnorm.rvs(num_rows)
if self.effect_on_t == "binary_flip":
alpha = 2 * kappa_t - 1 if kappa_t >= 0.5 else 1 - 2 * kappa_t
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_t >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._treatment_name] = (
1 - new_data.loc[rel_interval <= w_random, self._treatment_name]
)
for tname in self._treatment_name:
if pd.api.types.is_bool_dtype(self._data[tname]):
new_data = new_data.astype({tname: "bool"}, copy=False)
elif self.effect_on_t == "linear":
confounder_t_effect = kappa_t * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._treatment_name] = new_data[self._treatment_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_t_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
if self.effect_on_y == "binary_flip":
alpha = 2 * kappa_y - 1 if kappa_y >= 0.5 else 1 - 2 * kappa_y
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_y >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._outcome_name] = (
1 - new_data.loc[rel_interval <= w_random, self._outcome_name]
)
for yname in self._outcome_name:
if pd.api.types.is_bool_dtype(self._data[yname]):
new_data = new_data.astype({yname: "bool"}, copy=False)
elif self.effect_on_y == "linear":
confounder_y_effect = (-1) * kappa_y * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._outcome_name] = new_data[self._outcome_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_y_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
return new_data
def include_simulated_confounder(self, convergence_threshold=0.1, c_star_max=1000):
"""
This function simulates an unobserved confounder based on the data using the following steps:
1. It calculates the "residuals" from the treatment and outcome model
i.) The outcome model has outcome as the dependent variable and all the observed variables including treatment as independent variables
ii.) The treatment model has treatment as the dependent variable and all the observed variables as independent variables.
2. U is an intermediate random variable drawn from the normal distribution with the weighted average of residuals as mean and a unit variance
U ~ N(c1*d_y + c2*d_t, 1)
where
*d_y and d_t are residuals from the treatment and outcome model
*c1 and c2 are coefficients to the residuals
3. The final U, which is the simulated unobserved confounder is obtained by debiasing the intermediate variable U by residualising it with X
Choosing the coefficients c1 and c2:
The coefficients are chosen based on these basic assumptions:
1. There is a hyperbolic relationship satisfying c1*c2 = c_star
2. c_star is chosen from a range of possible values based on the correlation of the obtained simulated variable with outcome and treatment.
3. The product of correlations with treatment and outcome should be at a minimum distance to the maximum correlations with treatment and outcome in any of the observed confounders
4. The ratio of the weights should be such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
:param c_star_max: The maximum possible value for the hyperbolic curve on which the coefficients to the residuals lie. It defaults to 1000 in the code if not specified by the user.
:type int
:param convergence_threshold: The threshold to check the plateauing of the correlation while selecting a c_star. It defaults to 0.1 in the code if not specified by the user
:type float
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
# Obtaining the list of observed variables
required_variables = True
observed_variables = self.choose_variables(required_variables)
observed_variables_with_treatment_and_outcome = observed_variables + self._treatment_name + self._outcome_name
# Taking a subset of the dataframe that has only observed variables
self._data = self._data[observed_variables_with_treatment_and_outcome]
# Residuals from the outcome model obtained by fitting a linear model
y = self._data[self._outcome_name[0]]
observed_variables_with_treatment = observed_variables + self._treatment_name
X = self._data[observed_variables_with_treatment]
model = sm.OLS(y, X.astype("float"))
results = model.fit()
residuals_y = y - results.fittedvalues
d_y = list(pd.Series(residuals_y))
# Residuals from the treatment model obtained by fitting a linear model
t = self._data[self._treatment_name[0]].astype("int64")
X = self._data[observed_variables]
model = sm.OLS(t, X)
results = model.fit()
residuals_t = t - results.fittedvalues
d_t = list(pd.Series(residuals_t))
# Initialising product_cor_metric_observed with a very low value, since we are searching for the maximum
product_cor_metric_observed = -10000000000
for i in observed_variables:
current_obs_confounder = self._data[i]
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_obs_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_obs_confounder.corr(treatment_values)
product_cor_metric_current = correlation_y * correlation_t
if product_cor_metric_current >= product_cor_metric_observed:
product_cor_metric_observed = product_cor_metric_current
correlation_t_observed = correlation_t
correlation_y_observed = correlation_y
# The user has an option to give the effect_strength_on_y and effect_strength_on_t, which are then used instead of the maximum correlation with treatment and outcome among the observed variables, as they specify the desired effect.
if self.kappa_t is not None:
correlation_t_observed = self.kappa_t
if self.kappa_y is not None:
correlation_y_observed = self.kappa_y
# Choosing a c_star based on the data.
# The correlations stop increasing after a certain value of c_star, i.e., they plateau, and we choose c_star to be the value at which they plateau.
correlation_y_list = []
correlation_t_list = []
product_cor_metric_simulated_list = []
x_list = []
step = int(c_star_max / 10)
for i in range(0, int(c_star_max), step):
c1 = math.sqrt(i)
c2 = c1
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
correlation_y_list.append(correlation_y)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
correlation_t_list.append(correlation_t)
product_cor_metric_simulated = correlation_y * correlation_t
product_cor_metric_simulated_list.append(product_cor_metric_simulated)
x_list.append(i)
index = 1
while index < len(correlation_y_list):
if (correlation_y_list[index] - correlation_y_list[index - 1]) <= convergence_threshold:
c_star = x_list[index]
break
index = index + 1
# Choosing c1 and c2 based on the hyperbolic relationship once c_star is chosen, by going over various combinations of c1 and c2 values and choosing the combination
# which maintains the minimum distance between the product of correlations of the simulated variable and the product of maximum correlations of one of the observed variables
# and additionally checks if the ratio of the weights are such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
# c1_final and c2_final are initialised to the values on the hyperbolic curve such that c1_final = c2_final and c1_final*c2_final = c_star
c1_final = math.sqrt(c_star)
c2_final = math.sqrt(c_star)
# initialising min_distance_between_product_cor_metrics to be a value greater than 1
min_distance_between_product_cor_metrics = 1.5
i = 0.05
threshold = c_star / 0.05
while i <= threshold:
c2 = i
c1 = c_star / c2
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
product_cor_metric_simulated = correlation_y * correlation_t
if min_distance_between_product_cor_metrics >= abs(
product_cor_metric_simulated - product_cor_metric_observed
):
min_distance_between_product_cor_metrics = abs(
product_cor_metric_simulated - product_cor_metric_observed
)
additional_condition = correlation_y_observed / correlation_t_observed
if ((c1 / c2) <= (additional_condition + 0.3 * additional_condition)) and (
(c1 / c2) >= (additional_condition - 0.3 * additional_condition)
): # choose minimum positive value
c1_final = c1
c2_final = c2
i = i * 1.5
"""#closed form solution
print("c_star_max before closed form", c_star_max)
if max_correlation_with_t == -1000:
c2 = 0
c1 = c_star_max
else:
additional_condition = abs(max_correlation_with_y/max_correlation_with_t)
print("additional_condition", additional_condition)
c2 = math.sqrt(c_star_max/additional_condition)
c1 = c_star_max/c2"""
final_U = self.generate_confounder_from_residuals(c1_final, c2_final, d_y, d_t, X)
return final_U
def generate_confounder_from_residuals(self, c1, c2, d_y, d_t, X):
"""
This function takes the residuals from the treatment and outcome model and their coefficients and simulates the intermediate random variable U by taking
the row wise normal distribution corresponding to each residual value and then debiasing the intermediate variable to get the final variable.
:param c1: coefficient to the residual from the outcome model
:type float
:param c2: coefficient to the residual from the treatment model
:type float
:param d_y: residuals from the outcome model
:type list
:param d_t: residuals from the treatment model
:type list
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
U = []
for j in range(len(d_t)):
simulated_variable_mean = c1 * d_y[j] + c2 * d_t[j]
simulated_variable_stddev = 1
U.append(np.random.normal(simulated_variable_mean, simulated_variable_stddev, 1))
U = np.array(U)
model = sm.OLS(U, X)
results = model.fit()
U = U.reshape(
-1,
)
final_U = U - results.fittedvalues.values
final_U = pd.Series(final_U)
return final_U
| import copy
import logging
import math
import numpy as np
import pandas as pd
import scipy.stats
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from tqdm.auto import tqdm
import dowhy.causal_estimators.econml
from dowhy.causal_estimator import CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_refuter import CausalRefutation, CausalRefuter
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer
from dowhy.causal_refuters.non_parametric_sensitivity_analyzer import NonParametricSensitivityAnalyzer
from dowhy.causal_refuters.partial_linear_sensitivity_analyzer import PartialLinearSensitivityAnalyzer
class AddUnobservedCommonCause(CausalRefuter):
"""Add an unobserved confounder for refutation.
AddUnobservedCommonCause class supports three methods:
1) Simulation of an unobserved confounder
2) Linear partial R2 : Sensitivity Analysis for linear models.
3) Non-Parametric partial R2 based : Sensitivity Analysis for non-parametric models.
Supports additional parameters that can be specified in the refute_estimate() method.
"""
def __init__(self, *args, **kwargs):
"""
Initialize the parameters required for the refuter.
For direct_simulation, if effect_strength_on_treatment or effect_strength_on_outcome is not
given, it is calculated automatically as a range between the
minimum and maximum effect strength of observed confounders on treatment
and outcome respectively.
:param simulation_method: The method to use for simulating effect of unobserved confounder. Possible values are ["direct-simulation", "linear-partial-R2", "non-parametric-partial-R2"].
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param effect_strength_on_treatment: float, numpy.ndarray: [Used when simulation_method="direct-simulation"] Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param effect_strength_on_outcome: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param partial_r2_confounder_treatment: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param partial_r2_confounder_outcome: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome in order to reduce the estimate by 100% i.e bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = False). (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param significance_level: confidence interval for statistical inference(default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param num_splits: number of splits for cross validation. (default = 5). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_data : shuffle data or not before splitting into folds (default = False). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_random_seed: seed for randomly shuffling data. (relevant only for non-parametric-partial-R2 simulation method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
"""
super().__init__(*args, **kwargs)
self.simulation_method = kwargs["simulation_method"] if "simulation_method" in kwargs else "direct-simulation"
self.effect_on_t = (
kwargs["confounders_effect_on_treatment"] if "confounders_effect_on_treatment" in kwargs else "binary_flip"
)
self.effect_on_y = (
kwargs["confounders_effect_on_outcome"] if "confounders_effect_on_outcome" in kwargs else "linear"
)
if self.simulation_method == "direct-simulation":
self.kappa_t = kwargs["effect_strength_on_treatment"] if "effect_strength_on_treatment" in kwargs else None
self.kappa_y = kwargs["effect_strength_on_outcome"] if "effect_strength_on_outcome" in kwargs else None
elif self.simulation_method in ["linear-partial-R2", "non-parametric-partial-R2"]:
self.kappa_t = (
kwargs["partial_r2_confounder_treatment"] if "partial_r2_confounder_treatment" in kwargs else None
)
self.kappa_y = (
kwargs["partial_r2_confounder_outcome"] if "partial_r2_confounder_outcome" in kwargs else None
)
else:
raise ValueError(
"simulation method is not supported. Try direct-simulation, linear-partial-R2 or non-parametric-partial-R2"
)
self.frac_strength_treatment = (
kwargs["effect_fraction_on_treatment"] if "effect_fraction_on_treatment" in kwargs else 1
)
self.frac_strength_outcome = (
kwargs["effect_fraction_on_outcome"] if "effect_fraction_on_outcome" in kwargs else 1
)
self.plotmethod = kwargs["plotmethod"] if "plotmethod" in kwargs else "colormesh"
self.percent_change_estimate = kwargs["percent_change_estimate"] if "percent_change_estimate" in kwargs else 1.0
self.significance_level = kwargs["significance_level"] if "significance_level" in kwargs else 0.05
self.confounder_increases_estimate = (
kwargs["confounder_increases_estimate"] if "confounder_increases_estimate" in kwargs else False
)
self.benchmark_common_causes = (
kwargs["benchmark_common_causes"] if "benchmark_common_causes" in kwargs else None
)
self.null_hypothesis_effect = kwargs["null_hypothesis_effect"] if "null_hypothesis_effect" in kwargs else 0
self.plot_estimate = kwargs["plot_estimate"] if "plot_estimate" in kwargs else True
self.num_splits = kwargs["num_splits"] if "num_splits" in kwargs else 5
self.shuffle_data = kwargs["shuffle_data"] if "shuffle_data" in kwargs else False
self.shuffle_random_seed = kwargs["shuffle_random_seed"] if "shuffle_random_seed" in kwargs else None
self.alpha_s_estimator_param_list = (
kwargs["alpha_s_estimator_param_list"] if "alpha_s_estimator_param_list" in kwargs else None
)
self.alpha_s_estimator_list = kwargs["alpha_s_estimator_list"] if "alpha_s_estimator_list" in kwargs else None
self.g_s_estimator_list = kwargs["g_s_estimator_list"] if "g_s_estimator_list" in kwargs else None
self.g_s_estimator_param_list = (
kwargs["g_s_estimator_param_list"] if "g_s_estimator_param_list" in kwargs else None
)
self.plugin_reisz = kwargs["plugin_reisz"] if "plugin_reisz" in kwargs else False
self.logger = logging.getLogger(__name__)
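    # Illustrative usage sketch (comment only, not executed): assuming `model`, `identified_estimand`
    # and `estimate` come from an earlier CausalModel / identify_effect / estimate_effect workflow,
    # this refuter is typically reached through refute_estimate with the keyword arguments parsed above, e.g.
    #
    #   refutation = model.refute_estimate(
    #       identified_estimand,
    #       estimate,
    #       method_name="add_unobserved_common_cause",
    #       simulation_method="direct-simulation",
    #       confounders_effect_on_treatment="binary_flip",
    #       confounders_effect_on_outcome="linear",
    #       effect_strength_on_treatment=0.05,
    #       effect_strength_on_outcome=0.02,
    #   )
    #
    # The strength values here are placeholders. For the partial-R2 based methods, the strengths are
    # instead supplied through partial_r2_confounder_treatment / partial_r2_confounder_outcome, as parsed above.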
def infer_default_kappa_t(self, len_kappa_t=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_t."
+ " Provide a value for kappa_t"
)
t = self._data[self._treatment_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_t == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
tmodel = LogisticRegression().fit(observed_common_causes, t)
tpred = tmodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
tcap = tmodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(tcap - tpred)) / tpred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_t == "linear":
# Estimating the regression coefficient from standardized features to t
corrcoef_var_t = np.corrcoef(observed_common_causes, t, rowvar=False)[-1, :-1]
std_dev_t = np.std(t)[0]
max_coeff = max(corrcoef_var_t) * std_dev_t
min_coeff = min(corrcoef_var_t) * std_dev_t
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_treatment)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_t
self.logger.info("(Min, Max) kappa_t for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _compute_min_max_coeff(self, min_coeff, max_coeff, effect_strength_fraction):
max_coeff = effect_strength_fraction * max_coeff
min_coeff = effect_strength_fraction * min_coeff
return min_coeff, max_coeff
def infer_default_kappa_y(self, len_kappa_y=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_y."
+ " Provide a value for kappa_y"
)
y = self._data[self._outcome_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_y == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
ymodel = LogisticRegression().fit(observed_common_causes, y)
ypred = ymodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
ycap = ymodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(ycap - ypred)) / ypred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_y == "linear":
corrcoef_var_y = np.corrcoef(observed_common_causes, y, rowvar=False)[-1, :-1]
std_dev_y = np.std(y)[0]
max_coeff = max(corrcoef_var_y) * std_dev_y
min_coeff = min(corrcoef_var_y) * std_dev_y
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_outcome)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_y
self.logger.info("(Min, Max) kappa_y for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
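    # Worked example with hypothetical numbers: if the observed confounders yield
    # min_coeff = 0.1 and max_coeff = 0.5, then with len_kappa_y = 10 the step is 0.04 and the
    # method returns np.arange(0.1, 0.5, 0.04), i.e. ten candidate effect strengths spanning the
    # range observed for the measured confounders.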
def refute_estimate(self, show_progress_bar=False):
"""
This function attempts to add an unobserved common cause to the outcome and the treatment. At present, this is implemented for one-dimensional continuous
and binary variables. The function accepts either single values or ranges of values as input, inspects the data type of the input, and decides on the course of
action accordingly.
:return: CausalRefuter: An object that contains the estimated effect and a new effect and the name of the refutation used.
"""
if self.simulation_method == "linear-partial-R2":
if not (isinstance(self._estimate.estimator, LinearRegressionEstimator)):
raise NotImplementedError(
"Currently only LinearRegressionEstimator is supported for Sensitivity Analysis"
)
if len(self._estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
if self.frac_strength_outcome == 1:
self.frac_strength_outcome = self.frac_strength_treatment
analyzer = LinearSensitivityAnalyzer(
estimator=self._estimate.estimator,
data=self._data,
treatment_name=self._treatment_name,
percent_change_estimate=self.percent_change_estimate,
significance_level=self.significance_level,
benchmark_common_causes=self.benchmark_common_causes,
null_hypothesis_effect=self.null_hypothesis_effect,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
common_causes_order=self._estimate.estimator._observed_common_causes.columns,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.simulation_method == "non-parametric-partial-R2":
# If the estimator used is LinearDML, partially linear sensitivity analysis will be automatically chosen
if isinstance(self._estimate.estimator, dowhy.causal_estimators.econml.Econml):
if self._estimate.estimator._econml_methodname == "econml.dml.LinearDML":
analyzer = PartialLinearSensitivityAnalyzer(
estimator=self._estimate._estimator_object,
observed_common_causes=self._estimate.estimator._observed_common_causes,
treatment=self._estimate.estimator._treatment,
outcome=self._estimate.estimator._outcome,
alpha_s_estimator_param_list=self.alpha_s_estimator_param_list,
g_s_estimator_list=self.g_s_estimator_list,
g_s_estimator_param_list=self.g_s_estimator_param_list,
effect_strength_treatment=self.kappa_t,
effect_strength_outcome=self.kappa_y,
benchmark_common_causes=self.benchmark_common_causes,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
analyzer = NonParametricSensitivityAnalyzer(
estimator=self._estimate.estimator,
observed_common_causes=self._estimate.estimator._observed_common_causes,
treatment=self._estimate.estimator._treatment,
outcome=self._estimate.estimator._outcome,
alpha_s_estimator_list=self.alpha_s_estimator_list,
alpha_s_estimator_param_list=self.alpha_s_estimator_param_list,
g_s_estimator_list=self.g_s_estimator_list,
g_s_estimator_param_list=self.g_s_estimator_param_list,
effect_strength_treatment=self.kappa_t,
effect_strength_outcome=self.kappa_y,
benchmark_common_causes=self.benchmark_common_causes,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
theta_s=self._estimate.value,
plugin_reisz=self.plugin_reisz,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.kappa_t is None:
self.kappa_t = self.infer_default_kappa_t()
if self.kappa_y is None:
self.kappa_y = self.infer_default_kappa_y()
if not isinstance(self.kappa_t, (list, np.ndarray)) and not isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with single value inputs
new_data = copy.deepcopy(self._data)
new_data = self.include_confounders_effect(new_data, self.kappa_t, self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(new_data, self._target_estimand, self._estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
refute.new_effect_array = np.array(new_effect.value)
refute.new_effect = new_effect.value
refute.add_refuter(self)
return refute
else: # Deal with multiple value inputs
if isinstance(self.kappa_t, (list, np.ndarray)) and isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with range inputs
# Get a 2D matrix of values
# x,y = np.meshgrid(self.kappa_t, self.kappa_y) # x,y are both MxN
results_matrix = np.random.rand(
len(self.kappa_t), len(self.kappa_y)
) # Matrix to hold all the results of NxM
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
for j in range(len(self.kappa_y)):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y[j])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value,
new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause",
)
results_matrix[i][j] = refute.new_effect # Populate the results
refute.new_effect_array = results_matrix
refute.new_effect = (np.min(results_matrix), np.max(results_matrix))
# Store the values into the refute object
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
oe = self._estimate.value
contour_levels = [oe / 4.0, oe / 2.0, (3.0 / 4) * oe, oe]
contour_levels.extend([0, np.min(results_matrix), np.max(results_matrix)])
if self.plotmethod == "contour":
cp = plt.contourf(self.kappa_y, self.kappa_t, results_matrix, levels=sorted(contour_levels))
# Adding a label on the contour line for the original estimate
fmt = {}
trueeffect_index = np.where(cp.levels == oe)[0][0]
fmt[cp.levels[trueeffect_index]] = "Estimated Effect"
# Label every other level using strings
plt.clabel(cp, [cp.levels[trueeffect_index]], inline=True, fmt=fmt)
plt.colorbar(cp)
elif self.plotmethod == "colormesh":
cp = plt.pcolormesh(self.kappa_y, self.kappa_t, results_matrix, shading="nearest")
plt.colorbar(cp, ticks=contour_levels)
ax.yaxis.set_ticks(self.kappa_t)
ax.xaxis.set_ticks(self.kappa_y)
plt.xticks(rotation=45)
ax.set_title("Effect of Unobserved Common Cause")
ax.set_ylabel("Value of Linear Constant on Treatment")
ax.set_xlabel("Value of Linear Constant on Outcome")
plt.show()
return refute
elif isinstance(self.kappa_t, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_t))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_t, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Treatment")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
elif isinstance(self.kappa_y, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_y))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_y)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t, self.kappa_y[i])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_y, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Outcome")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
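    # Reading the result (sketch): for array-valued kappa inputs, `refute.new_effect` holds the
    # (min, max) of the re-estimated effects and `refute.new_effect_array` the full set of values,
    # so a quick robustness check could be, for example:
    #
    #   lo, hi = refute.new_effect
    #   sign_preserved = (lo > 0) == (hi > 0)  # True if the effect never changes sign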
def include_confounders_effect(self, new_data, kappa_t, kappa_y):
"""
This function deals with the change in the value of the data due to the effect of the unobserved confounder.
In the case of a binary flip, we flip only if the random number is greater than the threshold set.
In the case of a linear effect, we use the variable as the linear regression constant.
:param new_data: pandas.DataFrame: The data to be changed due to the effects of the unobserved confounder.
:param kappa_t: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:param kappa_y: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:return: pandas.DataFrame: The DataFrame that includes the effects of the unobserved confounder.
"""
num_rows = self._data.shape[0]
stdnorm = scipy.stats.norm()
w_random = stdnorm.rvs(num_rows)
if self.effect_on_t == "binary_flip":
alpha = 2 * kappa_t - 1 if kappa_t >= 0.5 else 1 - 2 * kappa_t
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_t >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._treatment_name] = (
1 - new_data.loc[rel_interval <= w_random, self._treatment_name]
)
for tname in self._treatment_name:
if pd.api.types.is_bool_dtype(self._data[tname]):
new_data = new_data.astype({tname: "bool"}, copy=False)
elif self.effect_on_t == "linear":
confounder_t_effect = kappa_t * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._treatment_name] = new_data[self._treatment_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_t_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
if self.effect_on_y == "binary_flip":
alpha = 2 * kappa_y - 1 if kappa_y >= 0.5 else 1 - 2 * kappa_y
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_y >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._outcome_name] = (
1 - new_data.loc[rel_interval <= w_random, self._outcome_name]
)
for yname in self._outcome_name:
if pd.api.types.is_bool_dtype(self._data[yname]):
new_data = new_data.astype({yname: "bool"}, copy=False)
elif self.effect_on_y == "linear":
confounder_y_effect = (-1) * kappa_y * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._outcome_name] = new_data[self._outcome_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_y_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
return new_data
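    # Worked example of the binary_flip threshold (hypothetical kappa_t = 0.2): alpha = 1 - 2 * 0.2 = 0.6,
    # stdnorm.interval(0.6) is roughly (-0.84, 0.84), so rows with w_random >= 0.84 get flipped --
    # about 20% of rows, matching the intended flip probability of 0.2.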
def include_simulated_confounder(self, convergence_threshold=0.1, c_star_max=1000):
"""
This function simulates an unobserved confounder based on the data using the following steps:
1. It calculates the "residuals" from the treatment and outcome model
i.) The outcome model has outcome as the dependent variable and all the observed variables including treatment as independent variables
ii.) The treatment model has treatment as the dependent variable and all the observed variables as independent variables.
2. U is an intermediate random variable drawn from the normal distribution with the weighted average of residuals as mean and a unit variance
U ~ N(c1*d_y + c2*d_t, 1)
where
*d_y and d_t are residuals from the treatment and outcome model
*c1 and c2 are coefficients to the residuals
3. The final U, which is the simulated unobserved confounder is obtained by debiasing the intermediate variable U by residualising it with X
Choosing the coefficients c1 and c2:
The coefficients are chosen based on these basic assumptions:
1. There is a hyperbolic relationship satisfying c1*c2 = c_star
2. c_star is chosen from a range of possible values based on the correlation of the obtained simulated variable with outcome and treatment.
3. The product of correlations with treatment and outcome should be at a minimum distance to the maximum correlations with treatment and outcome in any of the observed confounders
4. The ratio of the weights should be such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
:param c_star_max: The maximum possible value for the hyperbolic curve on which the coefficients to the residuals lie. It defaults to 1000 in the code if not specified by the user.
:type c_star_max: int
:param convergence_threshold: The threshold to check the plateauing of the correlation while selecting a c_star. It defaults to 0.1 in the code if not specified by the user.
:type convergence_threshold: float
:returns: The simulated values of the unobserved confounder based on the data
:rtype: pandas.core.series.Series
"""
# Obtaining the list of observed variables
required_variables = True
observed_variables = self.choose_variables(required_variables)
observed_variables_with_treatment_and_outcome = observed_variables + self._treatment_name + self._outcome_name
# Taking a subset of the dataframe that has only observed variables
self._data = self._data[observed_variables_with_treatment_and_outcome]
# Residuals from the outcome model obtained by fitting a linear model
y = self._data[self._outcome_name[0]]
observed_variables_with_treatment = observed_variables + self._treatment_name
X = self._data[observed_variables_with_treatment]
model = sm.OLS(y, X.astype("float"))
results = model.fit()
residuals_y = y - results.fittedvalues
d_y = list(pd.Series(residuals_y))
# Residuals from the treatment model obtained by fitting a linear model
t = self._data[self._treatment_name[0]].astype("int64")
X = self._data[observed_variables]
model = sm.OLS(t, X)
results = model.fit()
residuals_t = t - results.fittedvalues
d_t = list(pd.Series(residuals_t))
# Initialising product_cor_metric_observed with a really low value as finding maximum
product_cor_metric_observed = -10000000000
for i in observed_variables:
current_obs_confounder = self._data[i]
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_obs_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_obs_confounder.corr(treatment_values)
product_cor_metric_current = correlation_y * correlation_t
if product_cor_metric_current >= product_cor_metric_observed:
product_cor_metric_observed = product_cor_metric_current
correlation_t_observed = correlation_t
correlation_y_observed = correlation_y
# The user has an option to give the effect_strength_on_y and effect_strength_on_t, which can then be used instead of the maximum correlation with treatment and outcome in the observed variables, as it specifies the desired effect.
if self.kappa_t is not None:
correlation_t_observed = self.kappa_t
if self.kappa_y is not None:
correlation_y_observed = self.kappa_y
# Choosing a c_star based on the data.
# The correlations stop increasing upon increasing c_star after a certain value, i.e., they plateau, and we choose c_star to be the value at which they plateau.
correlation_y_list = []
correlation_t_list = []
product_cor_metric_simulated_list = []
x_list = []
step = int(c_star_max / 10)
for i in range(0, int(c_star_max), step):
c1 = math.sqrt(i)
c2 = c1
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
correlation_y_list.append(correlation_y)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
correlation_t_list.append(correlation_t)
product_cor_metric_simulated = correlation_y * correlation_t
product_cor_metric_simulated_list.append(product_cor_metric_simulated)
x_list.append(i)
index = 1
while index < len(correlation_y_list):
if (correlation_y_list[index] - correlation_y_list[index - 1]) <= convergence_threshold:
c_star = x_list[index]
break
index = index + 1
# Choosing c1 and c2 based on the hyperbolic relationship once c_star is chosen, by going over various combinations of c1 and c2 values and choosing the combination
# which maintains the minimum distance between the product of correlations of the simulated variable and the product of maximum correlations of one of the observed variables
# and additionally checks if the ratio of the weights is such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
# c1_final and c2_final are initialised to the values on the hyperbolic curve such that c1_final = c2_final and c1_final*c2_final = c_star
c1_final = math.sqrt(c_star)
c2_final = math.sqrt(c_star)
# initialising min_distance_between_product_cor_metrics to be a value greater than 1
min_distance_between_product_cor_metrics = 1.5
i = 0.05
threshold = c_star / 0.05
while i <= threshold:
c2 = i
c1 = c_star / c2
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
product_cor_metric_simulated = correlation_y * correlation_t
if min_distance_between_product_cor_metrics >= abs(
product_cor_metric_simulated - product_cor_metric_observed
):
min_distance_between_product_cor_metrics = abs(
product_cor_metric_simulated - product_cor_metric_observed
)
additional_condition = correlation_y_observed / correlation_t_observed
if ((c1 / c2) <= (additional_condition + 0.3 * additional_condition)) and (
(c1 / c2) >= (additional_condition - 0.3 * additional_condition)
): # choose minimum positive value
c1_final = c1
c2_final = c2
i = i * 1.5
"""#closed form solution
print("c_star_max before closed form", c_star_max)
if max_correlation_with_t == -1000:
c2 = 0
c1 = c_star_max
else:
additional_condition = abs(max_correlation_with_y/max_correlation_with_t)
print("additional_condition", additional_condition)
c2 = math.sqrt(c_star_max/additional_condition)
c1 = c_star_max/c2"""
final_U = self.generate_confounder_from_residuals(c1_final, c2_final, d_y, d_t, X)
return final_U
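    # Sketch of the coefficient search with illustrative numbers: if the plateau check picks
    # c_star = 400, the search starts at c1_final = c2_final = sqrt(400) = 20 and then walks the
    # hyperbola c1 * c2 = 400 (c2 = 0.05, 0.075, 0.1125, ...), keeping the pair whose simulated
    # confounder has a correlation product closest to that of the strongest observed confounder while
    # c1 / c2 stays within +/- 30% of correlation_y_observed / correlation_t_observed.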
def generate_confounder_from_residuals(self, c1, c2, d_y, d_t, X):
"""
This function takes the residuals from the treatment and outcome model and their coefficients and simulates the intermediate random variable U by taking
the row wise normal distribution corresponding to each residual value and then debiasing the intermediate variable to get the final variable.
:param c1: coefficient to the residual from the outcome model
:type float
:param c2: coefficient to the residual from the treatment model
:type float
:param d_y: residuals from the outcome model
:type list
:param d_t: residuals from the treatment model
:type list
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
U = []
for j in range(len(d_t)):
simulated_variable_mean = c1 * d_y[j] + c2 * d_t[j]
simulated_variable_stddev = 1
U.append(np.random.normal(simulated_variable_mean, simulated_variable_stddev, 1))
U = np.array(U)
model = sm.OLS(U, X)
results = model.fit()
U = U.reshape(
-1,
)
final_U = U - results.fittedvalues.values
final_U = pd.Series(final_U)
return final_U
| anusha0409 | 81841c697bd5e80ecf9e731432305f6186666f1f | bb446c333f2256074304b0dec9cb5628d284b542 | Q: given that the array effect_strength_on_treatment (and other parameters) have different interpretations by methods, does that mean that the user would have to pass in different array values based on what simulation_method they chose? Or is the same set of values valid and roughly similar enough that the user can change the simulation_method without also changing these other parameters?
Where I'm going is that, if the parameters have different interpretations and different values for different values of simulation_method, maybe there should be different user-facing methods for the different simulation_methods?
If there's a benefit to reusing structural code, then this code could be an internal helper function... | emrekiciman | 377 |
py-why/dowhy | 478 | Adding Non Linear Sensitivity Analysis | This PR implements the non-parametric sensitivity analysis from Chernozhukov et al. https://arxiv.org/abs/2112.13398
It implements two sensitivity analyzers:
1. For Partial Linear DGPs and estimators like LinearDML
2. For general non-parametric DGPs and estimators like KernelDML.
The notebook in this PR provides an introduction on how the sensitivity bounds are calculated for the partial linear case. For the general nonparametric DGPs, we need to estimate a special function called the Reisz representer. For binary treatment, it is exactly the difference in outcome weighted by propensity score. So we provide two options to learn the Reisz representer, 1) plugin_reisz that uses the propensity score; and 2) general estimator that uses a custom loss function. These two are in the file reisz.py.
Briefly, the sensitivity bounds depend on two parameters that denote the effect of the unobserved confounder on treatment and outcome. That's why we use the same API as for the `add_unobserved_common_cause` method and add this sensitivity analysis as a possible simulation method="non-parametric-partial-R2". The format of the plots is identical to those from the "linear-partial-r2" simulation method that is already implemented.
We provide two modes for the user.
1) User specifies the effect strength parameters themselves, as a range of values.
2) User benchmarks the effect strength parameters as a multiple of the same parameters for the observed common causes.
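A minimal invocation sketch for the benchmarking mode (assuming a fitted CausalModel `model` plus an `identified_estimand` and `estimate` from earlier steps; the confounder name "W0" and the parameter values are placeholders, not part of this PR):

    refutation = model.refute_estimate(
        identified_estimand,
        estimate,
        method_name="add_unobserved_common_cause",
        simulation_method="non-parametric-partial-R2",
        benchmark_common_causes=["W0"],   # observed confounder used to benchmark the unobserved one
        effect_fraction_on_treatment=1,   # simulated confounder as strong on treatment as the benchmark
        effect_fraction_on_outcome=1,     # and equally strong on the outcome
    )

In the first mode, partial_r2_confounder_treatment / partial_r2_confounder_outcome are passed directly (as scalars or ranges) instead of the benchmarking arguments.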
Signed-off-by: anusha <anushaagarwal2000.com> | null | 2022-06-20 14:37:11+00:00 | 2022-09-16 03:57:26+00:00 | dowhy/causal_refuters/add_unobserved_common_cause.py | import copy
import logging
import math
import numpy as np
import pandas as pd
import scipy.stats
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from tqdm.auto import tqdm
from dowhy.causal_estimator import CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_refuter import CausalRefutation, CausalRefuter
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer
class AddUnobservedCommonCause(CausalRefuter):
"""Add an unobserved confounder for refutation.
Supports additional parameters that can be specified in the refute_estimate() method.
- 'confounders_effect_on_treatment': how the simulated confounder affects the value of treatment. This can be linear (for continuous treatment) or binary_flip (for binary treatment)
- 'confounders_effect_on_outcome': how the simulated confounder affects the value of outcome. This can be linear (for continuous outcome) or binary_flip (for binary outcome)
- 'effect_strength_on_treatment': parameter for the strength of the effect of simulated confounder on treatment. For linear effect, it is the regression coefficient. For binary_flip, it is the probability that simulated confounder's effect flips the value of treatment from 0 to 1 (or vice-versa).
- 'effect_strength_on_outcome': parameter for the strength of the effect of simulated confounder on outcome. For linear effect, it is the regression coefficient. For binary_flip, it is the probability that simulated confounder's effect flips the value of outcome from 0 to 1 (or vice-versa).
TODO: Needs an interpretation module
"""
def __init__(self, *args, **kwargs):
"""
Initialize the parameters required for the refuter.
If effect_strength_on_treatment or effect_strength_on_outcome is not
given, it is calculated automatically as a range between the
minimum and maximum effect strength of observed confounders on treatment
and outcome respectively.
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param effect_strength_on_treatment: float, numpy.ndarray: This refers to the strength of the confounder on treatment. For a linear effect, it behaves like the regression coefficient. For a binary flip, it is the probability with which it can invert the value of the treatment.
:param effect_strength_on_outcome: float, numpy.ndarray: This refers to the strength of the confounder on outcome. For a linear effect, it behaves like the regression coefficient. For a binary flip, it is the probability with which it can invert the value of the outcome.
:param effect_fraction_on_treatment: float: If effect_strength_on_treatment is not provided, this parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param effect_fraction_on_outcome: float: If effect_strength_on_outcome is not provided, this parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:param simulated_method_name: method type to add unobserved common cause. "linear-partial-R2" for linear sensitivity analysis
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1)
if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome in order to reduce the estimate by 100%, i.e., bring it down to 0.
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = False)
:param benchmark_common_causes: names of variables for bounding strength of confounders
:param significance_level: confidence interval for statistical inference (default = 0.05)
:param null_hypothesis_effect: assumed effect under the null hypothesis
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
To override the setting, set plot_estimate = False.
"""
super().__init__(*args, **kwargs)
self.effect_on_t = (
kwargs["confounders_effect_on_treatment"] if "confounders_effect_on_treatment" in kwargs else "binary_flip"
)
self.effect_on_y = (
kwargs["confounders_effect_on_outcome"] if "confounders_effect_on_outcome" in kwargs else "linear"
)
self.kappa_t = kwargs["effect_strength_on_treatment"] if "effect_strength_on_treatment" in kwargs else None
self.kappa_y = kwargs["effect_strength_on_outcome"] if "effect_strength_on_outcome" in kwargs else None
self.frac_strength_treatment = (
kwargs["effect_fraction_on_treatment"] if "effect_fraction_on_treatment" in kwargs else 1
)
self.frac_strength_outcome = (
kwargs["effect_fraction_on_outcome"] if "effect_fraction_on_outcome" in kwargs else 1
)
self.simulated_method_name = (
kwargs["simulated_method_name"] if "simulated_method_name" in kwargs else "linear_based"
)
self.plotmethod = kwargs["plotmethod"] if "plotmethod" in kwargs else "colormesh"
self.percent_change_estimate = kwargs["percent_change_estimate"] if "percent_change_estimate" in kwargs else 1.0
self.significance_level = kwargs["significance_level"] if "significance_level" in kwargs else 0.05
self.confounder_increases_estimate = (
kwargs["confounder_increases_estimate"] if "confounder_increases_estimate" in kwargs else False
)
self.benchmark_common_causes = (
kwargs["benchmark_common_causes"] if "benchmark_common_causes" in kwargs else None
)
self.null_hypothesis_effect = kwargs["null_hypothesis_effect"] if "null_hypothesis_effect" in kwargs else 0
self.plot_estimate = kwargs["plot_estimate"] if "plot_estimate" in kwargs else True
self.logger = logging.getLogger(__name__)
def infer_default_kappa_t(self, len_kappa_t=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_t."
+ " Provide a value for kappa_t"
)
t = self._data[self._treatment_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_t == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
tmodel = LogisticRegression().fit(observed_common_causes, t)
tpred = tmodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
tcap = tmodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(tcap - tpred)) / tpred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_t == "linear":
# Estimating the regression coefficient from standardized features to t
corrcoef_var_t = np.corrcoef(observed_common_causes, t, rowvar=False)[-1, :-1]
std_dev_t = np.std(t)[0]
max_coeff = max(corrcoef_var_t) * std_dev_t
min_coeff = min(corrcoef_var_t) * std_dev_t
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_treatment)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_t
self.logger.info("(Min, Max) kappa_t for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _compute_min_max_coeff(self, min_coeff, max_coeff, effect_strength_fraction):
max_coeff = effect_strength_fraction * max_coeff
min_coeff = effect_strength_fraction * min_coeff
return min_coeff, max_coeff
def infer_default_kappa_y(self, len_kappa_y=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_y."
+ " Provide a value for kappa_y"
)
y = self._data[self._outcome_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_y == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
ymodel = LogisticRegression().fit(observed_common_causes, y)
ypred = ymodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
ycap = ymodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(ycap - ypred)) / ypred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_y == "linear":
corrcoef_var_y = np.corrcoef(observed_common_causes, y, rowvar=False)[-1, :-1]
std_dev_y = np.std(y)[0]
max_coeff = max(corrcoef_var_y) * std_dev_y
min_coeff = min(corrcoef_var_y) * std_dev_y
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_outcome)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_y
self.logger.info("(Min, Max) kappa_y for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def refute_estimate(self, show_progress_bar=False):
"""
This function attempts to add an unobserved common cause to the outcome and the treatment. At present, this is implemented for one-dimensional continuous
and binary variables. The function accepts either single values or ranges of values as input, inspects the data type of the input, and decides on the course of
action accordingly.
:return: CausalRefuter: An object that contains the estimated effect and a new effect and the name of the refutation used.
"""
if self.simulated_method_name == "linear-partial-R2":
if not (isinstance(self._estimate.estimator, LinearRegressionEstimator)):
raise NotImplementedError(
"Currently only LinearRegressionEstimator is supported for Sensitivity Analysis"
)
if len(self._estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
if self.frac_strength_outcome == 1:
self.frac_strength_outcome = self.frac_strength_treatment
analyzer = LinearSensitivityAnalyzer(
estimator=self._estimate.estimator,
data=self._data,
treatment_name=self._treatment_name,
percent_change_estimate=self.percent_change_estimate,
significance_level=self.significance_level,
benchmark_common_causes=self.benchmark_common_causes,
null_hypothesis_effect=self.null_hypothesis_effect,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
common_causes_order=self._estimate.estimator._observed_common_causes.columns,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.kappa_t is None:
self.kappa_t = self.infer_default_kappa_t()
if self.kappa_y is None:
self.kappa_y = self.infer_default_kappa_y()
if not isinstance(self.kappa_t, (list, np.ndarray)) and not isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with single value inputs
new_data = copy.deepcopy(self._data)
new_data = self.include_confounders_effect(new_data, self.kappa_t, self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(new_data, self._target_estimand, self._estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
refute.new_effect_array = np.array(new_effect.value)
refute.new_effect = new_effect.value
refute.add_refuter(self)
return refute
else: # Deal with multiple value inputs
if isinstance(self.kappa_t, (list, np.ndarray)) and isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with range inputs
# Get a 2D matrix of values
# x,y = np.meshgrid(self.kappa_t, self.kappa_y) # x,y are both MxN
results_matrix = np.random.rand(
len(self.kappa_t), len(self.kappa_y)
) # Matrix to hold all the results of NxM
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
for j in range(len(self.kappa_y)):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y[j])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value,
new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause",
)
results_matrix[i][j] = refute.new_effect # Populate the results
refute.new_effect_array = results_matrix
refute.new_effect = (np.min(results_matrix), np.max(results_matrix))
# Store the values into the refute object
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
oe = self._estimate.value
contour_levels = [oe / 4.0, oe / 2.0, (3.0 / 4) * oe, oe]
contour_levels.extend([0, np.min(results_matrix), np.max(results_matrix)])
if self.plotmethod == "contour":
cp = plt.contourf(self.kappa_y, self.kappa_t, results_matrix, levels=sorted(contour_levels))
# Adding a label on the contour line for the original estimate
fmt = {}
trueeffect_index = np.where(cp.levels == oe)[0][0]
fmt[cp.levels[trueeffect_index]] = "Estimated Effect"
# Label every other level using strings
plt.clabel(cp, [cp.levels[trueeffect_index]], inline=True, fmt=fmt)
plt.colorbar(cp)
elif self.plotmethod == "colormesh":
cp = plt.pcolormesh(self.kappa_y, self.kappa_t, results_matrix, shading="nearest")
plt.colorbar(cp, ticks=contour_levels)
ax.yaxis.set_ticks(self.kappa_t)
ax.xaxis.set_ticks(self.kappa_y)
plt.xticks(rotation=45)
ax.set_title("Effect of Unobserved Common Cause")
ax.set_ylabel("Value of Linear Constant on Treatment")
ax.set_xlabel("Value of Linear Constant on Outcome")
plt.show()
return refute
elif isinstance(self.kappa_t, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_t))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_t, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Treatment")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
elif isinstance(self.kappa_y, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_y))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_y)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t, self.kappa_y[i])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_y, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Outcome")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
def include_confounders_effect(self, new_data, kappa_t, kappa_y):
"""
This function deals with the change in the value of the data due to the effect of the unobserved confounder.
In the case of a binary flip, we flip only if the random number is greater than the threshold set.
In the case of a linear effect, we use the variable as the linear regression constant.
:param new_data: pandas.DataFrame: The data to be changed due to the effects of the unobserved confounder.
:param kappa_t: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:param kappa_y: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:return: pandas.DataFrame: The DataFrame that includes the effects of the unobserved confounder.
"""
num_rows = self._data.shape[0]
stdnorm = scipy.stats.norm()
w_random = stdnorm.rvs(num_rows)
if self.effect_on_t == "binary_flip":
alpha = 2 * kappa_t - 1 if kappa_t >= 0.5 else 1 - 2 * kappa_t
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_t >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._treatment_name] = (
1 - new_data.loc[rel_interval <= w_random, self._treatment_name]
)
for tname in self._treatment_name:
if pd.api.types.is_bool_dtype(self._data[tname]):
new_data = new_data.astype({tname: "bool"}, copy=False)
elif self.effect_on_t == "linear":
confounder_t_effect = kappa_t * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._treatment_name] = new_data[self._treatment_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_t_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
if self.effect_on_y == "binary_flip":
alpha = 2 * kappa_y - 1 if kappa_y >= 0.5 else 1 - 2 * kappa_y
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_y >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._outcome_name] = (
1 - new_data.loc[rel_interval <= w_random, self._outcome_name]
)
for yname in self._outcome_name:
if pd.api.types.is_bool_dtype(self._data[yname]):
new_data = new_data.astype({yname: "bool"}, copy=False)
elif self.effect_on_y == "linear":
confounder_y_effect = (-1) * kappa_y * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._outcome_name] = new_data[self._outcome_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_y_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
return new_data
def include_simulated_confounder(self, convergence_threshold=0.1, c_star_max=1000):
"""
This function simulates an unobserved confounder based on the data using the following steps:
1. It calculates the "residuals" from the treatment and outcome model
i.) The outcome model has outcome as the dependent variable and all the observed variables including treatment as independent variables
ii.) The treatment model has treatment as the dependent variable and all the observed variables as independent variables.
2. U is an intermediate random variable drawn from the normal distribution with the weighted average of residuals as mean and a unit variance
U ~ N(c1*d_y + c2*d_t, 1)
where
*d_y and d_t are residuals from the treatment and outcome model
*c1 and c2 are coefficients to the residuals
3. The final U, which is the simulated unobserved confounder is obtained by debiasing the intermediate variable U by residualising it with X
Choosing the coefficients c1 and c2:
The coefficients are chosen based on these basic assumptions:
1. There is a hyperbolic relationship satisfying c1*c2 = c_star
2. c_star is chosen from a range of possible values based on the correlation of the obtained simulated variable with outcome and treatment.
3. The product of correlations with treatment and outcome should be at a minimum distance to the maximum correlations with treatment and outcome in any of the observed confounders
4. The ratio of the weights should be such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
:param c_star_max: The maximum possible value for the hyperbolic curve on which the coefficients to the residuals lie. It defaults to 1000 in the code if not specified by the user.
:type c_star_max: int
:param convergence_threshold: The threshold to check the plateauing of the correlation while selecting a c_star. It defaults to 0.1 in the code if not specified by the user.
:type convergence_threshold: float
:returns: The simulated values of the unobserved confounder based on the data
:rtype: pandas.core.series.Series
"""
# Obtaining the list of observed variables
required_variables = True
observed_variables = self.choose_variables(required_variables)
observed_variables_with_treatment_and_outcome = observed_variables + self._treatment_name + self._outcome_name
# Taking a subset of the dataframe that has only observed variables
self._data = self._data[observed_variables_with_treatment_and_outcome]
# Residuals from the outcome model obtained by fitting a linear model
y = self._data[self._outcome_name[0]]
observed_variables_with_treatment = observed_variables + self._treatment_name
X = self._data[observed_variables_with_treatment]
model = sm.OLS(y, X.astype("float"))
results = model.fit()
residuals_y = y - results.fittedvalues
d_y = list(pd.Series(residuals_y))
# Residuals from the treatment model obtained by fitting a linear model
t = self._data[self._treatment_name[0]].astype("int64")
X = self._data[observed_variables]
model = sm.OLS(t, X)
results = model.fit()
residuals_t = t - results.fittedvalues
d_t = list(pd.Series(residuals_t))
# Initialising product_cor_metric_observed with a really low value as finding maximum
product_cor_metric_observed = -10000000000
for i in observed_variables:
current_obs_confounder = self._data[i]
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_obs_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_obs_confounder.corr(treatment_values)
product_cor_metric_current = correlation_y * correlation_t
if product_cor_metric_current >= product_cor_metric_observed:
product_cor_metric_observed = product_cor_metric_current
correlation_t_observed = correlation_t
correlation_y_observed = correlation_y
# The user has an option to give the effect_strength_on_y and effect_strength_on_t, which can then be used instead of the maximum correlation with treatment and outcome in the observed variables, as it specifies the desired effect.
if self.kappa_t is not None:
correlation_t_observed = self.kappa_t
if self.kappa_y is not None:
correlation_y_observed = self.kappa_y
# Choosing a c_star based on the data.
# The correlations stop increasing upon increasing c_star after a certain value, i.e., they plateau, and we choose c_star to be the value at which they plateau.
correlation_y_list = []
correlation_t_list = []
product_cor_metric_simulated_list = []
x_list = []
step = int(c_star_max / 10)
for i in range(0, int(c_star_max), step):
c1 = math.sqrt(i)
c2 = c1
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
correlation_y_list.append(correlation_y)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
correlation_t_list.append(correlation_t)
product_cor_metric_simulated = correlation_y * correlation_t
product_cor_metric_simulated_list.append(product_cor_metric_simulated)
x_list.append(i)
index = 1
while index < len(correlation_y_list):
if (correlation_y_list[index] - correlation_y_list[index - 1]) <= convergence_threshold:
c_star = x_list[index]
break
index = index + 1
# Choosing c1 and c2 based on the hyperbolic relationship once c_star is chosen by going over various combinations of c1 and c2 values and choosing the combination
# which maintains the minimum distance between the product of correlations of the simulated variable and the product of maximum correlations of one of the observed variables
# and additionally checks if the ratio of the weights are such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
# c1_final and c2_final are initialised to the values on the hyperbolic curve such that c1_final = c2_final and c1_final*c2_final = c_star
c1_final = math.sqrt(c_star)
c2_final = math.sqrt(c_star)
# initialising min_distance_between_product_cor_metrics to be a value greater than 1
min_distance_between_product_cor_metrics = 1.5
i = 0.05
threshold = c_star / 0.05
while i <= threshold:
c2 = i
c1 = c_star / c2
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
product_cor_metric_simulated = correlation_y * correlation_t
if min_distance_between_product_cor_metrics >= abs(
product_cor_metric_simulated - product_cor_metric_observed
):
min_distance_between_product_cor_metrics = abs(
product_cor_metric_simulated - product_cor_metric_observed
)
additional_condition = correlation_y_observed / correlation_t_observed
if ((c1 / c2) <= (additional_condition + 0.3 * additional_condition)) and (
(c1 / c2) >= (additional_condition - 0.3 * additional_condition)
): # choose minimum positive value
c1_final = c1
c2_final = c2
i = i * 1.5
"""#closed form solution
print("c_star_max before closed form", c_star_max)
if max_correlation_with_t == -1000:
c2 = 0
c1 = c_star_max
else:
additional_condition = abs(max_correlation_with_y/max_correlation_with_t)
print("additional_condition", additional_condition)
c2 = math.sqrt(c_star_max/additional_condition)
c1 = c_star_max/c2"""
final_U = self.generate_confounder_from_residuals(c1_final, c2_final, d_y, d_t, X)
return final_U
def generate_confounder_from_residuals(self, c1, c2, d_y, d_t, X):
"""
This function takes the residuals from the treatment and outcome models along with their coefficients, simulates the intermediate random variable U by drawing,
for each row, from a normal distribution centered at the weighted sum of the residuals, and then debiases the intermediate variable to obtain the final variable.
:param c1: coefficient to the residual from the outcome model
:type c1: float
:param c2: coefficient to the residual from the treatment model
:type c2: float
:param d_y: residuals from the outcome model
:type d_y: list
:param d_t: residuals from the treatment model
:type d_t: list
:returns: The simulated values of the unobserved confounder based on the data
:rtype: pandas.core.series.Series
"""
U = []
for j in range(len(d_t)):
simulated_variable_mean = c1 * d_y[j] + c2 * d_t[j]
simulated_variable_stddev = 1
U.append(np.random.normal(simulated_variable_mean, simulated_variable_stddev, 1))
U = np.array(U)
model = sm.OLS(U, X)
results = model.fit()
U = U.reshape(
-1,
)
final_U = U - results.fittedvalues.values
# Return the debiased series rather than the raw draws
final_U = pd.Series(final_U)
return final_U
| import copy
import logging
import math
import numpy as np
import pandas as pd
import scipy.stats
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from tqdm.auto import tqdm
import dowhy.causal_estimators.econml
from dowhy.causal_estimator import CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_refuter import CausalRefutation, CausalRefuter
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer
from dowhy.causal_refuters.non_parametric_sensitivity_analyzer import NonParametricSensitivityAnalyzer
from dowhy.causal_refuters.partial_linear_sensitivity_analyzer import PartialLinearSensitivityAnalyzer
class AddUnobservedCommonCause(CausalRefuter):
"""Add an unobserved confounder for refutation.
AddUnobservedCommonCause class supports three methods:
1) Simulation of an unobserved confounder
2) Linear partial R2 : Sensitivity Analysis for linear models.
3) Non-Parametric partial R2 based : Sensitivity Analysis for non-parametric models.
Supports additional parameters that can be specified in the refute_estimate() method.
"""
def __init__(self, *args, **kwargs):
"""
Initialize the parameters required for the refuter.
For direct-simulation, if effect_strength_on_treatment or effect_strength_on_outcome is not
given, it is calculated automatically as a range between the
minimum and maximum effect strength of observed confounders on treatment
and outcome respectively.
:param simulation_method: The method to use for simulating effect of unobserved confounder. Possible values are ["direct-simulation", "linear-partial-R2", "non-parametric-partial-R2"].
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param effect_strength_on_treatment: float, numpy.ndarray: [Used when simulation_method="direct-simulation"] Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param effect_strength_on_outcome: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param partial_r2_confounder_treatment: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param partial_r2_confounder_outcome: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
If percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome needed to reduce the estimate by 100%, i.e. bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = False). (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param significance_level: confidence interval for statistical inference(default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param num_splits: number of splits for cross validation. (default = 5). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_data : shuffle data or not before splitting into folds (default = False). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_random_seed: seed for randomly shuffling data. (relevant only for non-parametric-partial-R2 simulation method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
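Example (illustrative sketch only; assumes a fitted CausalModel ``model``, an identified estimand ``identified_estimand``, an ``estimate`` obtained with a linear regression estimator, and an observed common cause column named "W0"; all of these names and values are placeholders)::

    refute = model.refute_estimate(
        identified_estimand,
        estimate,
        method_name="add_unobserved_common_cause",
        simulation_method="linear-partial-R2",
        benchmark_common_causes=["W0"],
        effect_fraction_on_treatment=[1, 2, 3],
    )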
"""
super().__init__(*args, **kwargs)
self.simulation_method = kwargs["simulation_method"] if "simulation_method" in kwargs else "direct-simulation"
self.effect_on_t = (
kwargs["confounders_effect_on_treatment"] if "confounders_effect_on_treatment" in kwargs else "binary_flip"
)
self.effect_on_y = (
kwargs["confounders_effect_on_outcome"] if "confounders_effect_on_outcome" in kwargs else "linear"
)
if self.simulation_method == "direct-simulation":
self.kappa_t = kwargs["effect_strength_on_treatment"] if "effect_strength_on_treatment" in kwargs else None
self.kappa_y = kwargs["effect_strength_on_outcome"] if "effect_strength_on_outcome" in kwargs else None
elif self.simulation_method in ["linear-partial-R2", "non-parametric-partial-R2"]:
self.kappa_t = (
kwargs["partial_r2_confounder_treatment"] if "partial_r2_confounder_treatment" in kwargs else None
)
self.kappa_y = (
kwargs["partial_r2_confounder_outcome"] if "partial_r2_confounder_outcome" in kwargs else None
)
else:
raise ValueError(
"simulation method is not supported. Try direct-simulation, linear-partial-R2 or non-parametric-partial-R2"
)
self.frac_strength_treatment = (
kwargs["effect_fraction_on_treatment"] if "effect_fraction_on_treatment" in kwargs else 1
)
self.frac_strength_outcome = (
kwargs["effect_fraction_on_outcome"] if "effect_fraction_on_outcome" in kwargs else 1
)
self.plotmethod = kwargs["plotmethod"] if "plotmethod" in kwargs else "colormesh"
self.percent_change_estimate = kwargs["percent_change_estimate"] if "percent_change_estimate" in kwargs else 1.0
self.significance_level = kwargs["significance_level"] if "significance_level" in kwargs else 0.05
self.confounder_increases_estimate = (
kwargs["confounder_increases_estimate"] if "confounder_increases_estimate" in kwargs else False
)
self.benchmark_common_causes = (
kwargs["benchmark_common_causes"] if "benchmark_common_causes" in kwargs else None
)
self.null_hypothesis_effect = kwargs["null_hypothesis_effect"] if "null_hypothesis_effect" in kwargs else 0
self.plot_estimate = kwargs["plot_estimate"] if "plot_estimate" in kwargs else True
self.num_splits = kwargs["num_splits"] if "num_splits" in kwargs else 5
self.shuffle_data = kwargs["shuffle_data"] if "shuffle_data" in kwargs else False
self.shuffle_random_seed = kwargs["shuffle_random_seed"] if "shuffle_random_seed" in kwargs else None
self.alpha_s_estimator_param_list = (
kwargs["alpha_s_estimator_param_list"] if "alpha_s_estimator_param_list" in kwargs else None
)
self.alpha_s_estimator_list = kwargs["alpha_s_estimator_list"] if "alpha_s_estimator_list" in kwargs else None
self.g_s_estimator_list = kwargs["g_s_estimator_list"] if "g_s_estimator_list" in kwargs else None
self.g_s_estimator_param_list = (
kwargs["g_s_estimator_param_list"] if "g_s_estimator_param_list" in kwargs else None
)
self.plugin_reisz = kwargs["plugin_reisz"] if "plugin_reisz" in kwargs else False
self.logger = logging.getLogger(__name__)
def infer_default_kappa_t(self, len_kappa_t=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_t."
+ " Provide a value for kappa_t"
)
t = self._data[self._treatment_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_t == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
tmodel = LogisticRegression().fit(observed_common_causes, t)
tpred = tmodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
tcap = tmodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(tcap - tpred)) / tpred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_t == "linear":
# Estimating the regression coefficient from standardized features to t
corrcoef_var_t = np.corrcoef(observed_common_causes, t, rowvar=False)[-1, :-1]
std_dev_t = np.std(t)[0]
max_coeff = max(corrcoef_var_t) * std_dev_t
min_coeff = min(corrcoef_var_t) * std_dev_t
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_treatment)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_t
self.logger.info("(Min, Max) kappa_t for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _compute_min_max_coeff(self, min_coeff, max_coeff, effect_strength_fraction):
max_coeff = effect_strength_fraction * max_coeff
min_coeff = effect_strength_fraction * min_coeff
return min_coeff, max_coeff
def infer_default_kappa_y(self, len_kappa_y=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_y."
+ " Provide a value for kappa_y"
)
y = self._data[self._outcome_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_y == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
ymodel = LogisticRegression().fit(observed_common_causes, y)
ypred = ymodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
ycap = ymodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(ycap - ypred)) / ypred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_y == "linear":
corrcoef_var_y = np.corrcoef(observed_common_causes, y, rowvar=False)[-1, :-1]
std_dev_y = np.std(y)[0]
max_coeff = max(corrcoef_var_y) * std_dev_y
min_coeff = min(corrcoef_var_y) * std_dev_y
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_outcome)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_y
self.logger.info("(Min, Max) kappa_y for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def refute_estimate(self, show_progress_bar=False):
"""
This function attempts to add an unobserved common cause to the outcome and the treatment. At present, the behavior is implemented for one-dimensional continuous
and binary variables. This function can take either single-valued inputs or a range of inputs; it then looks at the data type of the input and decides on the course of action.
:return: CausalRefuter: An object that contains the estimated effect and a new effect and the name of the refutation used.
"""
if self.simulation_method == "linear-partial-R2":
if not (isinstance(self._estimate.estimator, LinearRegressionEstimator)):
raise NotImplementedError(
"Currently only LinearRegressionEstimator is supported for Sensitivity Analysis"
)
if len(self._estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
if self.frac_strength_outcome == 1:
self.frac_strength_outcome = self.frac_strength_treatment
analyzer = LinearSensitivityAnalyzer(
estimator=self._estimate.estimator,
data=self._data,
treatment_name=self._treatment_name,
percent_change_estimate=self.percent_change_estimate,
significance_level=self.significance_level,
benchmark_common_causes=self.benchmark_common_causes,
null_hypothesis_effect=self.null_hypothesis_effect,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
common_causes_order=self._estimate.estimator._observed_common_causes.columns,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.simulation_method == "non-parametric-partial-R2":
# If the estimator used is LinearDML, partially linear sensitivity analysis will be automatically chosen
if isinstance(self._estimate.estimator, dowhy.causal_estimators.econml.Econml):
if self._estimate.estimator._econml_methodname == "econml.dml.LinearDML":
analyzer = PartialLinearSensitivityAnalyzer(
estimator=self._estimate._estimator_object,
observed_common_causes=self._estimate.estimator._observed_common_causes,
treatment=self._estimate.estimator._treatment,
outcome=self._estimate.estimator._outcome,
alpha_s_estimator_param_list=self.alpha_s_estimator_param_list,
g_s_estimator_list=self.g_s_estimator_list,
g_s_estimator_param_list=self.g_s_estimator_param_list,
effect_strength_treatment=self.kappa_t,
effect_strength_outcome=self.kappa_y,
benchmark_common_causes=self.benchmark_common_causes,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
analyzer = NonParametricSensitivityAnalyzer(
estimator=self._estimate.estimator,
observed_common_causes=self._estimate.estimator._observed_common_causes,
treatment=self._estimate.estimator._treatment,
outcome=self._estimate.estimator._outcome,
alpha_s_estimator_list=self.alpha_s_estimator_list,
alpha_s_estimator_param_list=self.alpha_s_estimator_param_list,
g_s_estimator_list=self.g_s_estimator_list,
g_s_estimator_param_list=self.g_s_estimator_param_list,
effect_strength_treatment=self.kappa_t,
effect_strength_outcome=self.kappa_y,
benchmark_common_causes=self.benchmark_common_causes,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
theta_s=self._estimate.value,
plugin_reisz=self.plugin_reisz,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.kappa_t is None:
self.kappa_t = self.infer_default_kappa_t()
if self.kappa_y is None:
self.kappa_y = self.infer_default_kappa_y()
if not isinstance(self.kappa_t, (list, np.ndarray)) and not isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with single value inputs
new_data = copy.deepcopy(self._data)
new_data = self.include_confounders_effect(new_data, self.kappa_t, self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(new_data, self._target_estimand, self._estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
refute.new_effect_array = np.array(new_effect.value)
refute.new_effect = new_effect.value
refute.add_refuter(self)
return refute
else: # Deal with multiple value inputs
if isinstance(self.kappa_t, (list, np.ndarray)) and isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with range inputs
# Get a 2D matrix of values
# x,y = np.meshgrid(self.kappa_t, self.kappa_y) # x,y are both MxN
results_matrix = np.random.rand(
len(self.kappa_t), len(self.kappa_y)
) # Matrix to hold all the results of NxM
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
for j in range(len(self.kappa_y)):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y[j])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value,
new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause",
)
results_matrix[i][j] = refute.new_effect # Populate the results
refute.new_effect_array = results_matrix
refute.new_effect = (np.min(results_matrix), np.max(results_matrix))
# Store the values into the refute object
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
oe = self._estimate.value
contour_levels = [oe / 4.0, oe / 2.0, (3.0 / 4) * oe, oe]
contour_levels.extend([0, np.min(results_matrix), np.max(results_matrix)])
if self.plotmethod == "contour":
cp = plt.contourf(self.kappa_y, self.kappa_t, results_matrix, levels=sorted(contour_levels))
# Adding a label on the contour line for the original estimate
fmt = {}
trueeffect_index = np.where(cp.levels == oe)[0][0]
fmt[cp.levels[trueeffect_index]] = "Estimated Effect"
# Label every other level using strings
plt.clabel(cp, [cp.levels[trueeffect_index]], inline=True, fmt=fmt)
plt.colorbar(cp)
elif self.plotmethod == "colormesh":
cp = plt.pcolormesh(self.kappa_y, self.kappa_t, results_matrix, shading="nearest")
plt.colorbar(cp, ticks=contour_levels)
ax.yaxis.set_ticks(self.kappa_t)
ax.xaxis.set_ticks(self.kappa_y)
plt.xticks(rotation=45)
ax.set_title("Effect of Unobserved Common Cause")
ax.set_ylabel("Value of Linear Constant on Treatment")
ax.set_xlabel("Value of Linear Constant on Outcome")
plt.show()
return refute
elif isinstance(self.kappa_t, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_t))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_t, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Treatment")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
elif isinstance(self.kappa_y, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_y))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_y)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t, self.kappa_y[i])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_y, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Outcome")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
def include_confounders_effect(self, new_data, kappa_t, kappa_y):
"""
This function deals with the change in the value of the data due to the effect of the unobserved confounder.
In the case of a binary flip, we flip only if the random number is greater than the threshold set.
In the case of a linear effect, we use the variable as the linear regression constant.
:param new_data: pandas.DataFrame: The data to be changed due to the effects of the unobserved confounder.
:param kappa_t: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:param kappa_y: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:return: pandas.DataFrame: The DataFrame that includes the effects of the unobserved confounder.
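Illustration (approximate worked example): with confounders_effect_on_treatment='binary_flip' and kappa_t = 0.2, alpha = 1 - 2*0.2 = 0.6, the central 60% interval of the standard normal is roughly (-0.84, 0.84), and only the roughly 20% of rows whose w_random draw is at least 0.84 have their treatment value inverted, so kappa_t behaves like the expected fraction of flipped rows.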
"""
num_rows = self._data.shape[0]
stdnorm = scipy.stats.norm()
w_random = stdnorm.rvs(num_rows)
if self.effect_on_t == "binary_flip":
alpha = 2 * kappa_t - 1 if kappa_t >= 0.5 else 1 - 2 * kappa_t
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_t >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._treatment_name] = (
1 - new_data.loc[rel_interval <= w_random, self._treatment_name]
)
for tname in self._treatment_name:
if pd.api.types.is_bool_dtype(self._data[tname]):
new_data = new_data.astype({tname: "bool"}, copy=False)
elif self.effect_on_t == "linear":
confounder_t_effect = kappa_t * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._treatment_name] = new_data[self._treatment_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_t_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
if self.effect_on_y == "binary_flip":
alpha = 2 * kappa_y - 1 if kappa_y >= 0.5 else 1 - 2 * kappa_y
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_y >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._outcome_name] = (
1 - new_data.loc[rel_interval <= w_random, self._outcome_name]
)
for yname in self._outcome_name:
if pd.api.types.is_bool_dtype(self._data[yname]):
new_data = new_data.astype({yname: "bool"}, copy=False)
elif self.effect_on_y == "linear":
confounder_y_effect = (-1) * kappa_y * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._outcome_name] = new_data[self._outcome_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_y_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
return new_data
def include_simulated_confounder(self, convergence_threshold=0.1, c_star_max=1000):
"""
This function simulates an unobserved confounder based on the data using the following steps:
1. It calculates the "residuals" from the treatment and outcome model
i.) The outcome model has outcome as the dependent variable and all the observed variables including treatment as independent variables
ii.) The treatment model has treatment as the dependent variable and all the observed variables as independent variables.
2. U is an intermediate random variable drawn from the normal distribution with the weighted average of residuals as mean and a unit variance
U ~ N(c1*d_y + c2*d_t, 1)
where
*d_y and d_t are residuals from the treatment and outcome model
*c1 and c2 are coefficients to the residuals
3. The final U, which is the simulated unobserved confounder is obtained by debiasing the intermediate variable U by residualising it with X
Choosing the coefficients c1 and c2:
The coefficients are chosen based on these basic assumptions:
1. There is a hyperbolic relationship satisfying c1*c2 = c_star
2. c_star is chosen from a range of possible values based on the correlation of the obtained simulated variable with outcome and treatment.
3. The product of correlations with treatment and outcome should be at a minimum distance to the maximum correlations with treatment and outcome in any of the observed confounders
4. The ratio of the weights should be such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
:param c_star_max: The maximum possible value for the hyperbolic curve on which the coefficients to the residuals lie. It defaults to 1000 in the code if not specified by the user.
:type c_star_max: int
:param convergence_threshold: The threshold to check the plateauing of the correlation while selecting a c_star. It defaults to 0.1 in the code if not specified by the user.
:type convergence_threshold: float
:returns: The simulated values of the unobserved confounder based on the data
:rtype: pandas.core.series.Series
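A vectorized sketch of steps 2 and 3 above (illustrative only; c1, c2, d_y, d_t and X are as constructed in the method body, with numpy as np, pandas as pd and statsmodels.api as sm)::

    U = np.random.normal(c1 * np.array(d_y) + c2 * np.array(d_t), 1)
    final_U = pd.Series(U - sm.OLS(U, X).fit().fittedvalues)  # debias against observed X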
"""
# Obtaining the list of observed variables
required_variables = True
observed_variables = self.choose_variables(required_variables)
observed_variables_with_treatment_and_outcome = observed_variables + self._treatment_name + self._outcome_name
# Taking a subset of the dataframe that has only observed variables
self._data = self._data[observed_variables_with_treatment_and_outcome]
# Residuals from the outcome model obtained by fitting a linear model
y = self._data[self._outcome_name[0]]
observed_variables_with_treatment = observed_variables + self._treatment_name
X = self._data[observed_variables_with_treatment]
model = sm.OLS(y, X.astype("float"))
results = model.fit()
residuals_y = y - results.fittedvalues
d_y = list(pd.Series(residuals_y))
# Residuals from the treatment model obtained by fitting a linear model
t = self._data[self._treatment_name[0]].astype("int64")
X = self._data[observed_variables]
model = sm.OLS(t, X)
results = model.fit()
residuals_t = t - results.fittedvalues
d_t = list(pd.Series(residuals_t))
# Initialising product_cor_metric_observed with a very low value, since we are searching for the maximum
product_cor_metric_observed = -10000000000
for i in observed_variables:
current_obs_confounder = self._data[i]
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_obs_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_obs_confounder.corr(treatment_values)
product_cor_metric_current = correlation_y * correlation_t
if product_cor_metric_current >= product_cor_metric_observed:
product_cor_metric_observed = product_cor_metric_current
correlation_t_observed = correlation_t
correlation_y_observed = correlation_y
# The user has an option to give effect_strength_on_y and effect_strength_on_t, which can then be used instead of the maximum correlation with treatment and outcome in the observed variables, as they specify the desired effect.
if self.kappa_t is not None:
correlation_t_observed = self.kappa_t
if self.kappa_y is not None:
correlation_y_observed = self.kappa_y
# Choosing a c_star based on the data.
# The correlations stop increasing after a certain value of c_star, i.e. they plateau, and we choose c_star to be the value at which the plateau begins.
correlation_y_list = []
correlation_t_list = []
product_cor_metric_simulated_list = []
x_list = []
step = int(c_star_max / 10)
for i in range(0, int(c_star_max), step):
c1 = math.sqrt(i)
c2 = c1
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
correlation_y_list.append(correlation_y)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
correlation_t_list.append(correlation_t)
product_cor_metric_simulated = correlation_y * correlation_t
product_cor_metric_simulated_list.append(product_cor_metric_simulated)
x_list.append(i)
index = 1
while index < len(correlation_y_list):
if (correlation_y_list[index] - correlation_y_list[index - 1]) <= convergence_threshold:
c_star = x_list[index]
break
index = index + 1
# Choosing c1 and c2 based on the hyperbolic relationship once c_star is chosen by going over various combinations of c1 and c2 values and choosing the combination
# which maintains the minimum distance between the product of correlations of the simulated variable and the product of maximum correlations of one of the observed variables
# and additionally checks if the ratio of the weights are such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
# c1_final and c2_final are initialised to the values on the hyperbolic curve such that c1_final = c2_final and c1_final*c2_final = c_star
c1_final = math.sqrt(c_star)
c2_final = math.sqrt(c_star)
# initialising min_distance_between_product_cor_metrics to be a value greater than 1
min_distance_between_product_cor_metrics = 1.5
i = 0.05
threshold = c_star / 0.05
while i <= threshold:
c2 = i
c1 = c_star / c2
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
product_cor_metric_simulated = correlation_y * correlation_t
if min_distance_between_product_cor_metrics >= abs(
product_cor_metric_simulated - product_cor_metric_observed
):
min_distance_between_product_cor_metrics = abs(
product_cor_metric_simulated - product_cor_metric_observed
)
additional_condition = correlation_y_observed / correlation_t_observed
if ((c1 / c2) <= (additional_condition + 0.3 * additional_condition)) and (
(c1 / c2) >= (additional_condition - 0.3 * additional_condition)
): # choose minimum positive value
c1_final = c1
c2_final = c2
i = i * 1.5
"""#closed form solution
print("c_star_max before closed form", c_star_max)
if max_correlation_with_t == -1000:
c2 = 0
c1 = c_star_max
else:
additional_condition = abs(max_correlation_with_y/max_correlation_with_t)
print("additional_condition", additional_condition)
c2 = math.sqrt(c_star_max/additional_condition)
c1 = c_star_max/c2"""
final_U = self.generate_confounder_from_residuals(c1_final, c2_final, d_y, d_t, X)
return final_U
def generate_confounder_from_residuals(self, c1, c2, d_y, d_t, X):
"""
This function takes the residuals from the treatment and outcome models along with their coefficients, simulates the intermediate random variable U by drawing,
for each row, from a normal distribution centered at the weighted sum of the residuals, and then debiases the intermediate variable to obtain the final variable.
:param c1: coefficient to the residual from the outcome model
:type c1: float
:param c2: coefficient to the residual from the treatment model
:type c2: float
:param d_y: residuals from the outcome model
:type d_y: list
:param d_t: residuals from the treatment model
:type d_t: list
:returns: The simulated values of the unobserved confounder based on the data
:rtype: pandas.core.series.Series
"""
U = []
for j in range(len(d_t)):
simulated_variable_mean = c1 * d_y[j] + c2 * d_t[j]
simulated_variable_stddev = 1
U.append(np.random.normal(simulated_variable_mean, simulated_variable_stddev, 1))
U = np.array(U)
model = sm.OLS(U, X)
results = model.fit()
U = U.reshape(
-1,
)
final_U = U - results.fittedvalues.values
# Return the debiased series rather than the raw draws
final_U = pd.Series(final_U)
return final_U
| anusha0409 | 81841c697bd5e80ecf9e731432305f6186666f1f | bb446c333f2256074304b0dec9cb5628d284b542 | We don't have the architecture for it yet --- but an if condition such as this one --- that is checking to see what estimator was used --- makes me think that we should be prescribing the sensitivity analysis method at the same place in the code where we set the estimator method. Perhaps the estimator object registers what sensitivity analysis method should be used, or perhaps both are registered together. Spreading this logic across different parts of the code is going to cause trouble at some point.
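For example, something along these lines (purely hypothetical sketch, names invented, not a request for this PR):

```python
from dowhy.causal_estimator import CausalEstimator
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer

# Each estimator advertises the sensitivity analyzer that applies to it,
# so the refuter dispatches on an attribute instead of isinstance checks.
class MyLinearEstimator(CausalEstimator):
    sensitivity_analyzer_cls = LinearSensitivityAnalyzer

def pick_analyzer(estimator):
    # Returns None when an estimator has not registered an analyzer
    return getattr(type(estimator), "sensitivity_analyzer_cls", None)
```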
This is just me thinking out loud. I don't expect a change in this PR. | emrekiciman | 378 |
py-why/dowhy | 478 | Adding Non Linear Sensitivity Analysis | This PR implements the non-parametric sensitivity analysis from Chernozhukov et al. https://arxiv.org/abs/2112.13398
It implements two sensitivity analyzers:
1. For Partial Linear DGPs and estimators like LinearDML
2. For general non-parametric DGPs and estimators like KernelDML.
The notebook in this PR provides an introduction on how the sensitivity bounds are calculated for the partial linear case. For the general nonparametric DGPs, we need to estimate a special function called the Reisz representer. For binary treatment, it is exactly the difference in outcome weighted by propensity score. So we provide two options to learn the Reisz representer, 1) plugin_reisz that uses the propensity score; and 2) general estimator that uses a custom loss function. These two are in the file reisz.py.
Briefly, the sensitivity bounds depend on two parameters that denote the effect of the unobserved confounder on treatment and outcome. That's why we use the same API as for the `add_unobserved_common_cause` method and add this sensitivity analysis as a possible simulation method="non-parametric-partial-R2". The format of the plots is identical to those from the "linear-partial-r2" simulation method that is already implemented.
We provide two modes for the user, as shown in the usage sketch below.
1) User specifies the effect strength parameters themselves, as a range of values.
2) User benchmarks the effect strength parameters as a multiple of the same parameters for the observed common causes.
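A rough usage sketch (illustrative; `model`, `identified_estimand`, `estimate` and the column name `"W0"` are placeholders, and `np` is numpy):

```python
# Mode 1: specify the partial R2 of the unobserved confounder directly
refute = model.refute_estimate(
    identified_estimand, estimate,
    method_name="add_unobserved_common_cause",
    simulation_method="non-parametric-partial-R2",
    partial_r2_confounder_treatment=np.arange(0.0, 0.8, 0.1),
    partial_r2_confounder_outcome=np.arange(0.0, 0.8, 0.1),
)
# Mode 2: benchmark against an observed common cause
refute = model.refute_estimate(
    identified_estimand, estimate,
    method_name="add_unobserved_common_cause",
    simulation_method="non-parametric-partial-R2",
    benchmark_common_causes=["W0"],
    effect_fraction_on_treatment=[1, 2, 3],
)
```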
Signed-off-by: anusha <anushaagarwal2000.com> | null | 2022-06-20 14:37:11+00:00 | 2022-09-16 03:57:26+00:00 | dowhy/causal_refuters/add_unobserved_common_cause.py | import copy
import logging
import math
import numpy as np
import pandas as pd
import scipy.stats
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from tqdm.auto import tqdm
from dowhy.causal_estimator import CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_refuter import CausalRefutation, CausalRefuter
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer
class AddUnobservedCommonCause(CausalRefuter):
"""Add an unobserved confounder for refutation.
Supports additional parameters that can be specified in the refute_estimate() method.
- 'confounders_effect_on_treatment': how the simulated confounder affects the value of treatment. This can be linear (for continuous treatment) or binary_flip (for binary treatment)
- 'confounders_effect_on_outcome': how the simulated confounder affects the value of outcome. This can be linear (for continuous outcome) or binary_flip (for binary outcome)
- 'effect_strength_on_treatment': parameter for the strength of the effect of simulated confounder on treatment. For linear effect, it is the regression coefficient. For binary_flip, it is the probability that simulated confounder's effect flips the value of treatment from 0 to 1 (or vice-versa).
- 'effect_strength_on_outcome': parameter for the strength of the effect of simulated confounder on outcome. For linear effect, it is the regression coefficient. For binary_flip, it is the probability that simulated confounder's effect flips the value of outcome from 0 to 1 (or vice-versa).
TODO: Needs an interpretation module
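Example (illustrative sketch only; assumes a fitted CausalModel ``model``, an identified estimand ``identified_estimand`` and an ``estimate``; the effect strengths below are placeholder values and ``np`` is numpy)::

    refute = model.refute_estimate(
        identified_estimand,
        estimate,
        method_name="add_unobserved_common_cause",
        confounders_effect_on_treatment="binary_flip",
        confounders_effect_on_outcome="linear",
        effect_strength_on_treatment=0.01,
        effect_strength_on_outcome=np.arange(0.01, 0.05, 0.01),
    )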
"""
def __init__(self, *args, **kwargs):
"""
Initialize the parameters required for the refuter.
If effect_strength_on_treatment or effect_strength_on_outcome is not
given, it is calculated automatically as a range between the
minimum and maximum effect strength of observed confounders on treatment
and outcome respectively.
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param effect_strength_on_treatment: float, numpy.ndarray: This refers to the strength of the confounder on treatment. For a linear effect, it behaves like the regression coefficient. For a binary flip, it is the probability with which it can invert the value of the treatment.
:param effect_strength_on_outcome: float, numpy.ndarray: This refers to the strength of the confounder on outcome. For a linear effect, it behaves like the regression coefficient. For a binary flip, it is the probability with which it can invert the value of the outcome.
:param effect_fraction_on_treatment: float: If effect_strength_on_treatment is not provided, this parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param effect_fraction_on_outcome: float: If effect_strength_on_outcome is not provided, this parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:param simulated_method_name: method type to add unobserved common cause. "linear-partial-R2" for linear sensitivity analysis
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1)
If percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome needed to reduce the estimate by 100%, i.e. bring it down to 0.
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = False)
:param benchmark_common_causes: names of variables for bounding strength of confounders
:param significance_level: confidence interval for statistical inference(default = 0.05)
:param null_hypothesis_effect: assumed effect under the null hypothesis
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
To override the setting, set plot_estimate = False.
"""
super().__init__(*args, **kwargs)
self.effect_on_t = (
kwargs["confounders_effect_on_treatment"] if "confounders_effect_on_treatment" in kwargs else "binary_flip"
)
self.effect_on_y = (
kwargs["confounders_effect_on_outcome"] if "confounders_effect_on_outcome" in kwargs else "linear"
)
self.kappa_t = kwargs["effect_strength_on_treatment"] if "effect_strength_on_treatment" in kwargs else None
self.kappa_y = kwargs["effect_strength_on_outcome"] if "effect_strength_on_outcome" in kwargs else None
self.frac_strength_treatment = (
kwargs["effect_fraction_on_treatment"] if "effect_fraction_on_treatment" in kwargs else 1
)
self.frac_strength_outcome = (
kwargs["effect_fraction_on_outcome"] if "effect_fraction_on_outcome" in kwargs else 1
)
self.simulated_method_name = (
kwargs["simulated_method_name"] if "simulated_method_name" in kwargs else "linear_based"
)
self.plotmethod = kwargs["plotmethod"] if "plotmethod" in kwargs else "colormesh"
self.percent_change_estimate = kwargs["percent_change_estimate"] if "percent_change_estimate" in kwargs else 1.0
self.significance_level = kwargs["significance_level"] if "significance_level" in kwargs else 0.05
self.confounder_increases_estimate = (
kwargs["confounder_increases_estimate"] if "confounder_increases_estimate" in kwargs else False
)
self.benchmark_common_causes = (
kwargs["benchmark_common_causes"] if "benchmark_common_causes" in kwargs else None
)
self.null_hypothesis_effect = kwargs["null_hypothesis_effect"] if "null_hypothesis_effect" in kwargs else 0
self.plot_estimate = kwargs["plot_estimate"] if "plot_estimate" in kwargs else True
self.logger = logging.getLogger(__name__)
def infer_default_kappa_t(self, len_kappa_t=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_t."
+ " Provide a value for kappa_t"
)
t = self._data[self._treatment_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_t == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
tmodel = LogisticRegression().fit(observed_common_causes, t)
tpred = tmodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
tcap = tmodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(tcap - tpred)) / tpred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_t == "linear":
# Estimating the regression coefficient from standardized features to t
corrcoef_var_t = np.corrcoef(observed_common_causes, t, rowvar=False)[-1, :-1]
std_dev_t = np.std(t)[0]
max_coeff = max(corrcoef_var_t) * std_dev_t
min_coeff = min(corrcoef_var_t) * std_dev_t
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_treatment)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_t
self.logger.info("(Min, Max) kappa_t for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _compute_min_max_coeff(self, min_coeff, max_coeff, effect_strength_fraction):
max_coeff = effect_strength_fraction * max_coeff
min_coeff = effect_strength_fraction * min_coeff
return min_coeff, max_coeff
def infer_default_kappa_y(self, len_kappa_y=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_y."
+ " Provide a value for kappa_y"
)
y = self._data[self._outcome_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_y == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
ymodel = LogisticRegression().fit(observed_common_causes, y)
ypred = ymodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
ycap = ymodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(ycap - ypred)) / ypred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_y == "linear":
corrcoef_var_y = np.corrcoef(observed_common_causes, y, rowvar=False)[-1, :-1]
std_dev_y = np.std(y)[0]
max_coeff = max(corrcoef_var_y) * std_dev_y
min_coeff = min(corrcoef_var_y) * std_dev_y
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_outcome)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_y
self.logger.info("(Min, Max) kappa_y for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def refute_estimate(self, show_progress_bar=False):
"""
This function attempts to add an unobserved common cause to the outcome and the treatment. At present, the behavior is implemented for one-dimensional continuous
and binary variables. This function can take either single-valued inputs or a range of inputs; it then looks at the data type of the input and decides on the course of action.
:return: CausalRefuter: An object that contains the estimated effect and a new effect and the name of the refutation used.
"""
if self.simulated_method_name == "linear-partial-R2":
if not (isinstance(self._estimate.estimator, LinearRegressionEstimator)):
raise NotImplementedError(
"Currently only LinearRegressionEstimator is supported for Sensitivity Analysis"
)
if len(self._estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
if self.frac_strength_outcome == 1:
self.frac_strength_outcome = self.frac_strength_treatment
analyzer = LinearSensitivityAnalyzer(
estimator=self._estimate.estimator,
data=self._data,
treatment_name=self._treatment_name,
percent_change_estimate=self.percent_change_estimate,
significance_level=self.significance_level,
benchmark_common_causes=self.benchmark_common_causes,
null_hypothesis_effect=self.null_hypothesis_effect,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
common_causes_order=self._estimate.estimator._observed_common_causes.columns,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.kappa_t is None:
self.kappa_t = self.infer_default_kappa_t()
if self.kappa_y is None:
self.kappa_y = self.infer_default_kappa_y()
if not isinstance(self.kappa_t, (list, np.ndarray)) and not isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with single value inputs
new_data = copy.deepcopy(self._data)
new_data = self.include_confounders_effect(new_data, self.kappa_t, self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(new_data, self._target_estimand, self._estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
refute.new_effect_array = np.array(new_effect.value)
refute.new_effect = new_effect.value
refute.add_refuter(self)
return refute
else: # Deal with multiple value inputs
if isinstance(self.kappa_t, (list, np.ndarray)) and isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with range inputs
# Get a 2D matrix of values
# x,y = np.meshgrid(self.kappa_t, self.kappa_y) # x,y are both MxN
results_matrix = np.random.rand(
len(self.kappa_t), len(self.kappa_y)
) # Matrix to hold all the results of NxM
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
for j in range(len(self.kappa_y)):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y[j])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value,
new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause",
)
results_matrix[i][j] = refute.new_effect # Populate the results
refute.new_effect_array = results_matrix
refute.new_effect = (np.min(results_matrix), np.max(results_matrix))
# Store the values into the refute object
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
oe = self._estimate.value
contour_levels = [oe / 4.0, oe / 2.0, (3.0 / 4) * oe, oe]
contour_levels.extend([0, np.min(results_matrix), np.max(results_matrix)])
if self.plotmethod == "contour":
cp = plt.contourf(self.kappa_y, self.kappa_t, results_matrix, levels=sorted(contour_levels))
# Adding a label on the contour line for the original estimate
fmt = {}
trueeffect_index = np.where(cp.levels == oe)[0][0]
fmt[cp.levels[trueeffect_index]] = "Estimated Effect"
# Label every other level using strings
plt.clabel(cp, [cp.levels[trueeffect_index]], inline=True, fmt=fmt)
plt.colorbar(cp)
elif self.plotmethod == "colormesh":
cp = plt.pcolormesh(self.kappa_y, self.kappa_t, results_matrix, shading="nearest")
plt.colorbar(cp, ticks=contour_levels)
ax.yaxis.set_ticks(self.kappa_t)
ax.xaxis.set_ticks(self.kappa_y)
plt.xticks(rotation=45)
ax.set_title("Effect of Unobserved Common Cause")
ax.set_ylabel("Value of Linear Constant on Treatment")
ax.set_xlabel("Value of Linear Constant on Outcome")
plt.show()
return refute
elif isinstance(self.kappa_t, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_t))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_t, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Treatment")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
elif isinstance(self.kappa_y, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_y))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_y)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t, self.kappa_y[i])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_y, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Outcome")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
def include_confounders_effect(self, new_data, kappa_t, kappa_y):
"""
This function deals with the change in the value of the data due to the effect of the unobserved confounder.
In the case of a binary flip, we flip only if the random number is greater than the threshold set.
In the case of a linear effect, we use the variable as the linear regression constant.
:param new_data: pandas.DataFrame: The data to be changed due to the effects of the unobserved confounder.
:param kappa_t: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:param kappa_y: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:return: pandas.DataFrame: The DataFrame that includes the effects of the unobserved confounder.
"""
num_rows = self._data.shape[0]
stdnorm = scipy.stats.norm()
w_random = stdnorm.rvs(num_rows)
if self.effect_on_t == "binary_flip":
alpha = 2 * kappa_t - 1 if kappa_t >= 0.5 else 1 - 2 * kappa_t
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_t >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._treatment_name] = (
1 - new_data.loc[rel_interval <= w_random, self._treatment_name]
)
for tname in self._treatment_name:
if pd.api.types.is_bool_dtype(self._data[tname]):
new_data = new_data.astype({tname: "bool"}, copy=False)
elif self.effect_on_t == "linear":
confounder_t_effect = kappa_t * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._treatment_name] = new_data[self._treatment_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_t_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
if self.effect_on_y == "binary_flip":
alpha = 2 * kappa_y - 1 if kappa_y >= 0.5 else 1 - 2 * kappa_y
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_y >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._outcome_name] = (
1 - new_data.loc[rel_interval <= w_random, self._outcome_name]
)
for yname in self._outcome_name:
if pd.api.types.is_bool_dtype(self._data[yname]):
new_data = new_data.astype({yname: "bool"}, copy=False)
elif self.effect_on_y == "linear":
confounder_y_effect = (-1) * kappa_y * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._outcome_name] = new_data[self._outcome_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_y_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
return new_data
def include_simulated_confounder(self, convergence_threshold=0.1, c_star_max=1000):
"""
This function simulates an unobserved confounder based on the data using the following steps:
1. It calculates the "residuals" from the treatment and outcome model
i.) The outcome model has outcome as the dependent variable and all the observed variables including treatment as independent variables
ii.) The treatment model has treatment as the dependent variable and all the observed variables as independent variables.
2. U is an intermediate random variable drawn from the normal distribution with the weighted average of residuals as mean and a unit variance
U ~ N(c1*d_y + c2*d_t, 1)
where
*d_y and d_t are residuals from the treatment and outcome model
*c1 and c2 are coefficients to the residuals
3. The final U, which is the simulated unobserved confounder is obtained by debiasing the intermediate variable U by residualising it with X
Choosing the coefficients c1 and c2:
The coefficients are chosen based on these basic assumptions:
1. There is a hyperbolic relationship satisfying c1*c2 = c_star
2. c_star is chosen from a range of possible values based on the correlation of the obtained simulated variable with outcome and treatment.
3. The product of correlations with treatment and outcome should be at a minimum distance to the maximum correlations with treatment and outcome in any of the observed confounders
4. The ratio of the weights should be such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
:param c_star_max: The maximum possible value for the hyperbolic curve on which the coefficients to the residuals lie. It defaults to 1000 in the code if not specified by the user.
:type int
:param convergence_threshold: The threshold to check the plateauing of the correlation while selecting a c_star. It defaults to 0.1 in the code if not specified by the user
:type float
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
# Obtaining the list of observed variables
required_variables = True
observed_variables = self.choose_variables(required_variables)
observed_variables_with_treatment_and_outcome = observed_variables + self._treatment_name + self._outcome_name
# Taking a subset of the dataframe that has only observed variables
self._data = self._data[observed_variables_with_treatment_and_outcome]
# Residuals from the outcome model obtained by fitting a linear model
y = self._data[self._outcome_name[0]]
observed_variables_with_treatment = observed_variables + self._treatment_name
X = self._data[observed_variables_with_treatment]
model = sm.OLS(y, X.astype("float"))
results = model.fit()
residuals_y = y - results.fittedvalues
d_y = list(pd.Series(residuals_y))
# Residuals from the treatment model obtained by fitting a linear model
t = self._data[self._treatment_name[0]].astype("int64")
X = self._data[observed_variables]
model = sm.OLS(t, X)
results = model.fit()
residuals_t = t - results.fittedvalues
d_t = list(pd.Series(residuals_t))
# Initialising product_cor_metric_observed with a really low value as finding maximum
product_cor_metric_observed = -10000000000
for i in observed_variables:
current_obs_confounder = self._data[i]
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_obs_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_obs_confounder.corr(treatment_values)
product_cor_metric_current = correlation_y * correlation_t
if product_cor_metric_current >= product_cor_metric_observed:
product_cor_metric_observed = product_cor_metric_current
correlation_t_observed = correlation_t
correlation_y_observed = correlation_y
# The user has an option to give the effect_strength_on_y and effect_strength_on_t, which can then be used instead of the maximum correlation with treatment and outcome in the observed variables, as it specifies the desired effect.
if self.kappa_t is not None:
correlation_t_observed = self.kappa_t
if self.kappa_y is not None:
correlation_y_observed = self.kappa_y
# Choosing a c_star based on the data.
# The correlations stop increasing upon increasing c_star after a certain value, that is, the correlation plateaus, and we choose c_star to be the value at which it plateaus.
correlation_y_list = []
correlation_t_list = []
product_cor_metric_simulated_list = []
x_list = []
step = int(c_star_max / 10)
for i in range(0, int(c_star_max), step):
c1 = math.sqrt(i)
c2 = c1
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
correlation_y_list.append(correlation_y)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
correlation_t_list.append(correlation_t)
product_cor_metric_simulated = correlation_y * correlation_t
product_cor_metric_simulated_list.append(product_cor_metric_simulated)
x_list.append(i)
index = 1
while index < len(correlation_y_list):
if (correlation_y_list[index] - correlation_y_list[index - 1]) <= convergence_threshold:
c_star = x_list[index]
break
index = index + 1
# Choosing c1 and c2 based on the hyperbolic relationship once c_star is chosen, by going over various combinations of c1 and c2 values and choosing the combination
# which maintains the minimum distance between the product of correlations of the simulated variable and the product of maximum correlations of one of the observed variables,
# and additionally checking that the ratio of the weights is such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
# c1_final and c2_final are initialised to the values on the hyperbolic curve such that c1_final = c2_final and c1_final*c2_final = c_star
c1_final = math.sqrt(c_star)
c2_final = math.sqrt(c_star)
# initialising min_distance_between_product_cor_metrics to be a value greater than 1
min_distance_between_product_cor_metrics = 1.5
i = 0.05
threshold = c_star / 0.05
while i <= threshold:
c2 = i
c1 = c_star / c2
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
product_cor_metric_simulated = correlation_y * correlation_t
if min_distance_between_product_cor_metrics >= abs(
product_cor_metric_simulated - product_cor_metric_observed
):
min_distance_between_product_cor_metrics = abs(
product_cor_metric_simulated - product_cor_metric_observed
)
additional_condition = correlation_y_observed / correlation_t_observed
if ((c1 / c2) <= (additional_condition + 0.3 * additional_condition)) and (
(c1 / c2) >= (additional_condition - 0.3 * additional_condition)
): # choose minimum positive value
c1_final = c1
c2_final = c2
i = i * 1.5
"""#closed form solution
print("c_star_max before closed form", c_star_max)
if max_correlation_with_t == -1000:
c2 = 0
c1 = c_star_max
else:
additional_condition = abs(max_correlation_with_y/max_correlation_with_t)
print("additional_condition", additional_condition)
c2 = math.sqrt(c_star_max/additional_condition)
c1 = c_star_max/c2"""
final_U = self.generate_confounder_from_residuals(c1_final, c2_final, d_y, d_t, X)
return final_U
def generate_confounder_from_residuals(self, c1, c2, d_y, d_t, X):
"""
This function takes the residuals from the treatment and outcome model and their coefficients and simulates the intermediate random variable U by taking
the row wise normal distribution corresponding to each residual value and then debiasing the intermediate variable to get the final variable.
:param c1: coefficient to the residual from the outcome model
:type float
:param c2: coefficient to the residual from the treatment model
:type float
:param d_y: residuals from the outcome model
:type list
:param d_t: residuals from the treatment model
:type list
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
U = []
for j in range(len(d_t)):
simulated_variable_mean = c1 * d_y[j] + c2 * d_t[j]
simulated_variable_stddev = 1
U.append(np.random.normal(simulated_variable_mean, simulated_variable_stddev, 1))
U = np.array(U)
model = sm.OLS(U, X)
results = model.fit()
U = U.reshape(
-1,
)
final_U = U - results.fittedvalues.values
final_U = pd.Series(final_U)
return final_U
| import copy
import logging
import math
import numpy as np
import pandas as pd
import scipy.stats
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from tqdm.auto import tqdm
import dowhy.causal_estimators.econml
from dowhy.causal_estimator import CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_refuter import CausalRefutation, CausalRefuter
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer
from dowhy.causal_refuters.non_parametric_sensitivity_analyzer import NonParametricSensitivityAnalyzer
from dowhy.causal_refuters.partial_linear_sensitivity_analyzer import PartialLinearSensitivityAnalyzer
class AddUnobservedCommonCause(CausalRefuter):
"""Add an unobserved confounder for refutation.
AddUnobservedCommonCause class supports three methods:
1) Simulation of an unobserved confounder
2) Linear partial R2 : Sensitivity Analysis for linear models.
3) Non-Parametric partial R2 based : Sensitivity Analysis for non-parametric models.
Supports additional parameters that can be specified in the refute_estimate() method.
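Example (an illustrative sketch; model, identified_estimand, estimate and the benchmark column "W0" are placeholder names, not objects defined in this module):
refutation = model.refute_estimate(identified_estimand, estimate,
method_name="add_unobserved_common_cause",
simulation_method="linear-partial-R2",
benchmark_common_causes=["W0"],
effect_fraction_on_treatment=[1, 2, 3])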
"""
def __init__(self, *args, **kwargs):
"""
Initialize the parameters required for the refuter.
For direct-simulation, if effect_strength_on_treatment or effect_strength_on_outcome is not
given, it is calculated automatically as a range between the
minimum and maximum effect strength of observed confounders on treatment
and outcome respectively.
:param simulation_method: The method to use for simulating effect of unobserved confounder. Possible values are ["direct-simulation", "linear-partial-R2", "non-parametric-partial-R2"].
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param effect_strength_on_treatment: float, numpy.ndarray: [Used when simulation_method="direct-simulation"] Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param effect_strength_on_outcome: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param partial_r2_confounder_treatment: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param partial_r2_confounder_outcome: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
If percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome in order to reduce the estimate by 100%, i.e., bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = False). (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param significance_level: confidence interval for statistical inference(default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param num_splits: number of splits for cross validation. (default = 5). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_data : shuffle data or not before splitting into folds (default = False). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_random_seed: seed for randomly shuffling data. (relevant only for non-parametric-partial-R2 simulation method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
"""
super().__init__(*args, **kwargs)
self.simulation_method = kwargs["simulation_method"] if "simulation_method" in kwargs else "direct-simulation"
self.effect_on_t = (
kwargs["confounders_effect_on_treatment"] if "confounders_effect_on_treatment" in kwargs else "binary_flip"
)
self.effect_on_y = (
kwargs["confounders_effect_on_outcome"] if "confounders_effect_on_outcome" in kwargs else "linear"
)
if self.simulation_method == "direct-simulation":
self.kappa_t = kwargs["effect_strength_on_treatment"] if "effect_strength_on_treatment" in kwargs else None
self.kappa_y = kwargs["effect_strength_on_outcome"] if "effect_strength_on_outcome" in kwargs else None
elif self.simulation_method in ["linear-partial-R2", "non-parametric-partial-R2"]:
self.kappa_t = (
kwargs["partial_r2_confounder_treatment"] if "partial_r2_confounder_treatment" in kwargs else None
)
self.kappa_y = (
kwargs["partial_r2_confounder_outcome"] if "partial_r2_confounder_outcome" in kwargs else None
)
else:
raise ValueError(
"simulation method is not supported. Try direct-simulation, linear-partial-R2 or non-parametric-partial-R2"
)
self.frac_strength_treatment = (
kwargs["effect_fraction_on_treatment"] if "effect_fraction_on_treatment" in kwargs else 1
)
self.frac_strength_outcome = (
kwargs["effect_fraction_on_outcome"] if "effect_fraction_on_outcome" in kwargs else 1
)
self.plotmethod = kwargs["plotmethod"] if "plotmethod" in kwargs else "colormesh"
self.percent_change_estimate = kwargs["percent_change_estimate"] if "percent_change_estimate" in kwargs else 1.0
self.significance_level = kwargs["significance_level"] if "significance_level" in kwargs else 0.05
self.confounder_increases_estimate = (
kwargs["confounder_increases_estimate"] if "confounder_increases_estimate" in kwargs else False
)
self.benchmark_common_causes = (
kwargs["benchmark_common_causes"] if "benchmark_common_causes" in kwargs else None
)
self.null_hypothesis_effect = kwargs["null_hypothesis_effect"] if "null_hypothesis_effect" in kwargs else 0
self.plot_estimate = kwargs["plot_estimate"] if "plot_estimate" in kwargs else True
self.num_splits = kwargs["num_splits"] if "num_splits" in kwargs else 5
self.shuffle_data = kwargs["shuffle_data"] if "shuffle_data" in kwargs else False
self.shuffle_random_seed = kwargs["shuffle_random_seed"] if "shuffle_random_seed" in kwargs else None
self.alpha_s_estimator_param_list = (
kwargs["alpha_s_estimator_param_list"] if "alpha_s_estimator_param_list" in kwargs else None
)
self.alpha_s_estimator_list = kwargs["alpha_s_estimator_list"] if "alpha_s_estimator_list" in kwargs else None
self.g_s_estimator_list = kwargs["g_s_estimator_list"] if "g_s_estimator_list" in kwargs else None
self.g_s_estimator_param_list = (
kwargs["g_s_estimator_param_list"] if "g_s_estimator_param_list" in kwargs else None
)
self.plugin_reisz = kwargs["plugin_reisz"] if "plugin_reisz" in kwargs else False
self.logger = logging.getLogger(__name__)
def infer_default_kappa_t(self, len_kappa_t=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ " automatically compute the default value of kappa_t."
+ " Provide a value for kappa_t"
)
t = self._data[self._treatment_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_t == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
tmodel = LogisticRegression().fit(observed_common_causes, t)
tpred = tmodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
tcap = tmodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(tcap - tpred)) / tpred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_t == "linear":
# Estimating the regression coefficient from standardized features to t
corrcoef_var_t = np.corrcoef(observed_common_causes, t, rowvar=False)[-1, :-1]
std_dev_t = np.std(t)[0]
max_coeff = max(corrcoef_var_t) * std_dev_t
min_coeff = min(corrcoef_var_t) * std_dev_t
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_treatment)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_t
self.logger.info("(Min, Max) kappa_t for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _compute_min_max_coeff(self, min_coeff, max_coeff, effect_strength_fraction):
max_coeff = effect_strength_fraction * max_coeff
min_coeff = effect_strength_fraction * min_coeff
return min_coeff, max_coeff
def infer_default_kappa_y(self, len_kappa_y=10):
"""Infer default effect strength of simulated confounder on outcome."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ " automatically compute the default value of kappa_y."
+ " Provide a value for kappa_y"
)
y = self._data[self._outcome_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_y == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
ymodel = LogisticRegression().fit(observed_common_causes, y)
ypred = ymodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
ycap = ymodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(ycap - ypred)) / ypred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_y == "linear":
corrcoef_var_y = np.corrcoef(observed_common_causes, y, rowvar=False)[-1, :-1]
std_dev_y = np.std(y)[0]
max_coeff = max(corrcoef_var_y) * std_dev_y
min_coeff = min(corrcoef_var_y) * std_dev_y
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_outcome)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_y
self.logger.info("(Min, Max) kappa_y for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def refute_estimate(self, show_progress_bar=False):
"""
This function attempts to add an unobserved common cause to the outcome and the treatment. At present, this behavior is implemented only for one-dimensional continuous
and binary variables. This function can take either single-valued inputs or a range of inputs; it then looks at the data type of the input and decides on the course of
action.
:return: CausalRefuter: An object that contains the estimated effect and a new effect and the name of the refutation used.
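For example (illustrative values): configuring the refuter with effect_strength_on_treatment=0.05 simulates a single confounder strength,
whereas effect_strength_on_treatment=np.arange(0.01, 0.06, 0.01) sweeps a range of strengths and, together with plotmethod, produces a sensitivity plot.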
"""
if self.simulation_method == "linear-partial-R2":
if not (isinstance(self._estimate.estimator, LinearRegressionEstimator)):
raise NotImplementedError(
"Currently only LinearRegressionEstimator is supported for Sensitivity Analysis"
)
if len(self._estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
if self.frac_strength_outcome == 1:
self.frac_strength_outcome = self.frac_strength_treatment
analyzer = LinearSensitivityAnalyzer(
estimator=self._estimate.estimator,
data=self._data,
treatment_name=self._treatment_name,
percent_change_estimate=self.percent_change_estimate,
significance_level=self.significance_level,
benchmark_common_causes=self.benchmark_common_causes,
null_hypothesis_effect=self.null_hypothesis_effect,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
common_causes_order=self._estimate.estimator._observed_common_causes.columns,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.simulation_method == "non-parametric-partial-R2":
# If the estimator used is LinearDML, partially linear sensitivity analysis will be automatically chosen
if isinstance(self._estimate.estimator, dowhy.causal_estimators.econml.Econml):
if self._estimate.estimator._econml_methodname == "econml.dml.LinearDML":
analyzer = PartialLinearSensitivityAnalyzer(
estimator=self._estimate._estimator_object,
observed_common_causes=self._estimate.estimator._observed_common_causes,
treatment=self._estimate.estimator._treatment,
outcome=self._estimate.estimator._outcome,
alpha_s_estimator_param_list=self.alpha_s_estimator_param_list,
g_s_estimator_list=self.g_s_estimator_list,
g_s_estimator_param_list=self.g_s_estimator_param_list,
effect_strength_treatment=self.kappa_t,
effect_strength_outcome=self.kappa_y,
benchmark_common_causes=self.benchmark_common_causes,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
analyzer = NonParametricSensitivityAnalyzer(
estimator=self._estimate.estimator,
observed_common_causes=self._estimate.estimator._observed_common_causes,
treatment=self._estimate.estimator._treatment,
outcome=self._estimate.estimator._outcome,
alpha_s_estimator_list=self.alpha_s_estimator_list,
alpha_s_estimator_param_list=self.alpha_s_estimator_param_list,
g_s_estimator_list=self.g_s_estimator_list,
g_s_estimator_param_list=self.g_s_estimator_param_list,
effect_strength_treatment=self.kappa_t,
effect_strength_outcome=self.kappa_y,
benchmark_common_causes=self.benchmark_common_causes,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
theta_s=self._estimate.value,
plugin_reisz=self.plugin_reisz,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.kappa_t is None:
self.kappa_t = self.infer_default_kappa_t()
if self.kappa_y is None:
self.kappa_y = self.infer_default_kappa_y()
if not isinstance(self.kappa_t, (list, np.ndarray)) and not isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with single value inputs
new_data = copy.deepcopy(self._data)
new_data = self.include_confounders_effect(new_data, self.kappa_t, self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(new_data, self._target_estimand, self._estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
refute.new_effect_array = np.array(new_effect.value)
refute.new_effect = new_effect.value
refute.add_refuter(self)
return refute
else: # Deal with multiple value inputs
if isinstance(self.kappa_t, (list, np.ndarray)) and isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with range inputs
# Get a 2D matrix of values
# x,y = np.meshgrid(self.kappa_t, self.kappa_y) # x,y are both MxN
results_matrix = np.random.rand(
len(self.kappa_t), len(self.kappa_y)
) # Matrix to hold all the results of NxM
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
for j in range(len(self.kappa_y)):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y[j])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value,
new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause",
)
results_matrix[i][j] = refute.new_effect # Populate the results
refute.new_effect_array = results_matrix
refute.new_effect = (np.min(results_matrix), np.max(results_matrix))
# Store the values into the refute object
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
oe = self._estimate.value
contour_levels = [oe / 4.0, oe / 2.0, (3.0 / 4) * oe, oe]
contour_levels.extend([0, np.min(results_matrix), np.max(results_matrix)])
if self.plotmethod == "contour":
cp = plt.contourf(self.kappa_y, self.kappa_t, results_matrix, levels=sorted(contour_levels))
# Adding a label on the contour line for the original estimate
fmt = {}
trueeffect_index = np.where(cp.levels == oe)[0][0]
fmt[cp.levels[trueeffect_index]] = "Estimated Effect"
# Label every other level using strings
plt.clabel(cp, [cp.levels[trueeffect_index]], inline=True, fmt=fmt)
plt.colorbar(cp)
elif self.plotmethod == "colormesh":
cp = plt.pcolormesh(self.kappa_y, self.kappa_t, results_matrix, shading="nearest")
plt.colorbar(cp, ticks=contour_levels)
ax.yaxis.set_ticks(self.kappa_t)
ax.xaxis.set_ticks(self.kappa_y)
plt.xticks(rotation=45)
ax.set_title("Effect of Unobserved Common Cause")
ax.set_ylabel("Value of Linear Constant on Treatment")
ax.set_xlabel("Value of Linear Constant on Outcome")
plt.show()
return refute
elif isinstance(self.kappa_t, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_t))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_t, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Treatment")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
elif isinstance(self.kappa_y, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_y))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_y)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t, self.kappa_y[i])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_y, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Outcome")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
def include_confounders_effect(self, new_data, kappa_t, kappa_y):
"""
This function deals with the change in the value of the data due to the effect of the unobserved confounder.
In the case of a binary flip, we flip only if the random number is greater than the threshold set.
In the case of a linear effect, we use the variable as the linear regression constant.
:param new_data: pandas.DataFrame: The data to be changed due to the effects of the unobserved confounder.
:param kappa_t: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:param kappa_y: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:return: pandas.DataFrame: The DataFrame that includes the effects of the unobserved confounder.
"""
num_rows = self._data.shape[0]
stdnorm = scipy.stats.norm()
w_random = stdnorm.rvs(num_rows)
if self.effect_on_t == "binary_flip":
alpha = 2 * kappa_t - 1 if kappa_t >= 0.5 else 1 - 2 * kappa_t
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_t >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._treatment_name] = (
1 - new_data.loc[rel_interval <= w_random, self._treatment_name]
)
for tname in self._treatment_name:
if pd.api.types.is_bool_dtype(self._data[tname]):
new_data = new_data.astype({tname: "bool"}, copy=False)
elif self.effect_on_t == "linear":
confounder_t_effect = kappa_t * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._treatment_name] = new_data[self._treatment_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_t_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
if self.effect_on_y == "binary_flip":
alpha = 2 * kappa_y - 1 if kappa_y >= 0.5 else 1 - 2 * kappa_y
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_y >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._outcome_name] = (
1 - new_data.loc[rel_interval <= w_random, self._outcome_name]
)
for yname in self._outcome_name:
if pd.api.types.is_bool_dtype(self._data[yname]):
new_data = new_data.astype({yname: "bool"}, copy=False)
elif self.effect_on_y == "linear":
confounder_y_effect = (-1) * kappa_y * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._outcome_name] = new_data[self._outcome_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_y_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
return new_data
def include_simulated_confounder(self, convergence_threshold=0.1, c_star_max=1000):
"""
This function simulates an unobserved confounder based on the data using the following steps:
1. It calculates the "residuals" from the treatment and outcome model
i.) The outcome model has outcome as the dependent variable and all the observed variables including treatment as independent variables
ii.) The treatment model has treatment as the dependent variable and all the observed variables as independent variables.
2. U is an intermediate random variable drawn from the normal distribution with the weighted average of residuals as mean and a unit variance
U ~ N(c1*d_y + c2*d_t, 1)
where
*d_y and d_t are residuals from the treatment and outcome model
*c1 and c2 are coefficients to the residuals
3. The final U, which is the simulated unobserved confounder is obtained by debiasing the intermediate variable U by residualising it with X
Choosing the coefficients c1 and c2:
The coefficients are chosen based on these basic assumptions:
1. There is a hyperbolic relationship satisfying c1*c2 = c_star
2. c_star is chosen from a range of possible values based on the correlation of the obtained simulated variable with outcome and treatment.
3. The product of correlations with treatment and outcome should be at a minimum distance to the maximum correlations with treatment and outcome in any of the observed confounders
4. The ratio of the weights should be such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
:param c_star_max: The maximum possible value for the hyperbolic curve on which the coefficients to the residuals lie. It defaults to 1000 in the code if not specified by the user.
:type int
:param convergence_threshold: The threshold to check the plateauing of the correlation while selecting a c_star. It defaults to 0.1 in the code if not specified by the user
:type float
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
# Obtaining the list of observed variables
required_variables = True
observed_variables = self.choose_variables(required_variables)
observed_variables_with_treatment_and_outcome = observed_variables + self._treatment_name + self._outcome_name
# Taking a subset of the dataframe that has only observed variables
self._data = self._data[observed_variables_with_treatment_and_outcome]
# Residuals from the outcome model obtained by fitting a linear model
y = self._data[self._outcome_name[0]]
observed_variables_with_treatment = observed_variables + self._treatment_name
X = self._data[observed_variables_with_treatment]
model = sm.OLS(y, X.astype("float"))
results = model.fit()
residuals_y = y - results.fittedvalues
d_y = list(pd.Series(residuals_y))
# Residuals from the treatment model obtained by fitting a linear model
t = self._data[self._treatment_name[0]].astype("int64")
X = self._data[observed_variables]
model = sm.OLS(t, X)
results = model.fit()
residuals_t = t - results.fittedvalues
d_t = list(pd.Series(residuals_t))
# Initialising product_cor_metric_observed with a really low value as finding maximum
product_cor_metric_observed = -10000000000
for i in observed_variables:
current_obs_confounder = self._data[i]
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_obs_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_obs_confounder.corr(treatment_values)
product_cor_metric_current = correlation_y * correlation_t
if product_cor_metric_current >= product_cor_metric_observed:
product_cor_metric_observed = product_cor_metric_current
correlation_t_observed = correlation_t
correlation_y_observed = correlation_y
# The user has an option to give the effect_strength_on_y and effect_strength_on_t, which can then be used instead of the maximum correlation with treatment and outcome in the observed variables, as it specifies the desired effect.
if self.kappa_t is not None:
correlation_t_observed = self.kappa_t
if self.kappa_y is not None:
correlation_y_observed = self.kappa_y
# Choosing a c_star based on the data.
# The correlations stop increasing upon increasing c_star after a certain value, that is, the correlation plateaus, and we choose c_star to be the value at which it plateaus.
correlation_y_list = []
correlation_t_list = []
product_cor_metric_simulated_list = []
x_list = []
step = int(c_star_max / 10)
for i in range(0, int(c_star_max), step):
c1 = math.sqrt(i)
c2 = c1
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
correlation_y_list.append(correlation_y)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
correlation_t_list.append(correlation_t)
product_cor_metric_simulated = correlation_y * correlation_t
product_cor_metric_simulated_list.append(product_cor_metric_simulated)
x_list.append(i)
index = 1
while index < len(correlation_y_list):
if (correlation_y_list[index] - correlation_y_list[index - 1]) <= convergence_threshold:
c_star = x_list[index]
break
index = index + 1
# Choosing c1 and c2 based on the hyperbolic relationship once c_star is chosen, by going over various combinations of c1 and c2 values and choosing the combination
# which maintains the minimum distance between the product of correlations of the simulated variable and the product of maximum correlations of one of the observed variables,
# and additionally checking that the ratio of the weights is such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
# c1_final and c2_final are initialised to the values on the hyperbolic curve such that c1_final = c2_final and c1_final*c2_final = c_star
c1_final = math.sqrt(c_star)
c2_final = math.sqrt(c_star)
# initialising min_distance_between_product_cor_metrics to be a value greater than 1
min_distance_between_product_cor_metrics = 1.5
i = 0.05
threshold = c_star / 0.05
while i <= threshold:
c2 = i
c1 = c_star / c2
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
product_cor_metric_simulated = correlation_y * correlation_t
if min_distance_between_product_cor_metrics >= abs(
product_cor_metric_simulated - product_cor_metric_observed
):
min_distance_between_product_cor_metrics = abs(
product_cor_metric_simulated - product_cor_metric_observed
)
additional_condition = correlation_y_observed / correlation_t_observed
if ((c1 / c2) <= (additional_condition + 0.3 * additional_condition)) and (
(c1 / c2) >= (additional_condition - 0.3 * additional_condition)
): # choose minimum positive value
c1_final = c1
c2_final = c2
i = i * 1.5
"""#closed form solution
print("c_star_max before closed form", c_star_max)
if max_correlation_with_t == -1000:
c2 = 0
c1 = c_star_max
else:
additional_condition = abs(max_correlation_with_y/max_correlation_with_t)
print("additional_condition", additional_condition)
c2 = math.sqrt(c_star_max/additional_condition)
c1 = c_star_max/c2"""
final_U = self.generate_confounder_from_residuals(c1_final, c2_final, d_y, d_t, X)
return final_U
def generate_confounder_from_residuals(self, c1, c2, d_y, d_t, X):
"""
This function takes the residuals from the treatment and outcome model and their coefficients and simulates the intermediate random variable U by taking
the row wise normal distribution corresponding to each residual value and then debiasing the intermediate variable to get the final variable.
:param c1: coefficient to the residual from the outcome model
:type float
:param c2: coefficient to the residual from the treatment model
:type float
:param d_y: residuals from the outcome model
:type list
:param d_t: residuals from the treatment model
:type list
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
U = []
for j in range(len(d_t)):
simulated_variable_mean = c1 * d_y[j] + c2 * d_t[j]
simulated_variable_stddev = 1
U.append(np.random.normal(simulated_variable_mean, simulated_variable_stddev, 1))
U = np.array(U)
model = sm.OLS(U, X)
results = model.fit()
U = U.reshape(
-1,
)
final_U = U - results.fittedvalues.values
final_U = pd.Series(final_U)
return final_U
| anusha0409 | 81841c697bd5e80ecf9e731432305f6186666f1f | bb446c333f2256074304b0dec9cb5628d284b542 | great point, Emre. Yes, let's take it up when we discuss how to refactor refutations to work across the estimation pipeline. | amit-sharma | 379 |
py-why/dowhy | 478 | Adding Non Linear Sensitivity Analysis | This PR implements the non-parametric sensitivity analysis from Chernozhukov et al. https://arxiv.org/abs/2112.13398
It implements two sensitivity analyzers:
1. For Partial Linear DGPs and estimators like LinearDML
2. For general non-parametric DGPs and estimators like KernelDML.
The notebook in this PR provides an introduction to how the sensitivity bounds are calculated for the partial linear case. For the general nonparametric DGPs, we need to estimate a special function called the Reisz representer. For binary treatment, it is exactly the difference in outcome weighted by propensity score. So we provide two options to learn the Reisz representer: 1) plugin_reisz, which uses the propensity score; and 2) a general estimator that uses a custom loss function. These two are in the file reisz.py.
Briefly, the sensitivity bounds depend on two parameters that denote the effect of the unobserved confounder on treatment and outcome. That's why we use the same API as for the `add_unobserved_common_cause` method and add this sensitivity analysis as a possible simulation method="non-parametric-partial-R2". The format of the plots is identical to those from the "linear-partial-r2" simulation method that is already implemented.
We provide two modes for the user (see the usage sketch after this list).
1) User specifies the effect strength parameters themselves, as a range of values.
2) User benchmarks the effect strength parameters as a multiple of the same parameters for the observed common causes.
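A minimal usage sketch of the two modes (assuming a fitted `CausalModel` named `model`, an `identified_estimand`, and an `estimate`; the column name `"W0"` and all numeric values are placeholders):
```python
import numpy as np
# Mode 1: specify the effect-strength (partial R2) parameters directly as ranges.
refutation = model.refute_estimate(
    identified_estimand,
    estimate,
    method_name="add_unobserved_common_cause",
    simulation_method="non-parametric-partial-R2",
    partial_r2_confounder_treatment=np.arange(0.01, 0.03, 0.01),
    partial_r2_confounder_outcome=np.arange(0.02, 0.04, 0.01),
)
# Mode 2: benchmark the parameters as multiples of those of the observed common causes.
refutation_benchmarked = model.refute_estimate(
    identified_estimand,
    estimate,
    method_name="add_unobserved_common_cause",
    simulation_method="non-parametric-partial-R2",
    benchmark_common_causes=["W0"],
    effect_fraction_on_treatment=[1, 2, 3],
    effect_fraction_on_outcome=[1, 2, 3],
)
```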
Signed-off-by: anusha <anushaagarwal2000.com> | null | 2022-06-20 14:37:11+00:00 | 2022-09-16 03:57:26+00:00 | dowhy/causal_refuters/add_unobserved_common_cause.py | import copy
import logging
import math
import numpy as np
import pandas as pd
import scipy.stats
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from tqdm.auto import tqdm
from dowhy.causal_estimator import CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_refuter import CausalRefutation, CausalRefuter
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer
class AddUnobservedCommonCause(CausalRefuter):
"""Add an unobserved confounder for refutation.
Supports additional parameters that can be specified in the refute_estimate() method.
- 'confounders_effect_on_treatment': how the simulated confounder affects the value of treatment. This can be linear (for continuous treatment) or binary_flip (for binary treatment)
- 'confounders_effect_on_outcome': how the simulated confounder affects the value of outcome. This can be linear (for continuous outcome) or binary_flip (for binary outcome)
- 'effect_strength_on_treatment': parameter for the strength of the effect of simulated confounder on treatment. For linear effect, it is the regression coefficient. For binary_flip, it is the probability that simulated confounder's effect flips the value of treatment from 0 to 1 (or vice-versa).
- 'effect_strength_on_outcome': parameter for the strength of the effect of simulated confounder on outcome. For linear effect, it is the regression coefficient. For binary_flip, it is the probability that simulated confounder's effect flips the value of outcome from 0 to 1 (or vice-versa).
TODO: Needs an interpretation module
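Example (an illustrative sketch; model, identified_estimand, estimate and the effect strengths are placeholder names/values):
refutation = model.refute_estimate(identified_estimand, estimate,
method_name="add_unobserved_common_cause",
confounders_effect_on_treatment="binary_flip",
confounders_effect_on_outcome="linear",
effect_strength_on_treatment=0.01,
effect_strength_on_outcome=0.02)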
"""
def __init__(self, *args, **kwargs):
"""
Initialize the parameters required for the refuter.
If effect_strength_on_treatment or effect_strength_on_outcome is not
given, it is calculated automatically as a range between the
minimum and maximum effect strength of observed confounders on treatment
and outcome respectively.
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param effect_strength_on_treatment: float, numpy.ndarray: This refers to the strength of the confounder on treatment. For a linear effect, it behaves like the regression coefficient. For a binary flip, it is the probability with which it can invert the value of the treatment.
:param effect_strength_on_outcome: float, numpy.ndarray: This refers to the strength of the confounder on outcome. For a linear effect, it behaves like the regression coefficient. For a binary flip, it is the probability with which it can invert the value of the outcome.
:param effect_fraction_on_treatment: float: If effect_strength_on_treatment is not provided, this parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param effect_fraction_on_outcome: float: If effect_strength_on_outcome is not provided, this parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:param simulated_method_name: method type to add unobserved common cause. "linear-partial-R2" for linear sensitivity analysis
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
If percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome in order to reduce the estimate by 100%, i.e., bring it down to 0.
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = False)
:param benchmark_common_causes: names of variables for bounding strength of confounders
:param significance_level: confidence interval for statistical inference(default = 0.05)
:param null_hypothesis_effect: assumed effect under the null hypothesis
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
To override the setting, set plot_estimate = False.
"""
super().__init__(*args, **kwargs)
self.effect_on_t = (
kwargs["confounders_effect_on_treatment"] if "confounders_effect_on_treatment" in kwargs else "binary_flip"
)
self.effect_on_y = (
kwargs["confounders_effect_on_outcome"] if "confounders_effect_on_outcome" in kwargs else "linear"
)
self.kappa_t = kwargs["effect_strength_on_treatment"] if "effect_strength_on_treatment" in kwargs else None
self.kappa_y = kwargs["effect_strength_on_outcome"] if "effect_strength_on_outcome" in kwargs else None
self.frac_strength_treatment = (
kwargs["effect_fraction_on_treatment"] if "effect_fraction_on_treatment" in kwargs else 1
)
self.frac_strength_outcome = (
kwargs["effect_fraction_on_outcome"] if "effect_fraction_on_outcome" in kwargs else 1
)
self.simulated_method_name = (
kwargs["simulated_method_name"] if "simulated_method_name" in kwargs else "linear_based"
)
self.plotmethod = kwargs["plotmethod"] if "plotmethod" in kwargs else "colormesh"
self.percent_change_estimate = kwargs["percent_change_estimate"] if "percent_change_estimate" in kwargs else 1.0
self.significance_level = kwargs["significance_level"] if "significance_level" in kwargs else 0.05
self.confounder_increases_estimate = (
kwargs["confounder_increases_estimate"] if "confounder_increases_estimate" in kwargs else False
)
self.benchmark_common_causes = (
kwargs["benchmark_common_causes"] if "benchmark_common_causes" in kwargs else None
)
self.null_hypothesis_effect = kwargs["null_hypothesis_effect"] if "null_hypothesis_effect" in kwargs else 0
self.plot_estimate = kwargs["plot_estimate"] if "plot_estimate" in kwargs else True
self.logger = logging.getLogger(__name__)
def infer_default_kappa_t(self, len_kappa_t=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_t."
+ " Provide a value for kappa_t"
)
t = self._data[self._treatment_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_t == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
tmodel = LogisticRegression().fit(observed_common_causes, t)
tpred = tmodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
tcap = tmodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(tcap - tpred)) / tpred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_t == "linear":
# Estimating the regression coefficient from standardized features to t
corrcoef_var_t = np.corrcoef(observed_common_causes, t, rowvar=False)[-1, :-1]
std_dev_t = np.std(t)[0]
max_coeff = max(corrcoef_var_t) * std_dev_t
min_coeff = min(corrcoef_var_t) * std_dev_t
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_treatment)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_t
self.logger.info("(Min, Max) kappa_t for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _compute_min_max_coeff(self, min_coeff, max_coeff, effect_strength_fraction):
max_coeff = effect_strength_fraction * max_coeff
min_coeff = effect_strength_fraction * min_coeff
return min_coeff, max_coeff
def infer_default_kappa_y(self, len_kappa_y=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_y."
+ " Provide a value for kappa_y"
)
y = self._data[self._outcome_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_y == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
ymodel = LogisticRegression().fit(observed_common_causes, y)
ypred = ymodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
ycap = ymodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(ycap - ypred)) / ypred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_y == "linear":
corrcoef_var_y = np.corrcoef(observed_common_causes, y, rowvar=False)[-1, :-1]
std_dev_y = np.std(y)[0]
max_coeff = max(corrcoef_var_y) * std_dev_y
min_coeff = min(corrcoef_var_y) * std_dev_y
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_outcome)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_y
self.logger.info("(Min, Max) kappa_y for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def refute_estimate(self, show_progress_bar=False):
"""
This function attempts to add an unobserved common cause to the outcome and the treatment. At present, the behavior is implemented for one-dimensional continuous
and binary variables. The function can take either single-valued inputs or a range of inputs; it then looks at the data type of the input and decides on the course of
action.
:return: CausalRefuter: An object that contains the estimated effect, the new effect, and the name of the refutation used.
"""
if self.simulated_method_name == "linear-partial-R2":
if not (isinstance(self._estimate.estimator, LinearRegressionEstimator)):
raise NotImplementedError(
"Currently only LinearRegressionEstimator is supported for Sensitivity Analysis"
)
if len(self._estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
if self.frac_strength_outcome == 1:
self.frac_strength_outcome = self.frac_strength_treatment
analyzer = LinearSensitivityAnalyzer(
estimator=self._estimate.estimator,
data=self._data,
treatment_name=self._treatment_name,
percent_change_estimate=self.percent_change_estimate,
significance_level=self.significance_level,
benchmark_common_causes=self.benchmark_common_causes,
null_hypothesis_effect=self.null_hypothesis_effect,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
common_causes_order=self._estimate.estimator._observed_common_causes.columns,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.kappa_t is None:
self.kappa_t = self.infer_default_kappa_t()
if self.kappa_y is None:
self.kappa_y = self.infer_default_kappa_y()
if not isinstance(self.kappa_t, (list, np.ndarray)) and not isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with single value inputs
new_data = copy.deepcopy(self._data)
new_data = self.include_confounders_effect(new_data, self.kappa_t, self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(new_data, self._target_estimand, self._estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
refute.new_effect_array = np.array(new_effect.value)
refute.new_effect = new_effect.value
refute.add_refuter(self)
return refute
else: # Deal with multiple value inputs
if isinstance(self.kappa_t, (list, np.ndarray)) and isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with range inputs
# Get a 2D matrix of values
# x,y = np.meshgrid(self.kappa_t, self.kappa_y) # x,y are both MxN
results_matrix = np.random.rand(
len(self.kappa_t), len(self.kappa_y)
) # Matrix to hold all the results of NxM
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
for j in range(len(self.kappa_y)):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y[j])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value,
new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause",
)
results_matrix[i][j] = refute.new_effect # Populate the results
refute.new_effect_array = results_matrix
refute.new_effect = (np.min(results_matrix), np.max(results_matrix))
# Store the values into the refute object
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
oe = self._estimate.value
contour_levels = [oe / 4.0, oe / 2.0, (3.0 / 4) * oe, oe]
contour_levels.extend([0, np.min(results_matrix), np.max(results_matrix)])
if self.plotmethod == "contour":
cp = plt.contourf(self.kappa_y, self.kappa_t, results_matrix, levels=sorted(contour_levels))
# Adding a label on the contour line for the original estimate
fmt = {}
trueeffect_index = np.where(cp.levels == oe)[0][0]
fmt[cp.levels[trueeffect_index]] = "Estimated Effect"
# Label every other level using strings
plt.clabel(cp, [cp.levels[trueeffect_index]], inline=True, fmt=fmt)
plt.colorbar(cp)
elif self.plotmethod == "colormesh":
cp = plt.pcolormesh(self.kappa_y, self.kappa_t, results_matrix, shading="nearest")
plt.colorbar(cp, ticks=contour_levels)
ax.yaxis.set_ticks(self.kappa_t)
ax.xaxis.set_ticks(self.kappa_y)
plt.xticks(rotation=45)
ax.set_title("Effect of Unobserved Common Cause")
ax.set_ylabel("Value of Linear Constant on Treatment")
ax.set_xlabel("Value of Linear Constant on Outcome")
plt.show()
return refute
elif isinstance(self.kappa_t, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_t))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_t, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Treatment")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
elif isinstance(self.kappa_y, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_y))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_y)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t, self.kappa_y[i])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_y, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Outcome")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
def include_confounders_effect(self, new_data, kappa_t, kappa_y):
"""
This function deals with the change in the value of the data due to the effect of the unobserved confounder.
In the case of a binary flip, we flip only if the random number is greater than the threshold set.
In the case of a linear effect, we use the variable as the linear regression constant.
:param new_data: pandas.DataFrame: The data to be changed due to the effects of the unobserved confounder.
:param kappa_t: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:param kappa_y: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:return: pandas.DataFrame: The DataFrame that includes the effects of the unobserved confounder.
"""
num_rows = self._data.shape[0]
stdnorm = scipy.stats.norm()
w_random = stdnorm.rvs(num_rows)
if self.effect_on_t == "binary_flip":
alpha = 2 * kappa_t - 1 if kappa_t >= 0.5 else 1 - 2 * kappa_t
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_t >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._treatment_name] = (
1 - new_data.loc[rel_interval <= w_random, self._treatment_name]
)
for tname in self._treatment_name:
if pd.api.types.is_bool_dtype(self._data[tname]):
new_data = new_data.astype({tname: "bool"}, copy=False)
elif self.effect_on_t == "linear":
confounder_t_effect = kappa_t * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._treatment_name] = new_data[self._treatment_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_t_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
if self.effect_on_y == "binary_flip":
alpha = 2 * kappa_y - 1 if kappa_y >= 0.5 else 1 - 2 * kappa_y
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_y >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._outcome_name] = (
1 - new_data.loc[rel_interval <= w_random, self._outcome_name]
)
for yname in self._outcome_name:
if pd.api.types.is_bool_dtype(self._data[yname]):
new_data = new_data.astype({yname: "bool"}, copy=False)
elif self.effect_on_y == "linear":
confounder_y_effect = (-1) * kappa_y * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._outcome_name] = new_data[self._outcome_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_y_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
return new_data
def include_simulated_confounder(self, convergence_threshold=0.1, c_star_max=1000):
"""
This function simulates an unobserved confounder based on the data using the following steps:
1. It calculates the "residuals" from the treatment and outcome model
i.) The outcome model has outcome as the dependent variable and all the observed variables including treatment as independent variables
ii.) The treatment model has treatment as the dependent variable and all the observed variables as independent variables.
2. U is an intermediate random variable drawn from the normal distribution with the weighted average of residuals as mean and a unit variance
U ~ N(c1*d_y + c2*d_t, 1)
where
*d_y and d_t are residuals from the treatment and outcome model
*c1 and c2 are coefficients to the residuals
3. The final U, which is the simulated unobserved confounder, is obtained by debiasing the intermediate variable U by residualising it with X
Choosing the coefficients c1 and c2:
The coefficients are chosen based on these basic assumptions:
1. There is a hyperbolic relationship satisfying c1*c2 = c_star
2. c_star is chosen from a range of possible values based on the correlation of the obtained simulated variable with outcome and treatment.
3. The product of correlations with treatment and outcome should be at a minimum distance to the maximum correlations with treatment and outcome in any of the observed confounders
4. The ratio of the weights should be such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
:param c_star_max: The maximum possible value for the hyperbolic curve on which the coefficients to the residuals lie. It defaults to 1000 in the code if not specified by the user.
:type c_star_max: int
:param convergence_threshold: The threshold to check the plateauing of the correlation while selecting a c_star. It defaults to 0.1 in the code if not specified by the user.
:type convergence_threshold: float
:returns: The simulated values of the unobserved confounder based on the data
:rtype: pandas.core.series.Series
"""
# Obtaining the list of observed variables
required_variables = True
observed_variables = self.choose_variables(required_variables)
observed_variables_with_treatment_and_outcome = observed_variables + self._treatment_name + self._outcome_name
# Taking a subset of the dataframe that has only observed variables
self._data = self._data[observed_variables_with_treatment_and_outcome]
# Residuals from the outcome model obtained by fitting a linear model
y = self._data[self._outcome_name[0]]
observed_variables_with_treatment = observed_variables + self._treatment_name
X = self._data[observed_variables_with_treatment]
model = sm.OLS(y, X.astype("float"))
results = model.fit()
residuals_y = y - results.fittedvalues
d_y = list(pd.Series(residuals_y))
# Residuals from the treatment model obtained by fitting a linear model
t = self._data[self._treatment_name[0]].astype("int64")
X = self._data[observed_variables]
model = sm.OLS(t, X)
results = model.fit()
residuals_t = t - results.fittedvalues
d_t = list(pd.Series(residuals_t))
# Initialising product_cor_metric_observed with a very low value, since we are searching for the maximum
product_cor_metric_observed = -10000000000
for i in observed_variables:
current_obs_confounder = self._data[i]
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_obs_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_obs_confounder.corr(treatment_values)
product_cor_metric_current = correlation_y * correlation_t
if product_cor_metric_current >= product_cor_metric_observed:
product_cor_metric_observed = product_cor_metric_current
correlation_t_observed = correlation_t
correlation_y_observed = correlation_y
# The user has an option to give the effect_strength_on_y and effect_strength_on_t, which can then be used instead of the maximum correlation with treatment and outcome in the observed variables, as it specifies the desired effect.
if self.kappa_t is not None:
correlation_t_observed = self.kappa_t
if self.kappa_y is not None:
correlation_y_observed = self.kappa_y
# Choosing a c_star based on the data.
# The correlations stop increasing upon increasing c_star after a certain value, that is, they plateau, and we choose c_star to be the value at which they plateau.
correlation_y_list = []
correlation_t_list = []
product_cor_metric_simulated_list = []
x_list = []
step = int(c_star_max / 10)
for i in range(0, int(c_star_max), step):
c1 = math.sqrt(i)
c2 = c1
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
correlation_y_list.append(correlation_y)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
correlation_t_list.append(correlation_t)
product_cor_metric_simulated = correlation_y * correlation_t
product_cor_metric_simulated_list.append(product_cor_metric_simulated)
x_list.append(i)
index = 1
while index < len(correlation_y_list):
if (correlation_y_list[index] - correlation_y_list[index - 1]) <= convergence_threshold:
c_star = x_list[index]
break
index = index + 1
# Choosing c1 and c2 based on the hyperbolic relationship once c_star is chosen, by going over various combinations of c1 and c2 values and choosing the combination
# that maintains the minimum distance between the product of correlations of the simulated variable and the product of maximum correlations of one of the observed variables,
# and additionally checking that the ratio of the weights maintains the ratio of the maximum possible observed coefficients within some confidence interval
# c1_final and c2_final are initialised to the values on the hyperbolic curve such that c1_final = c2_final and c1_final*c2_final = c_star
c1_final = math.sqrt(c_star)
c2_final = math.sqrt(c_star)
# initialising min_distance_between_product_cor_metrics to be a value greater than 1
min_distance_between_product_cor_metrics = 1.5
i = 0.05
threshold = c_star / 0.05
while i <= threshold:
c2 = i
c1 = c_star / c2
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
product_cor_metric_simulated = correlation_y * correlation_t
if min_distance_between_product_cor_metrics >= abs(
product_cor_metric_simulated - product_cor_metric_observed
):
min_distance_between_product_cor_metrics = abs(
product_cor_metric_simulated - product_cor_metric_observed
)
additional_condition = correlation_y_observed / correlation_t_observed
if ((c1 / c2) <= (additional_condition + 0.3 * additional_condition)) and (
(c1 / c2) >= (additional_condition - 0.3 * additional_condition)
): # choose minimum positive value
c1_final = c1
c2_final = c2
i = i * 1.5
"""#closed form solution
print("c_star_max before closed form", c_star_max)
if max_correlation_with_t == -1000:
c2 = 0
c1 = c_star_max
else:
additional_condition = abs(max_correlation_with_y/max_correlation_with_t)
print("additional_condition", additional_condition)
c2 = math.sqrt(c_star_max/additional_condition)
c1 = c_star_max/c2"""
final_U = self.generate_confounder_from_residuals(c1_final, c2_final, d_y, d_t, X)
return final_U
def generate_confounder_from_residuals(self, c1, c2, d_y, d_t, X):
"""
This function takes the residuals from the treatment and outcome models, together with their coefficients, and simulates the intermediate random variable U by drawing,
for each row, from a normal distribution whose mean is the weighted sum of the residuals; it then debiases the intermediate variable to obtain the final variable.
:param c1: coefficient to the residual from the outcome model
:type c1: float
:param c2: coefficient to the residual from the treatment model
:type c2: float
:param d_y: residuals from the outcome model
:type d_y: list
:param d_t: residuals from the treatment model
:type d_t: list
:returns: The simulated values of the unobserved confounder based on the data
:rtype: pandas.core.series.Series
"""
U = []
for j in range(len(d_t)):
simulated_variable_mean = c1 * d_y[j] + c2 * d_t[j]
simulated_variable_stddev = 1
U.append(np.random.normal(simulated_variable_mean, simulated_variable_stddev, 1))
U = np.array(U)
model = sm.OLS(U, X)
results = model.fit()
U = U.reshape(
-1,
)
final_U = U - results.fittedvalues.values
final_U = pd.Series(final_U)
return final_U
import copy
import logging
import math
import numpy as np
import pandas as pd
import scipy.stats
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from tqdm.auto import tqdm
import dowhy.causal_estimators.econml
from dowhy.causal_estimator import CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_refuter import CausalRefutation, CausalRefuter
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer
from dowhy.causal_refuters.non_parametric_sensitivity_analyzer import NonParametricSensitivityAnalyzer
from dowhy.causal_refuters.partial_linear_sensitivity_analyzer import PartialLinearSensitivityAnalyzer
class AddUnobservedCommonCause(CausalRefuter):
"""Add an unobserved confounder for refutation.
AddUnobservedCommonCause class supports three methods:
1) Simulation of an unobserved confounder
2) Linear partial R2: Sensitivity Analysis for linear models.
3) Non-Parametric partial R2 based: Sensitivity Analysis for non-parametric models.
Supports additional parameters that can be specified in the refute_estimate() method.
"""
def __init__(self, *args, **kwargs):
"""
Initialize the parameters required for the refuter.
For direct_simulation, if effect_strength_on_treatment or effect_strength_on_outcome is not
given, it is calculated automatically as a range between the
minimum and maximum effect strength of observed confounders on treatment
and outcome respectively.
:param simulation_method: The method to use for simulating effect of unobserved confounder. Possible values are ["direct-simulation", "linear-partial-R2", "non-parametric-partial-R2"].
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param effect_strength_on_treatment: float, numpy.ndarray: [Used when simulation_method="direct-simulation"] Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When confounders_effect_on_treatment is binary flip, it is the probability with which the unobserved confounder's effect can invert the value of the treatment.
:param effect_strength_on_outcome: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param partial_r2_confounder_treatment: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the Riesz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of the variance of the Riesz representer, alpha^2, based on observed confounders to that based on all confounders.
:param partial_r2_confounder_outcome: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome in order to reduce the estimate by 100%, i.e., bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = False). (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param significance_level: significance level for statistical inference (default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param num_splits: number of splits for cross validation. (default = 5). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_data : shuffle data or not before splitting into folds (default = False). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_random_seed: seed for randomly shuffling data. (relevant only for non-parametric-partial-R2 simulation method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
"""
super().__init__(*args, **kwargs)
self.simulation_method = kwargs["simulation_method"] if "simulation_method" in kwargs else "direct-simulation"
self.effect_on_t = (
kwargs["confounders_effect_on_treatment"] if "confounders_effect_on_treatment" in kwargs else "binary_flip"
)
self.effect_on_y = (
kwargs["confounders_effect_on_outcome"] if "confounders_effect_on_outcome" in kwargs else "linear"
)
if self.simulation_method == "direct-simulation":
self.kappa_t = kwargs["effect_strength_on_treatment"] if "effect_strength_on_treatment" in kwargs else None
self.kappa_y = kwargs["effect_strength_on_outcome"] if "effect_strength_on_outcome" in kwargs else None
elif self.simulation_method in ["linear-partial-R2", "non-parametric-partial-R2"]:
self.kappa_t = (
kwargs["partial_r2_confounder_treatment"] if "partial_r2_confounder_treatment" in kwargs else None
)
self.kappa_y = (
kwargs["partial_r2_confounder_outcome"] if "partial_r2_confounder_outcome" in kwargs else None
)
else:
raise ValueError(
"simulation method is not supported. Try direct-simulation, linear-partial-R2 or non-parametric-partial-R2"
)
self.frac_strength_treatment = (
kwargs["effect_fraction_on_treatment"] if "effect_fraction_on_treatment" in kwargs else 1
)
self.frac_strength_outcome = (
kwargs["effect_fraction_on_outcome"] if "effect_fraction_on_outcome" in kwargs else 1
)
self.plotmethod = kwargs["plotmethod"] if "plotmethod" in kwargs else "colormesh"
self.percent_change_estimate = kwargs["percent_change_estimate"] if "percent_change_estimate" in kwargs else 1.0
self.significance_level = kwargs["significance_level"] if "significance_level" in kwargs else 0.05
self.confounder_increases_estimate = (
kwargs["confounder_increases_estimate"] if "confounder_increases_estimate" in kwargs else False
)
self.benchmark_common_causes = (
kwargs["benchmark_common_causes"] if "benchmark_common_causes" in kwargs else None
)
self.null_hypothesis_effect = kwargs["null_hypothesis_effect"] if "null_hypothesis_effect" in kwargs else 0
self.plot_estimate = kwargs["plot_estimate"] if "plot_estimate" in kwargs else True
self.num_splits = kwargs["num_splits"] if "num_splits" in kwargs else 5
self.shuffle_data = kwargs["shuffle_data"] if "shuffle_data" in kwargs else False
self.shuffle_random_seed = kwargs["shuffle_random_seed"] if "shuffle_random_seed" in kwargs else None
self.alpha_s_estimator_param_list = (
kwargs["alpha_s_estimator_param_list"] if "alpha_s_estimator_param_list" in kwargs else None
)
self.alpha_s_estimator_list = kwargs["alpha_s_estimator_list"] if "alpha_s_estimator_list" in kwargs else None
self.g_s_estimator_list = kwargs["g_s_estimator_list"] if "g_s_estimator_list" in kwargs else None
self.g_s_estimator_param_list = (
kwargs["g_s_estimator_param_list"] if "g_s_estimator_param_list" in kwargs else None
)
self.plugin_reisz = kwargs["plugin_reisz"] if "plugin_reisz" in kwargs else False
self.logger = logging.getLogger(__name__)
def infer_default_kappa_t(self, len_kappa_t=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_t."
+ " Provide a value for kappa_t"
)
t = self._data[self._treatment_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_t == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
tmodel = LogisticRegression().fit(observed_common_causes, t)
tpred = tmodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
tcap = tmodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(tcap - tpred)) / tpred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_t == "linear":
# Estimating the regression coefficient from standardized features to t
corrcoef_var_t = np.corrcoef(observed_common_causes, t, rowvar=False)[-1, :-1]
std_dev_t = np.std(t)[0]
max_coeff = max(corrcoef_var_t) * std_dev_t
min_coeff = min(corrcoef_var_t) * std_dev_t
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_treatment)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_t
self.logger.info("(Min, Max) kappa_t for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _compute_min_max_coeff(self, min_coeff, max_coeff, effect_strength_fraction):
max_coeff = effect_strength_fraction * max_coeff
min_coeff = effect_strength_fraction * min_coeff
return min_coeff, max_coeff
def infer_default_kappa_y(self, len_kappa_y=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_y."
+ " Provide a value for kappa_y"
)
y = self._data[self._outcome_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_y == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
ymodel = LogisticRegression().fit(observed_common_causes, y)
ypred = ymodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
ycap = ymodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(ycap - ypred)) / ypred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_y == "linear":
corrcoef_var_y = np.corrcoef(observed_common_causes, y, rowvar=False)[-1, :-1]
std_dev_y = np.std(y)[0]
max_coeff = max(corrcoef_var_y) * std_dev_y
min_coeff = min(corrcoef_var_y) * std_dev_y
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_outcome)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_y
self.logger.info("(Min, Max) kappa_y for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def refute_estimate(self, show_progress_bar=False):
"""
This function attempts to add an unobserved common cause to the outcome and the treatment. At present, the behavior is implemented for one-dimensional continuous
and binary variables. The function can take either single-valued inputs or a range of inputs; it then looks at the data type of the input and decides on the course of
action.
:return: CausalRefuter: An object that contains the estimated effect, the new effect, and the name of the refutation used.
"""
if self.simulation_method == "linear-partial-R2":
if not (isinstance(self._estimate.estimator, LinearRegressionEstimator)):
raise NotImplementedError(
"Currently only LinearRegressionEstimator is supported for Sensitivity Analysis"
)
if len(self._estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
if self.frac_strength_outcome == 1:
self.frac_strength_outcome = self.frac_strength_treatment
analyzer = LinearSensitivityAnalyzer(
estimator=self._estimate.estimator,
data=self._data,
treatment_name=self._treatment_name,
percent_change_estimate=self.percent_change_estimate,
significance_level=self.significance_level,
benchmark_common_causes=self.benchmark_common_causes,
null_hypothesis_effect=self.null_hypothesis_effect,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
common_causes_order=self._estimate.estimator._observed_common_causes.columns,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.simulation_method == "non-parametric-partial-R2":
# If the estimator used is LinearDML, partially linear sensitivity analysis will be automatically chosen
if isinstance(self._estimate.estimator, dowhy.causal_estimators.econml.Econml):
if self._estimate.estimator._econml_methodname == "econml.dml.LinearDML":
analyzer = PartialLinearSensitivityAnalyzer(
estimator=self._estimate._estimator_object,
observed_common_causes=self._estimate.estimator._observed_common_causes,
treatment=self._estimate.estimator._treatment,
outcome=self._estimate.estimator._outcome,
alpha_s_estimator_param_list=self.alpha_s_estimator_param_list,
g_s_estimator_list=self.g_s_estimator_list,
g_s_estimator_param_list=self.g_s_estimator_param_list,
effect_strength_treatment=self.kappa_t,
effect_strength_outcome=self.kappa_y,
benchmark_common_causes=self.benchmark_common_causes,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
analyzer = NonParametricSensitivityAnalyzer(
estimator=self._estimate.estimator,
observed_common_causes=self._estimate.estimator._observed_common_causes,
treatment=self._estimate.estimator._treatment,
outcome=self._estimate.estimator._outcome,
alpha_s_estimator_list=self.alpha_s_estimator_list,
alpha_s_estimator_param_list=self.alpha_s_estimator_param_list,
g_s_estimator_list=self.g_s_estimator_list,
g_s_estimator_param_list=self.g_s_estimator_param_list,
effect_strength_treatment=self.kappa_t,
effect_strength_outcome=self.kappa_y,
benchmark_common_causes=self.benchmark_common_causes,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
theta_s=self._estimate.value,
plugin_reisz=self.plugin_reisz,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.kappa_t is None:
self.kappa_t = self.infer_default_kappa_t()
if self.kappa_y is None:
self.kappa_y = self.infer_default_kappa_y()
if not isinstance(self.kappa_t, (list, np.ndarray)) and not isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with single value inputs
new_data = copy.deepcopy(self._data)
new_data = self.include_confounders_effect(new_data, self.kappa_t, self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(new_data, self._target_estimand, self._estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
refute.new_effect_array = np.array(new_effect.value)
refute.new_effect = new_effect.value
refute.add_refuter(self)
return refute
else: # Deal with multiple value inputs
if isinstance(self.kappa_t, (list, np.ndarray)) and isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with range inputs
# Get a 2D matrix of values
# x,y = np.meshgrid(self.kappa_t, self.kappa_y) # x,y are both MxN
results_matrix = np.random.rand(
len(self.kappa_t), len(self.kappa_y)
) # Matrix to hold all the results of NxM
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
for j in range(len(self.kappa_y)):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y[j])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value,
new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause",
)
results_matrix[i][j] = refute.new_effect # Populate the results
refute.new_effect_array = results_matrix
refute.new_effect = (np.min(results_matrix), np.max(results_matrix))
# Store the values into the refute object
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
oe = self._estimate.value
contour_levels = [oe / 4.0, oe / 2.0, (3.0 / 4) * oe, oe]
contour_levels.extend([0, np.min(results_matrix), np.max(results_matrix)])
if self.plotmethod == "contour":
cp = plt.contourf(self.kappa_y, self.kappa_t, results_matrix, levels=sorted(contour_levels))
# Adding a label on the contour line for the original estimate
fmt = {}
trueeffect_index = np.where(cp.levels == oe)[0][0]
fmt[cp.levels[trueeffect_index]] = "Estimated Effect"
# Label every other level using strings
plt.clabel(cp, [cp.levels[trueeffect_index]], inline=True, fmt=fmt)
plt.colorbar(cp)
elif self.plotmethod == "colormesh":
cp = plt.pcolormesh(self.kappa_y, self.kappa_t, results_matrix, shading="nearest")
plt.colorbar(cp, ticks=contour_levels)
ax.yaxis.set_ticks(self.kappa_t)
ax.xaxis.set_ticks(self.kappa_y)
plt.xticks(rotation=45)
ax.set_title("Effect of Unobserved Common Cause")
ax.set_ylabel("Value of Linear Constant on Treatment")
ax.set_xlabel("Value of Linear Constant on Outcome")
plt.show()
return refute
elif isinstance(self.kappa_t, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_t))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_t, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Treatment")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
elif isinstance(self.kappa_y, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_y))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_y)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t, self.kappa_y[i])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_y, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Outcome")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
def include_confounders_effect(self, new_data, kappa_t, kappa_y):
"""
This function deals with the change in the value of the data due to the effect of the unobserved confounder.
In the case of a binary flip, we flip only if the random number is greater than the threshold set.
In the case of a linear effect, we use the variable as the linear regression constant.
:param new_data: pandas.DataFrame: The data to be changed due to the effects of the unobserved confounder.
:param kappa_t: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:param kappa_y: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:return: pandas.DataFrame: The DataFrame that includes the effects of the unobserved confounder.
"""
num_rows = self._data.shape[0]
stdnorm = scipy.stats.norm()
w_random = stdnorm.rvs(num_rows)
if self.effect_on_t == "binary_flip":
alpha = 2 * kappa_t - 1 if kappa_t >= 0.5 else 1 - 2 * kappa_t
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_t >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._treatment_name] = (
1 - new_data.loc[rel_interval <= w_random, self._treatment_name]
)
for tname in self._treatment_name:
if pd.api.types.is_bool_dtype(self._data[tname]):
new_data = new_data.astype({tname: "bool"}, copy=False)
elif self.effect_on_t == "linear":
confounder_t_effect = kappa_t * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._treatment_name] = new_data[self._treatment_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_t_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
if self.effect_on_y == "binary_flip":
alpha = 2 * kappa_y - 1 if kappa_y >= 0.5 else 1 - 2 * kappa_y
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_y >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._outcome_name] = (
1 - new_data.loc[rel_interval <= w_random, self._outcome_name]
)
for yname in self._outcome_name:
if pd.api.types.is_bool_dtype(self._data[yname]):
new_data = new_data.astype({yname: "bool"}, copy=False)
elif self.effect_on_y == "linear":
confounder_y_effect = (-1) * kappa_y * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._outcome_name] = new_data[self._outcome_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_y_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
return new_data
def include_simulated_confounder(self, convergence_threshold=0.1, c_star_max=1000):
"""
This function simulates an unobserved confounder based on the data using the following steps:
1. It calculates the "residuals" from the treatment and outcome model
i.) The outcome model has outcome as the dependent variable and all the observed variables including treatment as independent variables
ii.) The treatment model has treatment as the dependent variable and all the observed variables as independent variables.
2. U is an intermediate random variable drawn from the normal distribution with the weighted average of residuals as mean and a unit variance
U ~ N(c1*d_y + c2*d_t, 1)
where
*d_y and d_t are residuals from the treatment and outcome model
*c1 and c2 are coefficients to the residuals
3. The final U, which is the simulated unobserved confounder, is obtained by debiasing the intermediate variable U by residualising it with X
Choosing the coefficients c1 and c2:
The coefficients are chosen based on these basic assumptions:
1. There is a hyperbolic relationship satisfying c1*c2 = c_star
2. c_star is chosen from a range of possible values based on the correlation of the obtained simulated variable with outcome and treatment.
3. The product of correlations with treatment and outcome should be at a minimum distance to the maximum correlations with treatment and outcome in any of the observed confounders
4. The ratio of the weights should be such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
:param c_star_max: The maximum possible value for the hyperbolic curve on which the coefficients to the residuals lie. It defaults to 1000 in the code if not specified by the user.
:type c_star_max: int
:param convergence_threshold: The threshold to check the plateauing of the correlation while selecting a c_star. It defaults to 0.1 in the code if not specified by the user.
:type convergence_threshold: float
:returns: The simulated values of the unobserved confounder based on the data
:rtype: pandas.core.series.Series
"""
# Obtaining the list of observed variables
required_variables = True
observed_variables = self.choose_variables(required_variables)
observed_variables_with_treatment_and_outcome = observed_variables + self._treatment_name + self._outcome_name
# Taking a subset of the dataframe that has only observed variables
self._data = self._data[observed_variables_with_treatment_and_outcome]
# Residuals from the outcome model obtained by fitting a linear model
y = self._data[self._outcome_name[0]]
observed_variables_with_treatment = observed_variables + self._treatment_name
X = self._data[observed_variables_with_treatment]
model = sm.OLS(y, X.astype("float"))
results = model.fit()
residuals_y = y - results.fittedvalues
d_y = list(pd.Series(residuals_y))
# Residuals from the treatment model obtained by fitting a linear model
t = self._data[self._treatment_name[0]].astype("int64")
X = self._data[observed_variables]
model = sm.OLS(t, X)
results = model.fit()
residuals_t = t - results.fittedvalues
d_t = list(pd.Series(residuals_t))
# Initialising product_cor_metric_observed with a very low value, since we are searching for the maximum
product_cor_metric_observed = -10000000000
for i in observed_variables:
current_obs_confounder = self._data[i]
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_obs_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_obs_confounder.corr(treatment_values)
product_cor_metric_current = correlation_y * correlation_t
if product_cor_metric_current >= product_cor_metric_observed:
product_cor_metric_observed = product_cor_metric_current
correlation_t_observed = correlation_t
correlation_y_observed = correlation_y
# The user has an option to give the effect_strength_on_y and effect_strength_on_t, which can then be used instead of the maximum correlation with treatment and outcome in the observed variables, as it specifies the desired effect.
if self.kappa_t is not None:
correlation_t_observed = self.kappa_t
if self.kappa_y is not None:
correlation_y_observed = self.kappa_y
# Choosing a c_star based on the data.
# The correlations stop increasing upon increasing c_star after a certain value, that is, they plateau, and we choose c_star to be the value at which they plateau.
correlation_y_list = []
correlation_t_list = []
product_cor_metric_simulated_list = []
x_list = []
step = int(c_star_max / 10)
for i in range(0, int(c_star_max), step):
c1 = math.sqrt(i)
c2 = c1
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
correlation_y_list.append(correlation_y)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
correlation_t_list.append(correlation_t)
product_cor_metric_simulated = correlation_y * correlation_t
product_cor_metric_simulated_list.append(product_cor_metric_simulated)
x_list.append(i)
index = 1
while index < len(correlation_y_list):
if (correlation_y_list[index] - correlation_y_list[index - 1]) <= convergence_threshold:
c_star = x_list[index]
break
index = index + 1
# Choosing c1 and c2 based on the hyperbolic relationship once c_star is chosen by going over various combinations of c1 and c2 values and choosing the combination which
# maintains the minimum distance between the product of correlations of the simulated variable and the product of maximum correlations of one of the observed variables
# and additionally checks that the ratio of the weights maintains the ratio of the maximum possible observed coefficients within some confidence interval
# c1_final and c2_final are initialised to the values on the hyperbolic curve such that c1_final = c2_final and c1_final*c2_final = c_star
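# Search sketch: iterate over pairs on the hyperbola c1 * c2 = c_star and keep the pair whose
# simulated corr(U, y) * corr(U, t) is closest to the strongest observed product, while requiring
# c1 / c2 to stay within +/- 30% of correlation_y_observed / correlation_t_observed.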
c1_final = math.sqrt(c_star)
c2_final = math.sqrt(c_star)
# initialising min_distance_between_product_cor_metrics to be a value greater than 1
min_distance_between_product_cor_metrics = 1.5
i = 0.05
threshold = c_star / 0.05
while i <= threshold:
c2 = i
c1 = c_star / c2
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
product_cor_metric_simulated = correlation_y * correlation_t
if min_distance_between_product_cor_metrics >= abs(
product_cor_metric_simulated - product_cor_metric_observed
):
min_distance_between_product_cor_metrics = abs(
product_cor_metric_simulated - product_cor_metric_observed
)
additional_condition = correlation_y_observed / correlation_t_observed
if ((c1 / c2) <= (additional_condition + 0.3 * additional_condition)) and (
(c1 / c2) >= (additional_condition - 0.3 * additional_condition)
): # choose minimum positive value
c1_final = c1
c2_final = c2
i = i * 1.5
"""#closed form solution
print("c_star_max before closed form", c_star_max)
if max_correlation_with_t == -1000:
c2 = 0
c1 = c_star_max
else:
additional_condition = abs(max_correlation_with_y/max_correlation_with_t)
print("additional_condition", additional_condition)
c2 = math.sqrt(c_star_max/additional_condition)
c1 = c_star_max/c2"""
final_U = self.generate_confounder_from_residuals(c1_final, c2_final, d_y, d_t, X)
return final_U
def generate_confounder_from_residuals(self, c1, c2, d_y, d_t, X):
"""
This function takes the residuals from the treatment and outcome model and their coefficients and simulates the intermediate random variable U by taking
the row wise normal distribution corresponding to each residual value and then debiasing the intermediate variable to get the final variable.
:param c1: coefficient to the residual from the outcome model
:type float
:param c2: coefficient to the residual from the treatment model
:type float
:param d_y: residuals from the outcome model
:type list
:param d_t: residuals from the treatment model
:type list
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
U = []
for j in range(len(d_t)):
simulated_variable_mean = c1 * d_y[j] + c2 * d_t[j]
simulated_variable_stddev = 1
U.append(np.random.normal(simulated_variable_mean, simulated_variable_stddev, 1))
U = np.array(U)
model = sm.OLS(U, X)
results = model.fit()
U = U.reshape(
-1,
)
final_U = U - results.fittedvalues.values
final_U = pd.Series(final_U)
return final_U
| anusha0409 | 81841c697bd5e80ecf9e731432305f6186666f1f | bb446c333f2256074304b0dec9cb5628d284b542 | This is a good point. For all the partial-r2 based methods (linear and non-parametric), it has the same interpretation.
But the interpretation is different between direct_simulation and partial-r2 based simulations.
So to avoid confusion, I have created two additional parameters. The first two (effect_strength) are used for direct_simulation, while the next two (partial_r2_confounder) are used for the partial-r2 based methods.
This is still a hacky solution. The ideal solution will be achieved in the new API, where we can wrap the method-specific parameters within the call to the method class (and create separate method names and remove the simulation method parameter).
```
refute_estimate(..., method=AddSimulatedConfounder(effect_strength_on_treatment=a, effect_strength_on_outcome=b))
# if we want to use partial R2 methods
refute_estimate(..., method=NonParametricPartialR2Sensitivity(partial_r2_confounder_treatment=a, partial_r2_confounder_outcome=b))
```
| amit-sharma | 380 |
py-why/dowhy | 478 | Adding Non Linear Sensitivity Analysis | This PR implements the non-parametric sensitivity analysis from Chernozhukov et al. https://arxiv.org/abs/2112.13398
It implements two sensitivity analyzers:
1. For Partial Linear DGPs and estimators like LinearDML
2. For general non-parametric DGPs and estimators like KernelDML.
The notebook in this PR provides an introduction to how the sensitivity bounds are calculated for the partial linear case. For general nonparametric DGPs, we need to estimate a special function called the Reisz representer. For binary treatment, it is exactly the difference in outcome weighted by the propensity score. So we provide two options to learn the Reisz representer: 1) plugin_reisz, which uses the propensity score; and 2) a general estimator that uses a custom loss function. These two are in the file reisz.py.
Briefly, the sensitivity bounds depend on two parameters that denote the effect of the unobserved confounder on treatment and outcome. That's why we use the same API as for the `add_unobserved_common_cause` method and add this sensitivity analysis as a possible simulation method="non-parametric-partial-R2". The format of the plots is identical to those from the "linear-partial-r2" simulation method that is already implemented.
We provide two modes for the user; a short usage sketch follows the list below.
1) User specifies the effect strength parameters themselves, as a range of values.
2) User benchmarks the effect strength parameters as a multiple of the same parameters for the observed common causes.
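A minimal usage sketch of the two modes (hedged; it assumes a fitted DoWhy CausalModel named `model`, an `identified_estimand` and an `estimate` from an earlier workflow step, and a hypothetical observed common cause column "W0"):
```python
import numpy as np

# Mode 1: pass the confounder-strength (partial R2) values directly as a range
refute_direct = model.refute_estimate(
    identified_estimand,
    estimate,
    method_name="add_unobserved_common_cause",
    simulation_method="non-parametric-partial-R2",
    partial_r2_confounder_treatment=np.arange(0.01, 0.03, 0.01),
    partial_r2_confounder_outcome=np.arange(0.01, 0.03, 0.01),
)

# Mode 2: benchmark the confounder strength as a multiple of observed common causes
refute_benchmark = model.refute_estimate(
    identified_estimand,
    estimate,
    method_name="add_unobserved_common_cause",
    simulation_method="non-parametric-partial-R2",
    benchmark_common_causes=["W0"],
    effect_fraction_on_treatment=[1, 2],
    effect_fraction_on_outcome=[1, 2],
)
```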
Signed-off-by: anusha <anushaagarwal2000.com> | null | 2022-06-20 14:37:11+00:00 | 2022-09-16 03:57:26+00:00 | dowhy/causal_refuters/add_unobserved_common_cause.py | import copy
import logging
import math
import numpy as np
import pandas as pd
import scipy.stats
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from tqdm.auto import tqdm
from dowhy.causal_estimator import CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_refuter import CausalRefutation, CausalRefuter
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer
class AddUnobservedCommonCause(CausalRefuter):
"""Add an unobserved confounder for refutation.
Supports additional parameters that can be specified in the refute_estimate() method.
- 'confounders_effect_on_treatment': how the simulated confounder affects the value of treatment. This can be linear (for continuous treatment) or binary_flip (for binary treatment)
- 'confounders_effect_on_outcome': how the simulated confounder affects the value of outcome. This can be linear (for continuous outcome) or binary_flip (for binary outcome)
- 'effect_strength_on_treatment': parameter for the strength of the effect of simulated confounder on treatment. For linear effect, it is the regression coefficient. For binary_flip, it is the probability that simulated confounder's effect flips the value of treatment from 0 to 1 (or vice-versa).
- 'effect_strength_on_outcome': parameter for the strength of the effect of simulated confounder on outcome. For linear effect, it is the regression coefficient. For binary_flip, it is the probability that simulated confounder's effect flips the value of outcome from 0 to 1 (or vice-versa).
TODO: Needs an interpretation module
"""
def __init__(self, *args, **kwargs):
"""
Initialize the parameters required for the refuter.
If effect_strength_on_treatment or effect_strength_on_outcome is not
given, it is calculated automatically as a range between the
minimum and maximum effect strength of observed confounders on treatment
and outcome respectively.
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param effect_strength_on_treatment: float, numpy.ndarray: This refers to the strength of the confounder on treatment. For a linear effect, it behaves like the regression coefficient. For a binary flip, it is the probability with which it can invert the value of the treatment.
:param effect_strength_on_outcome: float, numpy.ndarray: This refers to the strength of the confounder on outcome. For a linear effect, it behaves like the regression coefficient. For a binary flip, it is the probability with which it can invert the value of the outcome.
:param effect_fraction_on_treatment: float: If effect_strength_on_treatment is not provided, this parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param effect_fraction_on_outcome: float: If effect_strength_on_outcome is not provided, this parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:param simulated_method_name: method type to add unobserved common cause. "linear-partial-R2" for linear sensitivity analysis
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1)
if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome in order to reduce the estimate by 100% i.e bring it down to 0.
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = False)
:param benchmark_common_causes: names of variables for bounding strength of confounders
:param significance_level: confidence interval for statistical inference (default = 0.05)
:param null_hypothesis_effect: assumed effect under the null hypothesis
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
To override the setting, set plot_estimate = False.
"""
super().__init__(*args, **kwargs)
self.effect_on_t = (
kwargs["confounders_effect_on_treatment"] if "confounders_effect_on_treatment" in kwargs else "binary_flip"
)
self.effect_on_y = (
kwargs["confounders_effect_on_outcome"] if "confounders_effect_on_outcome" in kwargs else "linear"
)
self.kappa_t = kwargs["effect_strength_on_treatment"] if "effect_strength_on_treatment" in kwargs else None
self.kappa_y = kwargs["effect_strength_on_outcome"] if "effect_strength_on_outcome" in kwargs else None
self.frac_strength_treatment = (
kwargs["effect_fraction_on_treatment"] if "effect_fraction_on_treatment" in kwargs else 1
)
self.frac_strength_outcome = (
kwargs["effect_fraction_on_outcome"] if "effect_fraction_on_outcome" in kwargs else 1
)
self.simulated_method_name = (
kwargs["simulated_method_name"] if "simulated_method_name" in kwargs else "linear_based"
)
self.plotmethod = kwargs["plotmethod"] if "plotmethod" in kwargs else "colormesh"
self.percent_change_estimate = kwargs["percent_change_estimate"] if "percent_change_estimate" in kwargs else 1.0
self.significance_level = kwargs["significance_level"] if "significance_level" in kwargs else 0.05
self.confounder_increases_estimate = (
kwargs["confounder_increases_estimate"] if "confounder_increases_estimate" in kwargs else False
)
self.benchmark_common_causes = (
kwargs["benchmark_common_causes"] if "benchmark_common_causes" in kwargs else None
)
self.null_hypothesis_effect = kwargs["null_hypothesis_effect"] if "null_hypothesis_effect" in kwargs else 0
self.plot_estimate = kwargs["plot_estimate"] if "plot_estimate" in kwargs else True
self.logger = logging.getLogger(__name__)
def infer_default_kappa_t(self, len_kappa_t=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_t."
+ " Provide a value for kappa_t"
)
t = self._data[self._treatment_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_t == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
tmodel = LogisticRegression().fit(observed_common_causes, t)
tpred = tmodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
tcap = tmodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(tcap - tpred)) / tpred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_t == "linear":
# Estimating the regression coefficient from standardized features to t
corrcoef_var_t = np.corrcoef(observed_common_causes, t, rowvar=False)[-1, :-1]
std_dev_t = np.std(t)[0]
max_coeff = max(corrcoef_var_t) * std_dev_t
min_coeff = min(corrcoef_var_t) * std_dev_t
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_treatment)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_t
self.logger.info("(Min, Max) kappa_t for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _compute_min_max_coeff(self, min_coeff, max_coeff, effect_strength_fraction):
max_coeff = effect_strength_fraction * max_coeff
min_coeff = effect_strength_fraction * min_coeff
return min_coeff, max_coeff
def infer_default_kappa_y(self, len_kappa_y=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_y."
+ " Provide a value for kappa_y"
)
y = self._data[self._outcome_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_y == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
ymodel = LogisticRegression().fit(observed_common_causes, y)
ypred = ymodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
ycap = ymodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(ycap - ypred)) / ypred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_y == "linear":
corrcoef_var_y = np.corrcoef(observed_common_causes, y, rowvar=False)[-1, :-1]
std_dev_y = np.std(y)[0]
max_coeff = max(corrcoef_var_y) * std_dev_y
min_coeff = min(corrcoef_var_y) * std_dev_y
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_outcome)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_y
self.logger.info("(Min, Max) kappa_y for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def refute_estimate(self, show_progress_bar=False):
"""
This function attempts to add an unobserved common cause to the outcome and the treatment. At present, we have implemented the behavior for one-dimensional continuous
and binary variables. This function can take either single-valued inputs or a range of inputs. The function then looks at the data type of the input and decides on the course of
action.
:return: CausalRefuter: An object that contains the estimated effect and a new effect and the name of the refutation used.
"""
if self.simulated_method_name == "linear-partial-R2":
if not (isinstance(self._estimate.estimator, LinearRegressionEstimator)):
raise NotImplementedError(
"Currently only LinearRegressionEstimator is supported for Sensitivity Analysis"
)
if len(self._estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
if self.frac_strength_outcome == 1:
self.frac_strength_outcome = self.frac_strength_treatment
analyzer = LinearSensitivityAnalyzer(
estimator=self._estimate.estimator,
data=self._data,
treatment_name=self._treatment_name,
percent_change_estimate=self.percent_change_estimate,
significance_level=self.significance_level,
benchmark_common_causes=self.benchmark_common_causes,
null_hypothesis_effect=self.null_hypothesis_effect,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
common_causes_order=self._estimate.estimator._observed_common_causes.columns,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.kappa_t is None:
self.kappa_t = self.infer_default_kappa_t()
if self.kappa_y is None:
self.kappa_y = self.infer_default_kappa_y()
if not isinstance(self.kappa_t, (list, np.ndarray)) and not isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with single value inputs
new_data = copy.deepcopy(self._data)
new_data = self.include_confounders_effect(new_data, self.kappa_t, self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(new_data, self._target_estimand, self._estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
refute.new_effect_array = np.array(new_effect.value)
refute.new_effect = new_effect.value
refute.add_refuter(self)
return refute
else: # Deal with multiple value inputs
if isinstance(self.kappa_t, (list, np.ndarray)) and isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with range inputs
# Get a 2D matrix of values
# x,y = np.meshgrid(self.kappa_t, self.kappa_y) # x,y are both MxN
results_matrix = np.random.rand(
len(self.kappa_t), len(self.kappa_y)
) # Matrix to hold all the results of NxM
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
for j in range(len(self.kappa_y)):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y[j])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value,
new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause",
)
results_matrix[i][j] = refute.new_effect # Populate the results
refute.new_effect_array = results_matrix
refute.new_effect = (np.min(results_matrix), np.max(results_matrix))
# Store the values into the refute object
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
oe = self._estimate.value
contour_levels = [oe / 4.0, oe / 2.0, (3.0 / 4) * oe, oe]
contour_levels.extend([0, np.min(results_matrix), np.max(results_matrix)])
if self.plotmethod == "contour":
cp = plt.contourf(self.kappa_y, self.kappa_t, results_matrix, levels=sorted(contour_levels))
# Adding a label on the contour line for the original estimate
fmt = {}
trueeffect_index = np.where(cp.levels == oe)[0][0]
fmt[cp.levels[trueeffect_index]] = "Estimated Effect"
# Label every other level using strings
plt.clabel(cp, [cp.levels[trueeffect_index]], inline=True, fmt=fmt)
plt.colorbar(cp)
elif self.plotmethod == "colormesh":
cp = plt.pcolormesh(self.kappa_y, self.kappa_t, results_matrix, shading="nearest")
plt.colorbar(cp, ticks=contour_levels)
ax.yaxis.set_ticks(self.kappa_t)
ax.xaxis.set_ticks(self.kappa_y)
plt.xticks(rotation=45)
ax.set_title("Effect of Unobserved Common Cause")
ax.set_ylabel("Value of Linear Constant on Treatment")
ax.set_xlabel("Value of Linear Constant on Outcome")
plt.show()
return refute
elif isinstance(self.kappa_t, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_t))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_t, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Treatment")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
elif isinstance(self.kappa_y, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_y))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_y)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t, self.kappa_y[i])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_y, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Outcome")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
def include_confounders_effect(self, new_data, kappa_t, kappa_y):
"""
This function deals with the change in the value of the data due to the effect of the unobserved confounder.
In the case of a binary flip, we flip only if the random number is greater than the threshold set.
In the case of a linear effect, we use the variable as the linear regression constant.
:param new_data: pandas.DataFrame: The data to be changed due to the effects of the unobserved confounder.
:param kappa_t: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:param kappa_y: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:return: pandas.DataFrame: The DataFrame that includes the effects of the unobserved confounder.
"""
num_rows = self._data.shape[0]
stdnorm = scipy.stats.norm()
w_random = stdnorm.rvs(num_rows)
if self.effect_on_t == "binary_flip":
alpha = 2 * kappa_t - 1 if kappa_t >= 0.5 else 1 - 2 * kappa_t
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_t >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._treatment_name] = (
1 - new_data.loc[rel_interval <= w_random, self._treatment_name]
)
for tname in self._treatment_name:
if pd.api.types.is_bool_dtype(self._data[tname]):
new_data = new_data.astype({tname: "bool"}, copy=False)
elif self.effect_on_t == "linear":
confounder_t_effect = kappa_t * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._treatment_name] = new_data[self._treatment_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_t_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
if self.effect_on_y == "binary_flip":
alpha = 2 * kappa_y - 1 if kappa_y >= 0.5 else 1 - 2 * kappa_y
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_y >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._outcome_name] = (
1 - new_data.loc[rel_interval <= w_random, self._outcome_name]
)
for yname in self._outcome_name:
if pd.api.types.is_bool_dtype(self._data[yname]):
new_data = new_data.astype({yname: "bool"}, copy=False)
elif self.effect_on_y == "linear":
confounder_y_effect = (-1) * kappa_y * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._outcome_name] = new_data[self._outcome_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_y_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
return new_data
def include_simulated_confounder(self, convergence_threshold=0.1, c_star_max=1000):
"""
This function simulates an unobserved confounder based on the data using the following steps:
1. It calculates the "residuals" from the treatment and outcome model
i.) The outcome model has outcome as the dependent variable and all the observed variables including treatment as independent variables
ii.) The treatment model has treatment as the dependent variable and all the observed variables as independent variables.
2. U is an intermediate random variable drawn from the normal distribution with the weighted average of residuals as mean and a unit variance
U ~ N(c1*d_y + c2*d_t, 1)
where
*d_y and d_t are residuals from the treatment and outcome model
*c1 and c2 are coefficients to the residuals
3. The final U, which is the simulated unobserved confounder is obtained by debiasing the intermediate variable U by residualising it with X
Choosing the coefficients c1 and c2:
The coefficients are chosen based on these basic assumptions:
1. There is a hyperbolic relationship satisfying c1*c2 = c_star
2. c_star is chosen from a range of possible values based on the correlation of the obtained simulated variable with outcome and treatment.
3. The product of correlations with treatment and outcome should be at a minimum distance to the maximum correlations with treatment and outcome in any of the observed confounders
4. The ratio of the weights should be such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
:param c_star_max: The maximum possible value for the hyperbolic curve on which the coefficients to the residuals lie. It defaults to 1000 in the code if not specified by the user.
:type int
:param convergence_threshold: The threshold to check the plateauing of the correlation while selecting a c_star. It defaults to 0.1 in the code if not specified by the user
:type float
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
# Obtaining the list of observed variables
required_variables = True
observed_variables = self.choose_variables(required_variables)
observed_variables_with_treatment_and_outcome = observed_variables + self._treatment_name + self._outcome_name
# Taking a subset of the dataframe that has only observed variables
self._data = self._data[observed_variables_with_treatment_and_outcome]
# Residuals from the outcome model obtained by fitting a linear model
y = self._data[self._outcome_name[0]]
observed_variables_with_treatment = observed_variables + self._treatment_name
X = self._data[observed_variables_with_treatment]
model = sm.OLS(y, X.astype("float"))
results = model.fit()
residuals_y = y - results.fittedvalues
d_y = list(pd.Series(residuals_y))
# Residuals from the treatment model obtained by fitting a linear model
t = self._data[self._treatment_name[0]].astype("int64")
X = self._data[observed_variables]
model = sm.OLS(t, X)
results = model.fit()
residuals_t = t - results.fittedvalues
d_t = list(pd.Series(residuals_t))
# Initialising product_cor_metric_observed with a really low value as finding maximum
product_cor_metric_observed = -10000000000
for i in observed_variables:
current_obs_confounder = self._data[i]
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_obs_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_obs_confounder.corr(treatment_values)
product_cor_metric_current = correlation_y * correlation_t
if product_cor_metric_current >= product_cor_metric_observed:
product_cor_metric_observed = product_cor_metric_current
correlation_t_observed = correlation_t
correlation_y_observed = correlation_y
# The user has an option to give the effect_strength_on_y and effect_strength_on_t, which can then be used instead of the maximum correlation with treatment and outcome in the observed variables, as they specify the desired effect.
if self.kappa_t is not None:
correlation_t_observed = self.kappa_t
if self.kappa_y is not None:
correlation_y_observed = self.kappa_y
# Choosing a c_star based on the data.
# The correlations stop increasing upon increasing c_star after a certain value, that is, they plateau, and we choose c_star to be the value at which they plateau.
correlation_y_list = []
correlation_t_list = []
product_cor_metric_simulated_list = []
x_list = []
step = int(c_star_max / 10)
for i in range(0, int(c_star_max), step):
c1 = math.sqrt(i)
c2 = c1
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
correlation_y_list.append(correlation_y)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
correlation_t_list.append(correlation_t)
product_cor_metric_simulated = correlation_y * correlation_t
product_cor_metric_simulated_list.append(product_cor_metric_simulated)
x_list.append(i)
index = 1
while index < len(correlation_y_list):
if (correlation_y_list[index] - correlation_y_list[index - 1]) <= convergence_threshold:
c_star = x_list[index]
break
index = index + 1
# Choosing c1 and c2 based on the hyperbolic relationship once c_star is chosen by going over various combinations of c1 and c2 values and choosing the combination which
# maintains the minimum distance between the product of correlations of the simulated variable and the product of maximum correlations of one of the observed variables
# and additionally checks that the ratio of the weights maintains the ratio of the maximum possible observed coefficients within some confidence interval
# c1_final and c2_final are initialised to the values on the hyperbolic curve such that c1_final = c2_final and c1_final*c2_final = c_star
c1_final = math.sqrt(c_star)
c2_final = math.sqrt(c_star)
# initialising min_distance_between_product_cor_metrics to be a value greater than 1
min_distance_between_product_cor_metrics = 1.5
i = 0.05
threshold = c_star / 0.05
while i <= threshold:
c2 = i
c1 = c_star / c2
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
product_cor_metric_simulated = correlation_y * correlation_t
if min_distance_between_product_cor_metrics >= abs(
product_cor_metric_simulated - product_cor_metric_observed
):
min_distance_between_product_cor_metrics = abs(
product_cor_metric_simulated - product_cor_metric_observed
)
additional_condition = correlation_y_observed / correlation_t_observed
if ((c1 / c2) <= (additional_condition + 0.3 * additional_condition)) and (
(c1 / c2) >= (additional_condition - 0.3 * additional_condition)
): # choose minimum positive value
c1_final = c1
c2_final = c2
i = i * 1.5
"""#closed form solution
print("c_star_max before closed form", c_star_max)
if max_correlation_with_t == -1000:
c2 = 0
c1 = c_star_max
else:
additional_condition = abs(max_correlation_with_y/max_correlation_with_t)
print("additional_condition", additional_condition)
c2 = math.sqrt(c_star_max/additional_condition)
c1 = c_star_max/c2"""
final_U = self.generate_confounder_from_residuals(c1_final, c2_final, d_y, d_t, X)
return final_U
def generate_confounder_from_residuals(self, c1, c2, d_y, d_t, X):
"""
This function takes the residuals from the treatment and outcome model and their coefficients and simulates the intermediate random variable U by taking
the row wise normal distribution corresponding to each residual value and then debiasing the intermediate variable to get the final variable.
:param c1: coefficient to the residual from the outcome model
:type float
:param c2: coefficient to the residual from the treatment model
:type float
:param d_y: residuals from the outcome model
:type list
:param d_t: residuals from the treatment model
:type list
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
U = []
for j in range(len(d_t)):
simulated_variable_mean = c1 * d_y[j] + c2 * d_t[j]
simulated_variable_stddev = 1
U.append(np.random.normal(simulated_variable_mean, simulated_variable_stddev, 1))
U = np.array(U)
model = sm.OLS(U, X)
results = model.fit()
U = U.reshape(
-1,
)
final_U = U - results.fittedvalues.values
final_U = pd.Series(final_U)
return final_U
| import copy
import logging
import math
import numpy as np
import pandas as pd
import scipy.stats
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from tqdm.auto import tqdm
import dowhy.causal_estimators.econml
from dowhy.causal_estimator import CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_refuter import CausalRefutation, CausalRefuter
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer
from dowhy.causal_refuters.non_parametric_sensitivity_analyzer import NonParametricSensitivityAnalyzer
from dowhy.causal_refuters.partial_linear_sensitivity_analyzer import PartialLinearSensitivityAnalyzer
class AddUnobservedCommonCause(CausalRefuter):
"""Add an unobserved confounder for refutation.
AddUnobservedCommonCause class supports three methods:
1) Simulation of an unobserved confounder
2) Linear partial R2 : Sensitivity Analysis for linear models.
3) Non-Parametric partial R2 based : Sensitivity Analysis for non-parametric models.
Supports additional parameters that can be specified in the refute_estimate() method.
"""
def __init__(self, *args, **kwargs):
"""
Initialize the parameters required for the refuter.
For direct_simulation, if effect_strength_on_treatment or effect_strength_on_outcome is not
given, it is calculated automatically as a range between the
minimum and maximum effect strength of observed confounders on treatment
and outcome respectively.
:param simulation_method: The method to use for simulating effect of unobserved confounder. Possible values are ["direct-simulation", "linear-partial-R2", "non-parametric-partial-R2"].
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param effect_strength_on_treatment: float, numpy.ndarray: [Used when simulation_method="direct-simulation"] Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param effect_strength_on_outcome: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param partial_r2_confounder_treatment: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param partial_r2_confounder_outcome: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome in order to reduce the estimate by 100% i.e bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = False). (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param significance_level: confidence interval for statistical inference (default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param num_splits: number of splits for cross validation. (default = 5). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_data : shuffle data or not before splitting into folds (default = False). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_random_seed: seed for randomly shuffling data. (relevant only for non-parametric-partial-R2 simulation method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
"""
super().__init__(*args, **kwargs)
self.simulation_method = kwargs["simulation_method"] if "simulation_method" in kwargs else "direct-simulation"
self.effect_on_t = (
kwargs["confounders_effect_on_treatment"] if "confounders_effect_on_treatment" in kwargs else "binary_flip"
)
self.effect_on_y = (
kwargs["confounders_effect_on_outcome"] if "confounders_effect_on_outcome" in kwargs else "linear"
)
if self.simulation_method == "direct-simulation":
self.kappa_t = kwargs["effect_strength_on_treatment"] if "effect_strength_on_treatment" in kwargs else None
self.kappa_y = kwargs["effect_strength_on_outcome"] if "effect_strength_on_outcome" in kwargs else None
elif self.simulation_method in ["linear-partial-R2", "non-parametric-partial-R2"]:
self.kappa_t = (
kwargs["partial_r2_confounder_treatment"] if "partial_r2_confounder_treatment" in kwargs else None
)
self.kappa_y = (
kwargs["partial_r2_confounder_outcome"] if "partial_r2_confounder_outcome" in kwargs else None
)
else:
raise ValueError(
"simulation method is not supported. Try direct-simulation, linear-partial-R2 or non-parametric-partial-R2"
)
self.frac_strength_treatment = (
kwargs["effect_fraction_on_treatment"] if "effect_fraction_on_treatment" in kwargs else 1
)
self.frac_strength_outcome = (
kwargs["effect_fraction_on_outcome"] if "effect_fraction_on_outcome" in kwargs else 1
)
self.plotmethod = kwargs["plotmethod"] if "plotmethod" in kwargs else "colormesh"
self.percent_change_estimate = kwargs["percent_change_estimate"] if "percent_change_estimate" in kwargs else 1.0
self.significance_level = kwargs["significance_level"] if "significance_level" in kwargs else 0.05
self.confounder_increases_estimate = (
kwargs["confounder_increases_estimate"] if "confounder_increases_estimate" in kwargs else False
)
self.benchmark_common_causes = (
kwargs["benchmark_common_causes"] if "benchmark_common_causes" in kwargs else None
)
self.null_hypothesis_effect = kwargs["null_hypothesis_effect"] if "null_hypothesis_effect" in kwargs else 0
self.plot_estimate = kwargs["plot_estimate"] if "plot_estimate" in kwargs else True
self.num_splits = kwargs["num_splits"] if "num_splits" in kwargs else 5
self.shuffle_data = kwargs["shuffle_data"] if "shuffle_data" in kwargs else False
self.shuffle_random_seed = kwargs["shuffle_random_seed"] if "shuffle_random_seed" in kwargs else None
self.alpha_s_estimator_param_list = (
kwargs["alpha_s_estimator_param_list"] if "alpha_s_estimator_param_list" in kwargs else None
)
self.alpha_s_estimator_list = kwargs["alpha_s_estimator_list"] if "alpha_s_estimator_list" in kwargs else None
self.g_s_estimator_list = kwargs["g_s_estimator_list"] if "g_s_estimator_list" in kwargs else None
self.g_s_estimator_param_list = (
kwargs["g_s_estimator_param_list"] if "g_s_estimator_param_list" in kwargs else None
)
self.plugin_reisz = kwargs["plugin_reisz"] if "plugin_reisz" in kwargs else False
self.logger = logging.getLogger(__name__)
def infer_default_kappa_t(self, len_kappa_t=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_t."
+ " Provide a value for kappa_t"
)
t = self._data[self._treatment_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_t == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
tmodel = LogisticRegression().fit(observed_common_causes, t)
tpred = tmodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
tcap = tmodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(tcap - tpred)) / tpred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_t == "linear":
# Estimating the regression coefficient from standardized features to t
corrcoef_var_t = np.corrcoef(observed_common_causes, t, rowvar=False)[-1, :-1]
std_dev_t = np.std(t)[0]
max_coeff = max(corrcoef_var_t) * std_dev_t
min_coeff = min(corrcoef_var_t) * std_dev_t
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_treatment)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_t
self.logger.info("(Min, Max) kappa_t for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _compute_min_max_coeff(self, min_coeff, max_coeff, effect_strength_fraction):
max_coeff = effect_strength_fraction * max_coeff
min_coeff = effect_strength_fraction * min_coeff
return min_coeff, max_coeff
def infer_default_kappa_y(self, len_kappa_y=10):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = self._data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_y."
+ " Provide a value for kappa_y"
)
y = self._data[self._outcome_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if self.effect_on_y == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
ymodel = LogisticRegression().fit(observed_common_causes, y)
ypred = ymodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
ycap = ymodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(ycap - ypred)) / ypred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif self.effect_on_y == "linear":
corrcoef_var_y = np.corrcoef(observed_common_causes, y, rowvar=False)[-1, :-1]
std_dev_y = np.std(y)[0]
max_coeff = max(corrcoef_var_y) * std_dev_y
min_coeff = min(corrcoef_var_y) * std_dev_y
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
min_coeff, max_coeff = self._compute_min_max_coeff(min_coeff, max_coeff, self.frac_strength_outcome)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_y
self.logger.info("(Min, Max) kappa_y for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def refute_estimate(self, show_progress_bar=False):
"""
This function attempts to add an unobserved common cause to the outcome and the treatment. At present, we have implemented the behavior for one-dimensional continuous
and binary variables. This function can take either single-valued inputs or a range of inputs. The function then looks at the data type of the input and decides on the course of
action.
:return: CausalRefuter: An object that contains the estimated effect and a new effect and the name of the refutation used.
"""
if self.simulation_method == "linear-partial-R2":
if not (isinstance(self._estimate.estimator, LinearRegressionEstimator)):
raise NotImplementedError(
"Currently only LinearRegressionEstimator is supported for Sensitivity Analysis"
)
if len(self._estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
if self.frac_strength_outcome == 1:
self.frac_strength_outcome = self.frac_strength_treatment
analyzer = LinearSensitivityAnalyzer(
estimator=self._estimate.estimator,
data=self._data,
treatment_name=self._treatment_name,
percent_change_estimate=self.percent_change_estimate,
significance_level=self.significance_level,
benchmark_common_causes=self.benchmark_common_causes,
null_hypothesis_effect=self.null_hypothesis_effect,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
common_causes_order=self._estimate.estimator._observed_common_causes.columns,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.simulation_method == "non-parametric-partial-R2":
# If the estimator used is LinearDML, partially linear sensitivity analysis will be automatically chosen
if isinstance(self._estimate.estimator, dowhy.causal_estimators.econml.Econml):
if self._estimate.estimator._econml_methodname == "econml.dml.LinearDML":
analyzer = PartialLinearSensitivityAnalyzer(
estimator=self._estimate._estimator_object,
observed_common_causes=self._estimate.estimator._observed_common_causes,
treatment=self._estimate.estimator._treatment,
outcome=self._estimate.estimator._outcome,
alpha_s_estimator_param_list=self.alpha_s_estimator_param_list,
g_s_estimator_list=self.g_s_estimator_list,
g_s_estimator_param_list=self.g_s_estimator_param_list,
effect_strength_treatment=self.kappa_t,
effect_strength_outcome=self.kappa_y,
benchmark_common_causes=self.benchmark_common_causes,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
analyzer = NonParametricSensitivityAnalyzer(
estimator=self._estimate.estimator,
observed_common_causes=self._estimate.estimator._observed_common_causes,
treatment=self._estimate.estimator._treatment,
outcome=self._estimate.estimator._outcome,
alpha_s_estimator_list=self.alpha_s_estimator_list,
alpha_s_estimator_param_list=self.alpha_s_estimator_param_list,
g_s_estimator_list=self.g_s_estimator_list,
g_s_estimator_param_list=self.g_s_estimator_param_list,
effect_strength_treatment=self.kappa_t,
effect_strength_outcome=self.kappa_y,
benchmark_common_causes=self.benchmark_common_causes,
frac_strength_treatment=self.frac_strength_treatment,
frac_strength_outcome=self.frac_strength_outcome,
theta_s=self._estimate.value,
plugin_reisz=self.plugin_reisz,
)
analyzer.check_sensitivity(plot=self.plot_estimate)
return analyzer
if self.kappa_t is None:
self.kappa_t = self.infer_default_kappa_t()
if self.kappa_y is None:
self.kappa_y = self.infer_default_kappa_y()
if not isinstance(self.kappa_t, (list, np.ndarray)) and not isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with single value inputs
new_data = copy.deepcopy(self._data)
new_data = self.include_confounders_effect(new_data, self.kappa_t, self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(new_data, self._target_estimand, self._estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
refute.new_effect_array = np.array(new_effect.value)
refute.new_effect = new_effect.value
refute.add_refuter(self)
return refute
else: # Deal with multiple value inputs
if isinstance(self.kappa_t, (list, np.ndarray)) and isinstance(
self.kappa_y, (list, np.ndarray)
): # Deal with range inputs
# Get a 2D matrix of values
# x,y = np.meshgrid(self.kappa_t, self.kappa_y) # x,y are both MxN
results_matrix = np.random.rand(
len(self.kappa_t), len(self.kappa_y)
) # Matrix to hold all the results of NxM
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
for j in range(len(self.kappa_y)):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y[j])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value,
new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause",
)
results_matrix[i][j] = refute.new_effect # Populate the results
refute.new_effect_array = results_matrix
refute.new_effect = (np.min(results_matrix), np.max(results_matrix))
# Store the values into the refute object
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
oe = self._estimate.value
contour_levels = [oe / 4.0, oe / 2.0, (3.0 / 4) * oe, oe]
contour_levels.extend([0, np.min(results_matrix), np.max(results_matrix)])
if self.plotmethod == "contour":
cp = plt.contourf(self.kappa_y, self.kappa_t, results_matrix, levels=sorted(contour_levels))
# Adding a label on the contour line for the original estimate
fmt = {}
trueeffect_index = np.where(cp.levels == oe)[0][0]
fmt[cp.levels[trueeffect_index]] = "Estimated Effect"
# Label every other level using strings
plt.clabel(cp, [cp.levels[trueeffect_index]], inline=True, fmt=fmt)
plt.colorbar(cp)
elif self.plotmethod == "colormesh":
cp = plt.pcolormesh(self.kappa_y, self.kappa_t, results_matrix, shading="nearest")
plt.colorbar(cp, ticks=contour_levels)
ax.yaxis.set_ticks(self.kappa_t)
ax.xaxis.set_ticks(self.kappa_y)
plt.xticks(rotation=45)
ax.set_title("Effect of Unobserved Common Cause")
ax.set_ylabel("Value of Linear Constant on Treatment")
ax.set_xlabel("Value of Linear Constant on Outcome")
plt.show()
return refute
elif isinstance(self.kappa_t, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_t))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t[i], self.kappa_y)
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_t, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Treatment")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
elif isinstance(self.kappa_y, (list, np.ndarray)):
outcomes = np.random.rand(len(self.kappa_y))
orig_data = copy.deepcopy(self._data)
for i in tqdm(
range(0, len(self.kappa_y)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = self.include_confounders_effect(orig_data, self.kappa_t, self.kappa_y[i])
new_estimator = CausalEstimator.get_estimator_object(
new_data, self._target_estimand, self._estimate
)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
self._estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
self.logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
refute.add_refuter(self)
if self.plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(self.kappa_y, outcomes)
plt.axhline(self._estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Outcome")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
def include_confounders_effect(self, new_data, kappa_t, kappa_y):
"""
This function deals with the change in the value of the data due to the effect of the unobserved confounder.
In the case of a binary flip, we flip only if the random number is greater than the threshold set.
In the case of a linear effect, we use the variable as the linear regression constant.
:param new_data: pandas.DataFrame: The data to be changed due to the effects of the unobserved confounder.
:param kappa_t: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:param kappa_y: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:return: pandas.DataFrame: The DataFrame that includes the effects of the unobserved confounder.
"""
num_rows = self._data.shape[0]
stdnorm = scipy.stats.norm()
w_random = stdnorm.rvs(num_rows)
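        # In "binary_flip" mode, kappa acts as a flip probability: the standard-normal
        # interval endpoints below are chosen so that the w_random comparison flips, in
        # expectation, a fraction kappa_t (or kappa_y) of the rows.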
if self.effect_on_t == "binary_flip":
alpha = 2 * kappa_t - 1 if kappa_t >= 0.5 else 1 - 2 * kappa_t
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_t >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._treatment_name] = (
1 - new_data.loc[rel_interval <= w_random, self._treatment_name]
)
for tname in self._treatment_name:
if pd.api.types.is_bool_dtype(self._data[tname]):
new_data = new_data.astype({tname: "bool"}, copy=False)
elif self.effect_on_t == "linear":
confounder_t_effect = kappa_t * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._treatment_name] = new_data[self._treatment_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_t_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_t + "' method not supported for confounders' effect on treatment"
)
if self.effect_on_y == "binary_flip":
alpha = 2 * kappa_y - 1 if kappa_y >= 0.5 else 1 - 2 * kappa_y
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_y >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, self._outcome_name] = (
1 - new_data.loc[rel_interval <= w_random, self._outcome_name]
)
for yname in self._outcome_name:
if pd.api.types.is_bool_dtype(self._data[yname]):
new_data = new_data.astype({yname: "bool"}, copy=False)
elif self.effect_on_y == "linear":
confounder_y_effect = (-1) * kappa_y * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[self._outcome_name] = new_data[self._outcome_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_y_effect
)
else:
raise NotImplementedError(
"'" + self.effect_on_y + "' method not supported for confounders' effect on outcome"
)
return new_data
def include_simulated_confounder(self, convergence_threshold=0.1, c_star_max=1000):
"""
This function simulates an unobserved confounder based on the data using the following steps:
1. It calculates the "residuals" from the treatment and outcome model
i.) The outcome model has outcome as the dependent variable and all the observed variables including treatment as independent variables
ii.) The treatment model has treatment as the dependent variable and all the observed variables as independent variables.
2. U is an intermediate random variable drawn from the normal distribution with the weighted average of residuals as mean and a unit variance
U ~ N(c1*d_y + c2*d_t, 1)
where
*d_y and d_t are residuals from the treatment and outcome model
*c1 and c2 are coefficients to the residuals
3. The final U, which is the simulated unobserved confounder is obtained by debiasing the intermediate variable U by residualising it with X
Choosing the coefficients c1 and c2:
The coefficients are chosen based on these basic assumptions:
1. There is a hyperbolic relationship satisfying c1*c2 = c_star
2. c_star is chosen from a range of possible values based on the correlation of the obtained simulated variable with outcome and treatment.
3. The product of correlations with treatment and outcome should be at a minimum distance to the maximum correlations with treatment and outcome in any of the observed confounders
4. The ratio of the weights should be such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
:param c_star_max: The maximum possible value for the hyperbolic curve on which the coefficients to the residuals lie. It defaults to 1000 in the code if not specified by the user.
:type int
:param convergence_threshold: The threshold to check the plateauing of the correlation while selecting a c_star. It defaults to 0.1 in the code if not specified by the user
:type float
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
# Obtaining the list of observed variables
required_variables = True
observed_variables = self.choose_variables(required_variables)
observed_variables_with_treatment_and_outcome = observed_variables + self._treatment_name + self._outcome_name
# Taking a subset of the dataframe that has only observed variables
self._data = self._data[observed_variables_with_treatment_and_outcome]
# Residuals from the outcome model obtained by fitting a linear model
y = self._data[self._outcome_name[0]]
observed_variables_with_treatment = observed_variables + self._treatment_name
X = self._data[observed_variables_with_treatment]
model = sm.OLS(y, X.astype("float"))
results = model.fit()
residuals_y = y - results.fittedvalues
d_y = list(pd.Series(residuals_y))
# Residuals from the treatment model obtained by fitting a linear model
t = self._data[self._treatment_name[0]].astype("int64")
X = self._data[observed_variables]
model = sm.OLS(t, X)
results = model.fit()
residuals_t = t - results.fittedvalues
d_t = list(pd.Series(residuals_t))
# Initialising product_cor_metric_observed with a really low value as finding maximum
product_cor_metric_observed = -10000000000
for i in observed_variables:
current_obs_confounder = self._data[i]
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_obs_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_obs_confounder.corr(treatment_values)
product_cor_metric_current = correlation_y * correlation_t
if product_cor_metric_current >= product_cor_metric_observed:
product_cor_metric_observed = product_cor_metric_current
correlation_t_observed = correlation_t
correlation_y_observed = correlation_y
        # The user has an option to give the effect_strength_on_y and effect_strength_on_t, which can then be used instead of the maximum correlation with treatment and outcome in the observed variables, as it specifies the desired effect.
if self.kappa_t is not None:
correlation_t_observed = self.kappa_t
if self.kappa_y is not None:
correlation_y_observed = self.kappa_y
# Choosing a c_star based on the data.
        # The correlations stop increasing once c_star exceeds a certain value, i.e. they plateau, and we choose c_star to be the value at which the plateau begins.
correlation_y_list = []
correlation_t_list = []
product_cor_metric_simulated_list = []
x_list = []
step = int(c_star_max / 10)
for i in range(0, int(c_star_max), step):
c1 = math.sqrt(i)
c2 = c1
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
correlation_y_list.append(correlation_y)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
correlation_t_list.append(correlation_t)
product_cor_metric_simulated = correlation_y * correlation_t
product_cor_metric_simulated_list.append(product_cor_metric_simulated)
x_list.append(i)
index = 1
while index < len(correlation_y_list):
if (correlation_y_list[index] - correlation_y_list[index - 1]) <= convergence_threshold:
c_star = x_list[index]
break
index = index + 1
        # Once c_star is chosen, c1 and c2 are picked on the hyperbola c1 * c2 = c_star by going over various combinations of c1 and c2 and keeping the combination
        # that minimises the distance between the correlation product of the simulated variable and the maximum correlation product among the observed variables,
        # while additionally checking that the ratio of the weights stays within some confidence interval of the ratio of the maximum possible observed coefficients
# c1_final and c2_final are initialised to the values on the hyperbolic curve such that c1_final = c2_final and c1_final*c2_final = c_star
c1_final = math.sqrt(c_star)
c2_final = math.sqrt(c_star)
# initialising min_distance_between_product_cor_metrics to be a value greater than 1
min_distance_between_product_cor_metrics = 1.5
i = 0.05
threshold = c_star / 0.05
while i <= threshold:
c2 = i
c1 = c_star / c2
final_U = self.generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = self._data[self._outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
product_cor_metric_simulated = correlation_y * correlation_t
if min_distance_between_product_cor_metrics >= abs(
product_cor_metric_simulated - product_cor_metric_observed
):
min_distance_between_product_cor_metrics = abs(
product_cor_metric_simulated - product_cor_metric_observed
)
additional_condition = correlation_y_observed / correlation_t_observed
if ((c1 / c2) <= (additional_condition + 0.3 * additional_condition)) and (
(c1 / c2) >= (additional_condition - 0.3 * additional_condition)
): # choose minimum positive value
c1_final = c1
c2_final = c2
i = i * 1.5
"""#closed form solution
print("c_star_max before closed form", c_star_max)
if max_correlation_with_t == -1000:
c2 = 0
c1 = c_star_max
else:
additional_condition = abs(max_correlation_with_y/max_correlation_with_t)
print("additional_condition", additional_condition)
c2 = math.sqrt(c_star_max/additional_condition)
c1 = c_star_max/c2"""
final_U = self.generate_confounder_from_residuals(c1_final, c2_final, d_y, d_t, X)
return final_U
def generate_confounder_from_residuals(self, c1, c2, d_y, d_t, X):
"""
This function takes the residuals from the treatment and outcome model and their coefficients and simulates the intermediate random variable U by taking
the row wise normal distribution corresponding to each residual value and then debiasing the intermediate variable to get the final variable.
:param c1: coefficient to the residual from the outcome model
:type float
:param c2: coefficient to the residual from the treatment model
:type float
:param d_y: residuals from the outcome model
:type list
:param d_t: residuals from the treatment model
:type list
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
U = []
for j in range(len(d_t)):
simulated_variable_mean = c1 * d_y[j] + c2 * d_t[j]
simulated_variable_stddev = 1
U.append(np.random.normal(simulated_variable_mean, simulated_variable_stddev, 1))
U = np.array(U)
model = sm.OLS(U, X)
results = model.fit()
U = U.reshape(
-1,
)
final_U = U - results.fittedvalues.values
        final_U = pd.Series(final_U)
return final_U
| anusha0409 | 81841c697bd5e80ecf9e731432305f6186666f1f | bb446c333f2256074304b0dec9cb5628d284b542 | Thanks for making this change. It is clearer, and will hopefully also make it easier to separate further in the future. | emrekiciman | 381 |
py-why/dowhy | 478 | Adding Non Linear Sensitivity Analysis | This PR implements the non-parametric sensitivity analysis from Chernozhukov et al. https://arxiv.org/abs/2112.13398
It implements two sensitivity analyzers:
1. For Partial Linear DGPs and estimators like LinearDML
2. For general non-parametric DGPs and estimators like KernelDML.
The notebook in this PR provides an introduction on how the sensitivity bounds are calculated for the partial linear case. For the general nonparametric DGPs, we need to estimate a special function called the Reisz representer. For binary treatment, it is exactly the difference in outcome weighted by propensity score. So we provide two options to learn the Reisz representer, 1) plugin_reisz that uses the propensity score; and 2) general estimator that uses a custom loss function. These two are in the file reisz.py.
Briefly, the sensitivity bounds depend on two parameters that denote the effect of the unobserved confounder on treatment and outcome. That's why we use the same API as for the `add_unobserved_common_cause` method and add this sensitivity analysis as a possible simulation method="non-parametric-partial-R2". The format of the plots is identical to those from the "linear-partial-r2" simulation method that is already implemented.
We provide two modes for the user (a usage sketch follows below).
1) User specifies the effect strength parameters themselves, as a range of values.
2) User benchmarks the effect strength parameters as a multiple of the same parameters for the observed common causes.
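A minimal usage sketch (illustrative only: the `model`/`estimate` variables, the benchmark column name `W0`, and the exact keyword arguments are assumptions for this example, not taken verbatim from this PR):

```python
# Assumes an already-fitted dowhy CausalModel `model`, its `identified_estimand`,
# and a DML-style `estimate` returned by model.estimate_effect(...).
refutation = model.refute_estimate(
    identified_estimand,
    estimate,
    method_name="add_unobserved_common_cause",
    simulation_method="non-parametric-partial-R2",
    # Mode 2: benchmark the simulated confounder's strength against an observed common cause
    benchmark_common_causes=["W0"],
    effect_fraction_on_treatment=[1, 2, 3],
)
print(refutation)
```

Mode 1 would instead pass explicit effect-strength (partial R^2) ranges for the simulated confounder rather than benchmarking against observed covariates.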
Signed-off-by: anusha <anushaagarwal2000.com> | null | 2022-06-20 14:37:11+00:00 | 2022-09-16 03:57:26+00:00 | dowhy/causal_refuters/linear_sensitivity_analyzer.py | import logging
import sys
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import t
from dowhy.utils.api import parse_state
class LinearSensitivityAnalyzer:
"""
Class to perform sensitivity analysis
See: https://carloscinelli.com/files/Cinelli%20and%20Hazlett%20(2020)%20-%20Making%20Sense%20of%20Sensitivity.pdf
:param estimator: linear estimator of the causal model
:param data: Pandas dataframe
:param treatment_name: name of treatment
    :param percent_change_estimate: the proportional reduction of the treatment estimate that would alter the conclusions (default = 1)
        if percent_change_estimate = 1, the robustness value describes the strength of association that confounders must have with treatment and outcome in order to reduce the estimate by 100%, i.e. bring it down to 0.
:param null_hypothesis_effect: assumed effect under the null hypothesis
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = True)
:param benchmark_common_causes: names of variables for bounding strength of confounders
    :param significance_level: significance level for statistical inference (default = 0.05)
:param frac_strength_treatment: strength of association between unobserved confounder and treatment compared to benchmark covariate
:param frac_strength_outcome: strength of association between unobserved confounder and outcome compared to benchmark covariate
:param common_causes_order: The order of column names in OLS regression data
"""
def __init__(
self,
estimator=None,
data=None,
treatment_name=None,
percent_change_estimate=1.0,
significance_level=0.05,
confounder_increases_estimate=True,
benchmark_common_causes=None,
null_hypothesis_effect=0,
frac_strength_treatment=None,
frac_strength_outcome=None,
common_causes_order=None,
):
self.data = data
self.treatment_name = []
        # original_treatment_name: stores original variable names for labelling
self.original_treatment_name = treatment_name
for t in range(len(treatment_name)):
self.treatment_name.append("x" + str(t + 1))
self.percent_change_estimate = percent_change_estimate
self.significance_level = significance_level
self.confounder_increases_estimate = confounder_increases_estimate
self.estimator = estimator
self.estimator_model = estimator.model
self.null_hypothesis_effect = null_hypothesis_effect
# common_causes_map : maps the original variable names to variable names in OLS regression
self.common_causes_map = {}
for i in range(len(common_causes_order)):
self.common_causes_map[common_causes_order[i]] = "x" + str(len(self.treatment_name) + i + 1)
# benchmark_common_causes: stores variable names in terms of regression model variables
benchmark_common_causes = parse_state(benchmark_common_causes)
self.benchmark_common_causes = []
# original_benchmark_covariates: stores original variable names for labelling
self.original_benchmark_covariates = benchmark_common_causes
for i in range(len(benchmark_common_causes)):
self.benchmark_common_causes.append(self.common_causes_map[benchmark_common_causes[i]])
if type(frac_strength_treatment) in [int, list, float]:
self.frac_strength_treatment = np.array(frac_strength_treatment)
if type(frac_strength_outcome) in [int, list, float]:
self.frac_strength_outcome = np.array(frac_strength_outcome)
# estimate: estimate of regression
self.estimate = None
# degree_of_freedom: degree of freedom of error in regression
self.degree_of_freedom = None
# standard_error: standard error in regression
self.standard_error = None
# t_stats: Treatment coefficient t-value - measures how many standard errors the estimate is away from zero.
self.t_stats = None
# partial_f2: value to determine if a regression model and a nested version of it have a statistically significant difference between them
self.partial_f2 = None
# r2tu_w: partial R^2 of unobserved confounder "u" with treatment "t", after conditioning on observed covariates "w"
self.r2tu_w = None
# r2yu_tw: partial R^2 of unobserved confounder "u" with outcome "y", after conditioning on observed covariates "w" and treatment "t"
self.r2yu_tw = None
# r2twj_w: partial R^2 of observed covariate wj with treatment "t", after conditioning on observed covariates "w" excluding wj
self.r2twj_w = None
# r2ywj_tw: partial R^2 of observed covariate wj with outcome "y", after conditioning on observed covariates "w" (excluding wj) and treatment "t"
self.r2ywj_tw = None
# benchmarking_results: dataframe containing information about bounds and bias adjusted terms
self.benchmarking_results = None
# stats: dictionary containing information like robustness value, partial R^2, estimate, standard error , degree of freedom, partial f^2, t-statistic
self.stats = None
self.logger = logging.getLogger(__name__)
def treatment_regression(self):
"""
Function to perform regression with treatment as outcome
:returns: new OLS regression model
"""
features = self.estimator._observed_common_causes.copy()
treatment_df = self.estimator._treatment.copy()
features = sm.tools.add_constant(features)
features.rename(columns=self.common_causes_map, inplace=True)
model = sm.OLS(treatment_df, features)
estimator_model = model.fit()
return estimator_model
def partial_r2_func(self, estimator_model=None, treatment=None):
"""
Computes the partial R^2 of regression model
:param estimator_model: Linear regression model
:param treatment: treatment name
:returns: partial R^2 value
"""
estimate = estimator_model.params[treatment]
degree_of_freedom = int(estimator_model.df_resid)
if np.isscalar(estimate): # for single covariate
t_stats = estimator_model.tvalues[treatment]
return t_stats**2 / (t_stats**2 + degree_of_freedom)
else: # compute for a group of covariates
covariance_matrix = estimator_model.cov_params().loc[treatment, :][treatment]
n = len(estimate) # number of parameters in model
f_stat = (
np.matmul(np.matmul(estimate.values.T, np.linalg.inv(covariance_matrix.values)), estimate.values) / n
)
return f_stat * n / (f_stat * n + degree_of_freedom)
def robustness_value_func(self, alpha=1.0):
"""
Function to calculate the robustness value.
It is the minimum strength of association that confounders must have with treatment and outcome to change conclusions.
Robustness value describes how strong the association must be in order to reduce the estimated effect by (100 * percent_change_estimate)%.
Robustness value close to 1 means the treatment effect can handle strong confounders explaining almost all residual variation of the treatment and the outcome.
Robustness value close to 0 means that even very weak confounders can also change the results.
        :param alpha: significance level used for the critical t-value threshold (default = 1)
:returns: robustness value
"""
partial_cohen_f = abs(
self.t_stats / np.sqrt(self.degree_of_freedom)
) # partial f of treatment t with outcome y. f = t_val/sqrt(dof)
f_q = self.percent_change_estimate * partial_cohen_f
t_alpha_df_1 = t.ppf(
alpha / 2, self.degree_of_freedom - 1
) # t-value threshold with alpha significance level and dof-1 degrees of freedom
f_critical = abs(t_alpha_df_1) / np.sqrt(self.degree_of_freedom - 1)
f_adjusted = f_q - f_critical
if f_adjusted < 0:
r_value = 0
else:
r_value = 0.5 * (np.sqrt(f_adjusted**4 + (4 * f_adjusted**2)) - f_adjusted**2)
if f_adjusted > 0 and f_q > 1 / f_critical:
r_value = (f_q**2 - f_critical**2) / (1 + f_q**2)
return r_value
def compute_bias_adjusted(self, r2tu_w, r2yu_tw):
"""
Computes the bias adjusted estimate, standard error, t-value, partial R2, confidence intervals
:param r2tu_w: partial r^2 from regressing unobserved confounder u on treatment t after conditioning on observed covariates w
:param r2yu_tw: partial r^2 from regressing unobserved confounder u on outcome y after conditioning on observed covariates w and treatment t
:returns: Python dictionary with information about partial R^2 of confounders with treatment and outcome and bias adjusted variables
"""
bias_factor = np.sqrt((r2yu_tw * r2tu_w) / (1 - r2tu_w))
bias = bias_factor * (self.standard_error * np.sqrt(self.degree_of_freedom))
if self.confounder_increases_estimate:
bias_adjusted_estimate = np.sign(self.estimate) * (abs(self.estimate) - bias)
else:
bias_adjusted_estimate = np.sign(self.estimate) * (abs(self.estimate) + bias)
bias_adjusted_se = (
np.sqrt((1 - r2yu_tw) / (1 - r2tu_w))
* self.standard_error
* np.sqrt(self.degree_of_freedom / (self.degree_of_freedom - 1))
)
bias_adjusted_t = (bias_adjusted_estimate - self.null_hypothesis_effect) / bias_adjusted_se
bias_adjusted_partial_r2 = bias_adjusted_t**2 / (
bias_adjusted_t**2 + (self.degree_of_freedom - 1)
) # partial r2 formula used with new t value and dof - 1
num_se = t.ppf(
self.significance_level / 2, self.degree_of_freedom
) # Number of standard errors within Confidence Interval
bias_adjusted_upper_CI = bias_adjusted_estimate - num_se * bias_adjusted_se
bias_adjusted_lower_CI = bias_adjusted_estimate + num_se * bias_adjusted_se
benchmarking_results = {
"r2tu_w": r2tu_w,
"r2yu_tw": r2yu_tw,
"bias_adjusted_estimate": bias_adjusted_estimate,
"bias_adjusted_se": bias_adjusted_se,
"bias_adjusted_t": bias_adjusted_t,
"bias_adjusted_lower_CI": bias_adjusted_lower_CI,
"bias_adjusted_upper_CI": bias_adjusted_upper_CI,
}
return benchmarking_results
def check_sensitivity(self, plot=True):
"""
Function to perform sensitivity analysis.
:param plot: plot = True generates a plot of point estimate and the variations with respect to unobserved confounding.
plot = False overrides the setting
:returns: instance of LinearSensitivityAnalyzer class
"""
self.standard_error = np.array(self.estimator_model.bse[1 : (len(self.treatment_name) + 1)])[0]
self.degree_of_freedom = int(self.estimator_model.df_resid)
self.estimate = np.array(self.estimator_model.params[1 : (len(self.treatment_name) + 1)])[0]
self.t_stats = np.array(self.estimator_model.tvalues[self.treatment_name])[0]
# partial R^2 (r2yt_w) is the proportion of variation in outcome uniquely explained by treatment
partial_r2 = self.partial_r2_func(self.estimator_model, self.treatment_name)
RVq = self.robustness_value_func()
RV_qalpha = self.robustness_value_func(alpha=self.significance_level)
if self.confounder_increases_estimate:
self.null_hypothesis_effect = self.estimate * (1 - self.percent_change_estimate)
else:
self.null_hypothesis_effect = self.estimate * (1 + self.percent_change_estimate)
self.t_stats = (self.estimate - self.null_hypothesis_effect) / self.standard_error
self.partial_f2 = self.t_stats**2 / self.degree_of_freedom
# build a new regression model by considering treatment variables as outcome
treatment_linear_model = self.treatment_regression()
# r2twj_w is partial R^2 of covariate wj with treatment "t", after conditioning on covariates w(excluding wj)
# r2ywj_tw is partial R^2 of covariate wj with outcome "y", after conditioning on covariates w(excluding wj) and treatment "t"
self.r2twj_w = []
self.r2ywj_tw = []
for covariate in self.benchmark_common_causes:
self.r2ywj_tw.append(self.partial_r2_func(self.estimator_model, covariate))
self.r2twj_w.append(self.partial_r2_func(treatment_linear_model, covariate))
for i in range(len(self.benchmark_common_causes)):
r2twj_w = self.r2twj_w[i]
r2ywj_tw = self.r2ywj_tw[i]
# r2tu_w is the partial r^2 from regressing u on t after conditioning on w
self.r2tu_w = self.frac_strength_treatment * (r2twj_w / (1 - r2twj_w))
if any(val >= 1 for val in self.r2tu_w):
raise ValueError("r2tu_w can not be >= 1. Try a lower frac_strength_treatment value")
r2uwj_wt = (
self.frac_strength_treatment
* (r2twj_w**2)
/ ((1 - self.frac_strength_treatment * r2twj_w) * (1 - r2twj_w))
)
if any(val >= 1 for val in r2uwj_wt):
raise ValueError("r2uwj_wt can not be >= 1. Try a lower frac_strength_treatment value")
self.r2yu_tw = ((np.sqrt(self.frac_strength_outcome) + np.sqrt(r2uwj_wt)) / np.sqrt(1 - r2uwj_wt)) ** 2 * (
r2ywj_tw / (1 - r2ywj_tw)
)
if any(val > 1 for val in self.r2yu_tw):
for i in range(len(self.r2yu_tw)):
if self.r2yu_tw[i] > 1:
self.r2yu_tw[i] = 1
self.logger.warning(
"Warning: r2yu_tw can not be > 1. Try a lower frac_strength_treatment. Setting r2yu_tw to 1"
)
# Compute bias adjusted terms
self.benchmarking_results = self.compute_bias_adjusted(self.r2tu_w, self.r2yu_tw)
if plot == True:
self.plot()
self.stats = {
"estimate": self.estimate,
"standard_error": self.standard_error,
"degree of freedom": self.degree_of_freedom,
"t_statistic": self.t_stats,
"r2yt_w": partial_r2,
"partial_f2": self.partial_f2,
"robustness_value": RVq,
"robustness_value_alpha": RV_qalpha,
}
self.benchmarking_results = pd.DataFrame.from_dict(self.benchmarking_results)
return self
def plot_estimate(self, r2tu_w, r2yu_tw):
"""
Computes the contours, threshold line and bounds for plotting estimates.
Contour lines (z - axis) correspond to the adjusted estimate values for different values of r2tu_w (x) and r2yu_tw (y).
:param r2tu_w: hypothetical partial R^2 of confounder with treatment(x - axis)
:param r2yu_tw: hypothetical partial R^2 of confounder with outcome(y - axis)
:returns:
contour_values : values of contour lines for the plot
critical_estimate : threshold point
estimate_bounds : estimate values for unobserved confounders (bias adjusted estimates)
"""
critical_estimate = self.null_hypothesis_effect
contour_values = np.zeros((len(r2yu_tw), len(r2tu_w)))
for i in range(len(r2yu_tw)):
y = r2tu_w[i]
for j in range(len(r2tu_w)):
x = r2yu_tw[j]
benchmarking_results = self.compute_bias_adjusted(r2tu_w=x, r2yu_tw=y)
estimate = benchmarking_results["bias_adjusted_estimate"]
contour_values[i][j] = estimate
estimate_bounds = self.benchmarking_results["bias_adjusted_estimate"]
return contour_values, critical_estimate, estimate_bounds
def plot_t(self, r2tu_w, r2yu_tw):
"""
Computes the contours, threshold line and bounds for plotting t.
Contour lines (z - axis) correspond to the adjusted t values for different values of r2tu_w (x) and r2yu_tw (y).
:param r2tu_w: hypothetical partial R^2 of confounder with treatment(x - axis)
:param r2yu_tw: hypothetical partial R^2 of confounder with outcome(y - axis)
:returns:
contour_values : values of contour lines for the plot
critical_t : threshold point
t_bounds : t-value for unobserved confounders (bias adjusted t values)
"""
t_alpha_df_1 = t.ppf(
self.significance_level / 2, self.degree_of_freedom - 1
) # t-value threshold with alpha significance level and dof-1 degrees of freedom
critical_t = abs(t_alpha_df_1) * np.sign(self.t_stats)
contour_values = []
for x in r2tu_w:
contour = []
for y in r2yu_tw:
benchmarking_results = self.compute_bias_adjusted(r2tu_w=x, r2yu_tw=y)
t_value = benchmarking_results["bias_adjusted_t"]
contour.append(t_value)
contour_values.append(contour)
t_bounds = self.benchmarking_results["bias_adjusted_t"]
return contour_values, critical_t, t_bounds
def plot(
self,
plot_type="estimate",
critical_value=None,
x_limit=0.8,
y_limit=0.8,
num_points_per_contour=200,
plot_size=(7, 7),
contours_color="blue",
critical_contour_color="red",
label_fontsize=9,
contour_linewidths=0.75,
contour_linestyles="solid",
contours_label_color="black",
critical_label_color="red",
unadjusted_estimate_marker="D",
unadjusted_estimate_color="black",
adjusted_estimate_marker="^",
adjusted_estimate_color="red",
legend_position=(1.6, 0.6),
):
"""
Plots and summarizes the sensitivity bounds as a contour plot, as they vary with the partial R^2 of the unobserved confounder(s) with the treatment and the outcome
Two types of plots can be generated, based on adjusted estimates or adjusted t-values
X-axis: Partial R^2 of treatment and unobserved confounder(s)
Y-axis: Partial R^2 of outcome and unobserved confounder(s)
We also plot bounds on the partial R^2 of the unobserved confounders obtained from observed covariates
:param plot_type: "estimate" or "t-value"
:param critical_value: special reference value of the estimate or t-value that will be highlighted in the plot
:param x_limit: plot's maximum x_axis value (default = 0.8)
        :param y_limit: plot's maximum y_axis value (default = 0.8)
:param num_points_per_contour: number of points to calculate and plot each contour line (default = 200)
:param plot_size: tuple denoting the size of the plot (default = (7,7))
:param contours_color: color of contour line (default = blue)
String or array. If array, lines will be plotted with the specific color in ascending order.
:param critical_contour_color: color of threshold line (default = red)
:param label_fontsize: fontsize for labelling contours (default = 9)
:param contour_linewidths: linewidths for contours (default = 0.75)
:param contour_linestyles: linestyles for contours (default = "solid")
See : https://matplotlib.org/3.5.0/gallery/lines_bars_and_markers/linestyles.html for more examples
:param contours_label_color: color of contour line label (default = black)
:param critical_label_color: color of threshold line label (default = red)
:param unadjusted_estimate_marker: marker type for unadjusted estimate in the plot (default = 'D')
See: https://matplotlib.org/stable/api/markers_api.html
        :param unadjusted_estimate_color: marker color for unadjusted estimate in the plot (default = "black")
        :param adjusted_estimate_marker: marker type for bias adjusted estimates in the plot (default = '^')
        :param adjusted_estimate_color: marker color for bias adjusted estimates in the plot (default = "red")
:param legend_position:tuple denoting the position of the legend (default = (1.6, 0.6))
"""
# Plotting the contour plot
if plot_type == "estimate":
critical_value = 0 # default value of estimate
else:
critical_value = 2 # default t-value (usual approx for 95% CI)
fig, ax = plt.subplots(1, 1, figsize=plot_size)
ax.set_title("Sensitivity contour plot of %s" % plot_type)
ax.set_xlabel("Partial R^2 of confounder with treatment")
ax.set_ylabel("Partial R^2 of confounder with outcome")
for i in range(len(self.r2tu_w)):
x = self.r2tu_w[i]
y = self.r2yu_tw[i]
if x > 0.8 or y > 0.8:
x_limit = 0.99
y_limit = 0.99
break
r2tu_w = np.arange(0.0, x_limit, x_limit / num_points_per_contour)
r2yu_tw = np.arange(0.0, y_limit, y_limit / num_points_per_contour)
unadjusted_point_estimate = None
if plot_type == "estimate":
contour_values, critical_value, bound_values = self.plot_estimate(r2tu_w, r2yu_tw)
unadjusted_estimate = self.estimate
unadjusted_point_estimate = unadjusted_estimate
elif plot_type == "t-value":
contour_values, critical_value, bound_values = self.plot_t(r2tu_w, r2yu_tw)
unadjusted_t = self.t_stats
unadjusted_point_estimate = unadjusted_t
else:
raise ValueError("Current plotting method only supports 'estimate' and 't-value' ")
# Adding contours
contour_plot = ax.contour(
r2tu_w,
r2yu_tw,
contour_values,
colors=contours_color,
linewidths=contour_linewidths,
linestyles=contour_linestyles,
)
ax.clabel(contour_plot, inline=1, fontsize=label_fontsize, colors=contours_label_color)
# Adding threshold contour line
contour_plot = ax.contour(
r2tu_w,
r2yu_tw,
contour_values,
colors=critical_contour_color,
linewidths=contour_linewidths,
levels=[critical_value],
)
ax.clabel(contour_plot, [critical_value], inline=1, fontsize=label_fontsize, colors=critical_label_color)
# Adding unadjusted point estimate
ax.scatter(
[0],
[0],
marker=unadjusted_estimate_marker,
color=unadjusted_estimate_color,
label="Unadjusted({:1.2f})".format(unadjusted_point_estimate),
)
# Adding bounds to partial R^2 values for given strength of confounders
for i in range(len(self.frac_strength_treatment)):
frac_strength_treatment = self.frac_strength_treatment[i]
frac_strength_outcome = self.frac_strength_outcome[i]
if frac_strength_treatment == frac_strength_outcome:
signs = str(round(frac_strength_treatment, 2))
else:
signs = str(round(frac_strength_treatment, 2)) + "/" + str(round(frac_strength_outcome, 2))
label = (
str(i + 1)
+ " "
+ signs
+ " X "
+ str(self.original_benchmark_covariates)
+ " ({:1.2f}) ".format(bound_values[i])
)
ax.scatter(
self.r2tu_w[i],
self.r2yu_tw[i],
color=adjusted_estimate_color,
marker=adjusted_estimate_marker,
label=label,
)
ax.annotate(str(i + 1), (self.r2tu_w[i] + 0.005, self.r2yu_tw[i] + 0.005))
ax.legend(bbox_to_anchor=legend_position)
plt.show()
def __str__(self):
        s = "Sensitivity Analysis to Unobserved Confounding using R^2 parameterization\n\n"
s += "Unadjusted Estimates of Treatment {0} :\n".format(self.original_treatment_name)
s += "Coefficient Estimate : {0}\n".format(self.estimate)
s += "Degree of Freedom : {0}\n".format(self.degree_of_freedom)
s += "Standard Error : {0}\n".format(self.standard_error)
s += "t-value : {0}\n".format(self.t_stats)
s += "F^2 value : {0}\n\n".format(self.partial_f2)
s += "Sensitivity Statistics : \n"
s += "Partial R2 of treatment with outcome : {0}\n".format(self.stats["r2yt_w"])
s += "Robustness Value : {0}\n\n".format(self.stats["robustness_value"])
s += "Interpretation of results :\n"
        s += "Any confounder explaining less than {0}% of the residual variance of both the treatment and the outcome would not be strong enough to explain away the observed effect, i.e. bring down the estimate to 0 \n\n".format(
round(self.stats["robustness_value"] * 100, 2)
)
        s += "For a significance level of {0}%, any confounder explaining more than {1}% of the residual variance of both the treatment and the outcome would be strong enough to make the estimated effect not 'statistically significant'\n\n".format(
self.significance_level * 100, round(self.stats["robustness_value_alpha"] * 100, 2)
)
s += "If confounders explained 100% of the residual variance of the outcome, they would need to explain at least {0}% of the residual variance of the treatment to bring down the estimated effect to 0\n".format(
round(self.stats["r2yt_w"] * 100, 2)
)
return s
| import logging
import sys
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import t
from dowhy.utils.api import parse_state
class LinearSensitivityAnalyzer:
"""
Class to perform sensitivity analysis
See: https://carloscinelli.com/files/Cinelli%20and%20Hazlett%20(2020)%20-%20Making%20Sense%20of%20Sensitivity.pdf
:param estimator: linear estimator of the causal model
:param data: Pandas dataframe
:param treatment_name: name of treatment
    :param percent_change_estimate: the proportional reduction of the treatment estimate that would alter the conclusions (default = 1)
        if percent_change_estimate = 1, the robustness value describes the strength of association that confounders must have with treatment and outcome in order to reduce the estimate by 100%, i.e. bring it down to 0.
:param null_hypothesis_effect: assumed effect under the null hypothesis
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = True)
:param benchmark_common_causes: names of variables for bounding strength of confounders
    :param significance_level: significance level for statistical inference (default = 0.05)
:param frac_strength_treatment: strength of association between unobserved confounder and treatment compared to benchmark covariate
:param frac_strength_outcome: strength of association between unobserved confounder and outcome compared to benchmark covariate
:param common_causes_order: The order of column names in OLS regression data
"""
def __init__(
self,
estimator=None,
data=None,
treatment_name=None,
percent_change_estimate=1.0,
significance_level=0.05,
confounder_increases_estimate=True,
benchmark_common_causes=None,
null_hypothesis_effect=0,
frac_strength_treatment=None,
frac_strength_outcome=None,
common_causes_order=None,
):
self.data = data
self.treatment_name = []
        # original_treatment_name: stores original variable names for labelling
self.original_treatment_name = treatment_name
for t in range(len(treatment_name)):
self.treatment_name.append("x" + str(t + 1))
self.percent_change_estimate = percent_change_estimate
self.significance_level = significance_level
self.confounder_increases_estimate = confounder_increases_estimate
self.estimator = estimator
self.estimator_model = estimator.model
self.null_hypothesis_effect = null_hypothesis_effect
# common_causes_map : maps the original variable names to variable names in OLS regression
self.common_causes_map = {}
for i in range(len(common_causes_order)):
self.common_causes_map[common_causes_order[i]] = "x" + str(len(self.treatment_name) + i + 1)
# benchmark_common_causes: stores variable names in terms of regression model variables
benchmark_common_causes = parse_state(benchmark_common_causes)
self.benchmark_common_causes = []
# original_benchmark_covariates: stores original variable names for labelling
self.original_benchmark_covariates = benchmark_common_causes
for i in range(len(benchmark_common_causes)):
self.benchmark_common_causes.append(self.common_causes_map[benchmark_common_causes[i]])
if type(frac_strength_treatment) in [int, list, float]:
self.frac_strength_treatment = np.array(frac_strength_treatment)
if type(frac_strength_outcome) in [int, list, float]:
self.frac_strength_outcome = np.array(frac_strength_outcome)
# estimate: estimate of regression
self.estimate = None
# degree_of_freedom: degree of freedom of error in regression
self.degree_of_freedom = None
# standard_error: standard error in regression
self.standard_error = None
# t_stats: Treatment coefficient t-value - measures how many standard errors the estimate is away from zero.
self.t_stats = None
# partial_f2: value to determine if a regression model and a nested version of it have a statistically significant difference between them
self.partial_f2 = None
# r2tu_w: partial R^2 of unobserved confounder "u" with treatment "t", after conditioning on observed covariates "w"
self.r2tu_w = None
# r2yu_tw: partial R^2 of unobserved confounder "u" with outcome "y", after conditioning on observed covariates "w" and treatment "t"
self.r2yu_tw = None
# r2twj_w: partial R^2 of observed covariate wj with treatment "t", after conditioning on observed covariates "w" excluding wj
self.r2twj_w = None
# r2ywj_tw: partial R^2 of observed covariate wj with outcome "y", after conditioning on observed covariates "w" (excluding wj) and treatment "t"
self.r2ywj_tw = None
# benchmarking_results: dataframe containing information about bounds and bias adjusted terms
self.benchmarking_results = None
# stats: dictionary containing information like robustness value, partial R^2, estimate, standard error , degree of freedom, partial f^2, t-statistic
self.stats = None
self.logger = logging.getLogger(__name__)
def treatment_regression(self):
"""
Function to perform regression with treatment as outcome
:returns: new OLS regression model
"""
features = self.estimator._observed_common_causes.copy()
treatment_df = self.estimator._treatment.copy()
features = sm.tools.add_constant(features)
features.rename(columns=self.common_causes_map, inplace=True)
model = sm.OLS(treatment_df, features)
estimator_model = model.fit()
return estimator_model
def partial_r2_func(self, estimator_model=None, treatment=None):
"""
Computes the partial R^2 of regression model
:param estimator_model: Linear regression model
:param treatment: treatment name
:returns: partial R^2 value
"""
estimate = estimator_model.params[treatment]
degree_of_freedom = int(estimator_model.df_resid)
if np.isscalar(estimate): # for single covariate
t_stats = estimator_model.tvalues[treatment]
return t_stats**2 / (t_stats**2 + degree_of_freedom)
else: # compute for a group of covariates
covariance_matrix = estimator_model.cov_params().loc[treatment, :][treatment]
n = len(estimate) # number of parameters in model
f_stat = (
np.matmul(np.matmul(estimate.values.T, np.linalg.inv(covariance_matrix.values)), estimate.values) / n
)
return f_stat * n / (f_stat * n + degree_of_freedom)
def robustness_value_func(self, alpha=1.0):
"""
Function to calculate the robustness value.
It is the minimum strength of association that confounders must have with treatment and outcome to change conclusions.
Robustness value describes how strong the association must be in order to reduce the estimated effect by (100 * percent_change_estimate)%.
Robustness value close to 1 means the treatment effect can handle strong confounders explaining almost all residual variation of the treatment and the outcome.
Robustness value close to 0 means that even very weak confounders can also change the results.
        :param alpha: significance level used for the critical t-value threshold (default = 1)
:returns: robustness value
"""
partial_cohen_f = abs(
self.t_stats / np.sqrt(self.degree_of_freedom)
) # partial f of treatment t with outcome y. f = t_val/sqrt(dof)
f_q = self.percent_change_estimate * partial_cohen_f
t_alpha_df_1 = t.ppf(
alpha / 2, self.degree_of_freedom - 1
) # t-value threshold with alpha significance level and dof-1 degrees of freedom
f_critical = abs(t_alpha_df_1) / np.sqrt(self.degree_of_freedom - 1)
f_adjusted = f_q - f_critical
if f_adjusted < 0:
r_value = 0
else:
r_value = 0.5 * (np.sqrt(f_adjusted**4 + (4 * f_adjusted**2)) - f_adjusted**2)
if f_adjusted > 0 and f_q > 1 / f_critical:
r_value = (f_q**2 - f_critical**2) / (1 + f_q**2)
return r_value
def compute_bias_adjusted(self, r2tu_w, r2yu_tw):
"""
Computes the bias adjusted estimate, standard error, t-value, partial R2, confidence intervals
:param r2tu_w: partial r^2 from regressing unobserved confounder u on treatment t after conditioning on observed covariates w
:param r2yu_tw: partial r^2 from regressing unobserved confounder u on outcome y after conditioning on observed covariates w and treatment t
:returns: Python dictionary with information about partial R^2 of confounders with treatment and outcome and bias adjusted variables
"""
bias_factor = np.sqrt((r2yu_tw * r2tu_w) / (1 - r2tu_w))
bias = bias_factor * (self.standard_error * np.sqrt(self.degree_of_freedom))
if self.confounder_increases_estimate:
bias_adjusted_estimate = np.sign(self.estimate) * (abs(self.estimate) - bias)
else:
bias_adjusted_estimate = np.sign(self.estimate) * (abs(self.estimate) + bias)
bias_adjusted_se = (
np.sqrt((1 - r2yu_tw) / (1 - r2tu_w))
* self.standard_error
* np.sqrt(self.degree_of_freedom / (self.degree_of_freedom - 1))
)
bias_adjusted_t = (bias_adjusted_estimate - self.null_hypothesis_effect) / bias_adjusted_se
bias_adjusted_partial_r2 = bias_adjusted_t**2 / (
bias_adjusted_t**2 + (self.degree_of_freedom - 1)
) # partial r2 formula used with new t value and dof - 1
num_se = t.ppf(
self.significance_level / 2, self.degree_of_freedom
) # Number of standard errors within Confidence Interval
bias_adjusted_upper_CI = bias_adjusted_estimate - num_se * bias_adjusted_se
bias_adjusted_lower_CI = bias_adjusted_estimate + num_se * bias_adjusted_se
benchmarking_results = {
"r2tu_w": r2tu_w,
"r2yu_tw": r2yu_tw,
"bias_adjusted_estimate": bias_adjusted_estimate,
"bias_adjusted_se": bias_adjusted_se,
"bias_adjusted_t": bias_adjusted_t,
"bias_adjusted_lower_CI": bias_adjusted_lower_CI,
"bias_adjusted_upper_CI": bias_adjusted_upper_CI,
}
return benchmarking_results
def check_sensitivity(self, plot=True):
"""
Function to perform sensitivity analysis.
:param plot: plot = True generates a plot of point estimate and the variations with respect to unobserved confounding.
plot = False overrides the setting
:returns: instance of LinearSensitivityAnalyzer class
"""
self.standard_error = np.array(self.estimator_model.bse[1 : (len(self.treatment_name) + 1)])[0]
self.degree_of_freedom = int(self.estimator_model.df_resid)
self.estimate = np.array(self.estimator_model.params[1 : (len(self.treatment_name) + 1)])[0]
self.t_stats = np.array(self.estimator_model.tvalues[self.treatment_name])[0]
# partial R^2 (r2yt_w) is the proportion of variation in outcome uniquely explained by treatment
partial_r2 = self.partial_r2_func(self.estimator_model, self.treatment_name)
RVq = self.robustness_value_func()
RV_qalpha = self.robustness_value_func(alpha=self.significance_level)
if self.confounder_increases_estimate:
self.null_hypothesis_effect = self.estimate * (1 - self.percent_change_estimate)
else:
self.null_hypothesis_effect = self.estimate * (1 + self.percent_change_estimate)
self.t_stats = (self.estimate - self.null_hypothesis_effect) / self.standard_error
self.partial_f2 = self.t_stats**2 / self.degree_of_freedom
# build a new regression model by considering treatment variables as outcome
treatment_linear_model = self.treatment_regression()
# r2twj_w is partial R^2 of covariate wj with treatment "t", after conditioning on covariates w(excluding wj)
# r2ywj_tw is partial R^2 of covariate wj with outcome "y", after conditioning on covariates w(excluding wj) and treatment "t"
self.r2twj_w = []
self.r2ywj_tw = []
for covariate in self.benchmark_common_causes:
self.r2ywj_tw.append(self.partial_r2_func(self.estimator_model, covariate))
self.r2twj_w.append(self.partial_r2_func(treatment_linear_model, covariate))
for i in range(len(self.benchmark_common_causes)):
r2twj_w = self.r2twj_w[i]
r2ywj_tw = self.r2ywj_tw[i]
# r2tu_w is the partial r^2 from regressing u on t after conditioning on w
self.r2tu_w = self.frac_strength_treatment * (r2twj_w / (1 - r2twj_w))
if any(val >= 1 for val in self.r2tu_w):
raise ValueError("r2tu_w can not be >= 1. Try a lower frac_strength_treatment value")
r2uwj_wt = (
self.frac_strength_treatment
* (r2twj_w**2)
/ ((1 - self.frac_strength_treatment * r2twj_w) * (1 - r2twj_w))
)
if any(val >= 1 for val in r2uwj_wt):
raise ValueError("r2uwj_wt can not be >= 1. Try a lower frac_strength_treatment value")
self.r2yu_tw = ((np.sqrt(self.frac_strength_outcome) + np.sqrt(r2uwj_wt)) / np.sqrt(1 - r2uwj_wt)) ** 2 * (
r2ywj_tw / (1 - r2ywj_tw)
)
if any(val > 1 for val in self.r2yu_tw):
for i in range(len(self.r2yu_tw)):
if self.r2yu_tw[i] > 1:
self.r2yu_tw[i] = 1
self.logger.warning(
"Warning: r2yu_tw can not be > 1. Try a lower frac_strength_treatment. Setting r2yu_tw to 1"
)
# Compute bias adjusted terms
self.benchmarking_results = self.compute_bias_adjusted(self.r2tu_w, self.r2yu_tw)
if plot == True:
self.plot()
self.stats = {
"estimate": self.estimate,
"standard_error": self.standard_error,
"degree of freedom": self.degree_of_freedom,
"t_statistic": self.t_stats,
"r2yt_w": partial_r2,
"partial_f2": self.partial_f2,
"robustness_value": RVq,
"robustness_value_alpha": RV_qalpha,
}
self.benchmarking_results = pd.DataFrame.from_dict(self.benchmarking_results)
return self
def plot_estimate(self, r2tu_w, r2yu_tw):
"""
Computes the contours, threshold line and bounds for plotting estimates.
Contour lines (z - axis) correspond to the adjusted estimate values for different values of r2tu_w (x) and r2yu_tw (y).
:param r2tu_w: hypothetical partial R^2 of confounder with treatment(x - axis)
:param r2yu_tw: hypothetical partial R^2 of confounder with outcome(y - axis)
:returns:
contour_values : values of contour lines for the plot
critical_estimate : threshold point
estimate_bounds : estimate values for unobserved confounders (bias adjusted estimates)
"""
critical_estimate = self.null_hypothesis_effect
contour_values = np.zeros((len(r2yu_tw), len(r2tu_w)))
for i in range(len(r2yu_tw)):
y = r2yu_tw[i]
for j in range(len(r2tu_w)):
x = r2tu_w[j]
benchmarking_results = self.compute_bias_adjusted(r2tu_w=x, r2yu_tw=y)
estimate = benchmarking_results["bias_adjusted_estimate"]
contour_values[i][j] = estimate
estimate_bounds = self.benchmarking_results["bias_adjusted_estimate"]
return contour_values, critical_estimate, estimate_bounds
def plot_t(self, r2tu_w, r2yu_tw):
"""
Computes the contours, threshold line and bounds for plotting t.
Contour lines (z - axis) correspond to the adjusted t values for different values of r2tu_w (x) and r2yu_tw (y).
:param r2tu_w: hypothetical partial R^2 of confounder with treatment(x - axis)
:param r2yu_tw: hypothetical partial R^2 of confounder with outcome(y - axis)
:returns:
contour_values : values of contour lines for the plot
critical_t : threshold point
t_bounds : t-value for unobserved confounders (bias adjusted t values)
"""
t_alpha_df_1 = t.ppf(
self.significance_level / 2, self.degree_of_freedom - 1
) # t-value threshold with alpha significance level and dof-1 degrees of freedom
critical_t = abs(t_alpha_df_1) * np.sign(self.t_stats)
contour_values = []
for x in r2tu_w:
contour = []
for y in r2yu_tw:
benchmarking_results = self.compute_bias_adjusted(r2tu_w=x, r2yu_tw=y)
t_value = benchmarking_results["bias_adjusted_t"]
contour.append(t_value)
contour_values.append(contour)
t_bounds = self.benchmarking_results["bias_adjusted_t"]
return contour_values, critical_t, t_bounds
def plot(
self,
plot_type="estimate",
critical_value=None,
x_limit=0.8,
y_limit=0.8,
num_points_per_contour=200,
plot_size=(7, 7),
contours_color="blue",
critical_contour_color="red",
label_fontsize=9,
contour_linewidths=0.75,
contour_linestyles="solid",
contours_label_color="black",
critical_label_color="red",
unadjusted_estimate_marker="D",
unadjusted_estimate_color="black",
adjusted_estimate_marker="^",
adjusted_estimate_color="red",
legend_position=(1.6, 0.6),
):
"""
Plots and summarizes the sensitivity bounds as a contour plot, as they vary with the partial R^2 of the unobserved confounder(s) with the treatment and the outcome
Two types of plots can be generated, based on adjusted estimates or adjusted t-values
X-axis: Partial R^2 of treatment and unobserved confounder(s)
Y-axis: Partial R^2 of outcome and unobserved confounder(s)
We also plot bounds on the partial R^2 of the unobserved confounders obtained from observed covariates
:param plot_type: "estimate" or "t-value"
:param critical_value: special reference value of the estimate or t-value that will be highlighted in the plot
:param x_limit: plot's maximum x_axis value (default = 0.8)
        :param y_limit: plot's maximum y_axis value (default = 0.8)
:param num_points_per_contour: number of points to calculate and plot each contour line (default = 200)
:param plot_size: tuple denoting the size of the plot (default = (7,7))
:param contours_color: color of contour line (default = blue)
String or array. If array, lines will be plotted with the specific color in ascending order.
:param critical_contour_color: color of threshold line (default = red)
:param label_fontsize: fontsize for labelling contours (default = 9)
:param contour_linewidths: linewidths for contours (default = 0.75)
:param contour_linestyles: linestyles for contours (default = "solid")
See : https://matplotlib.org/3.5.0/gallery/lines_bars_and_markers/linestyles.html for more examples
:param contours_label_color: color of contour line label (default = black)
:param critical_label_color: color of threshold line label (default = red)
:param unadjusted_estimate_marker: marker type for unadjusted estimate in the plot (default = 'D')
See: https://matplotlib.org/stable/api/markers_api.html
        :param unadjusted_estimate_color: marker color for unadjusted estimate in the plot (default = "black")
        :param adjusted_estimate_marker: marker type for bias adjusted estimates in the plot (default = '^')
        :param adjusted_estimate_color: marker color for bias adjusted estimates in the plot (default = "red")
:param legend_position:tuple denoting the position of the legend (default = (1.6, 0.6))
"""
# Plotting the contour plot
if plot_type == "estimate":
critical_value = 0 # default value of estimate
else:
critical_value = 2 # default t-value (usual approx for 95% CI)
fig, ax = plt.subplots(1, 1, figsize=plot_size)
ax.set_title("Sensitivity contour plot of %s" % plot_type)
ax.set_xlabel("Partial R^2 of confounder with treatment")
ax.set_ylabel("Partial R^2 of confounder with outcome")
for i in range(len(self.r2tu_w)):
x = self.r2tu_w[i]
y = self.r2yu_tw[i]
if x > 0.8 or y > 0.8:
x_limit = 0.99
y_limit = 0.99
break
r2tu_w = np.arange(0.0, x_limit, x_limit / num_points_per_contour)
r2yu_tw = np.arange(0.0, y_limit, y_limit / num_points_per_contour)
unadjusted_point_estimate = None
if plot_type == "estimate":
contour_values, critical_value, bound_values = self.plot_estimate(r2tu_w, r2yu_tw)
unadjusted_estimate = self.estimate
unadjusted_point_estimate = unadjusted_estimate
elif plot_type == "t-value":
contour_values, critical_value, bound_values = self.plot_t(r2tu_w, r2yu_tw)
unadjusted_t = self.t_stats
unadjusted_point_estimate = unadjusted_t
else:
raise ValueError("Current plotting method only supports 'estimate' and 't-value' ")
# Adding contours
contour_plot = ax.contour(
r2tu_w,
r2yu_tw,
contour_values,
colors=contours_color,
linewidths=contour_linewidths,
linestyles=contour_linestyles,
)
ax.clabel(contour_plot, inline=1, fontsize=label_fontsize, colors=contours_label_color)
# Adding threshold contour line
contour_plot = ax.contour(
r2tu_w,
r2yu_tw,
contour_values,
colors=critical_contour_color,
linewidths=contour_linewidths,
levels=[critical_value],
)
ax.clabel(contour_plot, [critical_value], inline=1, fontsize=label_fontsize, colors=critical_label_color)
# Adding unadjusted point estimate
ax.scatter(
[0],
[0],
marker=unadjusted_estimate_marker,
color=unadjusted_estimate_color,
label="Unadjusted({:1.2f})".format(unadjusted_point_estimate),
)
# Adding bounds to partial R^2 values for given strength of confounders
for i in range(len(self.frac_strength_treatment)):
frac_strength_treatment = self.frac_strength_treatment[i]
frac_strength_outcome = self.frac_strength_outcome[i]
if frac_strength_treatment == frac_strength_outcome:
signs = str(round(frac_strength_treatment, 2))
else:
signs = str(round(frac_strength_treatment, 2)) + "/" + str(round(frac_strength_outcome, 2))
label = (
str(i + 1)
+ " "
+ signs
+ " X "
+ str(self.original_benchmark_covariates)
+ " ({:1.2f}) ".format(bound_values[i])
)
ax.scatter(
self.r2tu_w[i],
self.r2yu_tw[i],
color=adjusted_estimate_color,
marker=adjusted_estimate_marker,
label=label,
)
ax.annotate(str(i + 1), (self.r2tu_w[i] + 0.005, self.r2yu_tw[i] + 0.005))
ax.legend(bbox_to_anchor=legend_position)
plt.show()
def __str__(self):
        s = "Sensitivity Analysis to Unobserved Confounding using R^2 parameterization\n\n"
s += "Unadjusted Estimates of Treatment {0} :\n".format(self.original_treatment_name)
s += "Coefficient Estimate : {0}\n".format(self.estimate)
s += "Degree of Freedom : {0}\n".format(self.degree_of_freedom)
s += "Standard Error : {0}\n".format(self.standard_error)
s += "t-value : {0}\n".format(self.t_stats)
s += "F^2 value : {0}\n\n".format(self.partial_f2)
s += "Sensitivity Statistics : \n"
s += "Partial R2 of treatment with outcome : {0}\n".format(self.stats["r2yt_w"])
s += "Robustness Value : {0}\n\n".format(self.stats["robustness_value"])
s += "Interpretation of results :\n"
        s += "Any confounder explaining less than {0}% of the residual variance of both the treatment and the outcome would not be strong enough to explain away the observed effect, i.e., bring down the estimate to 0 \n\n".format(
round(self.stats["robustness_value"] * 100, 2)
)
        s += "For a significance level of {0}%, any confounder explaining more than {1}% of the residual variance of both the treatment and the outcome would be strong enough to make the estimated effect not 'statistically significant'\n\n".format(
self.significance_level * 100, round(self.stats["robustness_value_alpha"] * 100, 2)
)
s += "If confounders explained 100% of the residual variance of the outcome, they would need to explain at least {0}% of the residual variance of the treatment to bring down the estimated effect to 0\n".format(
round(self.stats["r2yt_w"] * 100, 2)
)
return s
| anusha0409 | 81841c697bd5e80ecf9e731432305f6186666f1f | bb446c333f2256074304b0dec9cb5628d284b542 | should this be y=r2yu_tw? | amit-sharma | 382 |
py-why/dowhy | 478 | Adding Non Linear Sensitivity Analysis | This PR implements the non-parametric sensitivity analysis from Chernozhukov et al. https://arxiv.org/abs/2112.13398
It implements two sensitivity analyzers:
1. For Partial Linear DGPs and estimators like LinearDML
2. For general non-parametric DGPs and estimators like KernelDML.
The notebook in this PR provides an introduction to how the sensitivity bounds are calculated for the partial linear case. For general nonparametric DGPs, we need to estimate a special function called the Reisz representer. For binary treatment, it is exactly the difference in outcomes weighted by the propensity score. We therefore provide two options to learn the Reisz representer: 1) plugin_reisz, which uses the propensity score; and 2) a general estimator that uses a custom loss function. Both are implemented in the file reisz.py.
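For intuition, a minimal sketch of the plugin form for a binary treatment is shown below (the function name and exact signature are illustrative; the actual implementation in reisz.py may differ):
import numpy as np

def plugin_reisz_example(t, propensity):
    # alpha(T, W) = T / e(W) - (1 - T) / (1 - e(W)), where e(W) = P(T=1 | W).
    # Under unconfoundedness, E[alpha * Y] equals the IPW estimate of the ATE.
    t = np.asarray(t, dtype=float)
    e = np.asarray(propensity, dtype=float)
    return t / e - (1 - t) / (1 - e)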
Briefly, the sensitivity bounds depend on two parameters that denote the effect of the unobserved confounder on the treatment and the outcome. That's why we reuse the API of the `add_unobserved_common_cause` refutation method and expose this analysis as a new simulation_method="non-parametric-partial-R2" (a rough usage sketch follows the list below). The format of the plots is identical to that of the already implemented "linear-partial-r2" simulation method.
We provide two modes for the user.
1) User specifies the effect strength parameters themselves, as a range of values.
2) User benchmarks the effect strength parameters as a multiple of the same parameters for the observed common causes.
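As a rough usage sketch, assuming an existing CausalModel `model`, its identified estimand, and a previously computed `estimate` (the keyword names below mirror the linear analyzer's parameters and are illustrative; the merged refuter may name them slightly differently):
refutation = model.refute_estimate(
    identified_estimand,
    estimate,
    method_name="add_unobserved_common_cause",
    simulation_method="non-parametric-partial-R2",
    benchmark_common_causes=["W1"],      # "W1" is a placeholder observed common cause (mode 2)
    frac_strength_treatment=[1, 2, 3],   # confounder strength as multiples of W1's strength
    frac_strength_outcome=[1, 2, 3],
)
print(refutation)
# Mode 1 (alternative): pass plausible ranges of the two partial R^2 parameters directly
# instead of benchmarking against observed covariates.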
Signed-off-by: anusha <anushaagarwal2000.com> | null | 2022-06-20 14:37:11+00:00 | 2022-09-16 03:57:26+00:00 | dowhy/causal_refuters/linear_sensitivity_analyzer.py | import logging
import sys
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import t
from dowhy.utils.api import parse_state
class LinearSensitivityAnalyzer:
"""
Class to perform sensitivity analysis
See: https://carloscinelli.com/files/Cinelli%20and%20Hazlett%20(2020)%20-%20Making%20Sense%20of%20Sensitivity.pdf
:param estimator: linear estimator of the causal model
:param data: Pandas dataframe
:param treatment_name: name of treatment
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1)
if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome in order to reduce the estimate by 100% i.e bring it down to 0.
:param null_hypothesis_effect: assumed effect under the null hypothesis
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = True)
:param benchmark_common_causes: names of variables for bounding strength of confounders
    :param significance_level: significance level for statistical inference (default = 0.05)
:param frac_strength_treatment: strength of association between unobserved confounder and treatment compared to benchmark covariate
:param frac_strength_outcome: strength of association between unobserved confounder and outcome compared to benchmark covariate
:param common_causes_order: The order of column names in OLS regression data
"""
def __init__(
self,
estimator=None,
data=None,
treatment_name=None,
percent_change_estimate=1.0,
significance_level=0.05,
confounder_increases_estimate=True,
benchmark_common_causes=None,
null_hypothesis_effect=0,
frac_strength_treatment=None,
frac_strength_outcome=None,
common_causes_order=None,
):
self.data = data
self.treatment_name = []
# original_treatment_name: : stores original variable names for labelling
self.original_treatment_name = treatment_name
for t in range(len(treatment_name)):
self.treatment_name.append("x" + str(t + 1))
self.percent_change_estimate = percent_change_estimate
self.significance_level = significance_level
self.confounder_increases_estimate = confounder_increases_estimate
self.estimator = estimator
self.estimator_model = estimator.model
self.null_hypothesis_effect = null_hypothesis_effect
# common_causes_map : maps the original variable names to variable names in OLS regression
self.common_causes_map = {}
for i in range(len(common_causes_order)):
self.common_causes_map[common_causes_order[i]] = "x" + str(len(self.treatment_name) + i + 1)
# benchmark_common_causes: stores variable names in terms of regression model variables
benchmark_common_causes = parse_state(benchmark_common_causes)
self.benchmark_common_causes = []
# original_benchmark_covariates: stores original variable names for labelling
self.original_benchmark_covariates = benchmark_common_causes
for i in range(len(benchmark_common_causes)):
self.benchmark_common_causes.append(self.common_causes_map[benchmark_common_causes[i]])
if type(frac_strength_treatment) in [int, list, float]:
self.frac_strength_treatment = np.array(frac_strength_treatment)
if type(frac_strength_outcome) in [int, list, float]:
self.frac_strength_outcome = np.array(frac_strength_outcome)
# estimate: estimate of regression
self.estimate = None
# degree_of_freedom: degree of freedom of error in regression
self.degree_of_freedom = None
# standard_error: standard error in regression
self.standard_error = None
# t_stats: Treatment coefficient t-value - measures how many standard errors the estimate is away from zero.
self.t_stats = None
# partial_f2: value to determine if a regression model and a nested version of it have a statistically significant difference between them
self.partial_f2 = None
# r2tu_w: partial R^2 of unobserved confounder "u" with treatment "t", after conditioning on observed covariates "w"
self.r2tu_w = None
# r2yu_tw: partial R^2 of unobserved confounder "u" with outcome "y", after conditioning on observed covariates "w" and treatment "t"
self.r2yu_tw = None
# r2twj_w: partial R^2 of observed covariate wj with treatment "t", after conditioning on observed covariates "w" excluding wj
self.r2twj_w = None
# r2ywj_tw: partial R^2 of observed covariate wj with outcome "y", after conditioning on observed covariates "w" (excluding wj) and treatment "t"
self.r2ywj_tw = None
# benchmarking_results: dataframe containing information about bounds and bias adjusted terms
self.benchmarking_results = None
# stats: dictionary containing information like robustness value, partial R^2, estimate, standard error , degree of freedom, partial f^2, t-statistic
self.stats = None
self.logger = logging.getLogger(__name__)
def treatment_regression(self):
"""
Function to perform regression with treatment as outcome
:returns: new OLS regression model
"""
features = self.estimator._observed_common_causes.copy()
treatment_df = self.estimator._treatment.copy()
features = sm.tools.add_constant(features)
features.rename(columns=self.common_causes_map, inplace=True)
model = sm.OLS(treatment_df, features)
estimator_model = model.fit()
return estimator_model
def partial_r2_func(self, estimator_model=None, treatment=None):
"""
Computes the partial R^2 of regression model
:param estimator_model: Linear regression model
:param treatment: treatment name
:returns: partial R^2 value
"""
estimate = estimator_model.params[treatment]
degree_of_freedom = int(estimator_model.df_resid)
if np.isscalar(estimate): # for single covariate
t_stats = estimator_model.tvalues[treatment]
return t_stats**2 / (t_stats**2 + degree_of_freedom)
else: # compute for a group of covariates
covariance_matrix = estimator_model.cov_params().loc[treatment, :][treatment]
n = len(estimate) # number of parameters in model
f_stat = (
np.matmul(np.matmul(estimate.values.T, np.linalg.inv(covariance_matrix.values)), estimate.values) / n
)
return f_stat * n / (f_stat * n + degree_of_freedom)
def robustness_value_func(self, alpha=1.0):
"""
Function to calculate the robustness value.
It is the minimum strength of association that confounders must have with treatment and outcome to change conclusions.
Robustness value describes how strong the association must be in order to reduce the estimated effect by (100 * percent_change_estimate)%.
Robustness value close to 1 means the treatment effect can handle strong confounders explaining almost all residual variation of the treatment and the outcome.
Robustness value close to 0 means that even very weak confounders can also change the results.
        :param alpha: significance level (default = 1)
:returns: robustness value
"""
partial_cohen_f = abs(
self.t_stats / np.sqrt(self.degree_of_freedom)
) # partial f of treatment t with outcome y. f = t_val/sqrt(dof)
f_q = self.percent_change_estimate * partial_cohen_f
t_alpha_df_1 = t.ppf(
alpha / 2, self.degree_of_freedom - 1
) # t-value threshold with alpha significance level and dof-1 degrees of freedom
f_critical = abs(t_alpha_df_1) / np.sqrt(self.degree_of_freedom - 1)
f_adjusted = f_q - f_critical
if f_adjusted < 0:
r_value = 0
else:
r_value = 0.5 * (np.sqrt(f_adjusted**4 + (4 * f_adjusted**2)) - f_adjusted**2)
if f_adjusted > 0 and f_q > 1 / f_critical:
r_value = (f_q**2 - f_critical**2) / (1 + f_q**2)
return r_value
def compute_bias_adjusted(self, r2tu_w, r2yu_tw):
"""
Computes the bias adjusted estimate, standard error, t-value, partial R2, confidence intervals
:param r2tu_w: partial r^2 from regressing unobserved confounder u on treatment t after conditioning on observed covariates w
:param r2yu_tw: partial r^2 from regressing unobserved confounder u on outcome y after conditioning on observed covariates w and treatment t
:returns: Python dictionary with information about partial R^2 of confounders with treatment and outcome and bias adjusted variables
"""
bias_factor = np.sqrt((r2yu_tw * r2tu_w) / (1 - r2tu_w))
bias = bias_factor * (self.standard_error * np.sqrt(self.degree_of_freedom))
if self.confounder_increases_estimate:
bias_adjusted_estimate = np.sign(self.estimate) * (abs(self.estimate) - bias)
else:
bias_adjusted_estimate = np.sign(self.estimate) * (abs(self.estimate) + bias)
bias_adjusted_se = (
np.sqrt((1 - r2yu_tw) / (1 - r2tu_w))
* self.standard_error
* np.sqrt(self.degree_of_freedom / (self.degree_of_freedom - 1))
)
bias_adjusted_t = (bias_adjusted_estimate - self.null_hypothesis_effect) / bias_adjusted_se
bias_adjusted_partial_r2 = bias_adjusted_t**2 / (
bias_adjusted_t**2 + (self.degree_of_freedom - 1)
) # partial r2 formula used with new t value and dof - 1
num_se = t.ppf(
self.significance_level / 2, self.degree_of_freedom
) # Number of standard errors within Confidence Interval
bias_adjusted_upper_CI = bias_adjusted_estimate - num_se * bias_adjusted_se
bias_adjusted_lower_CI = bias_adjusted_estimate + num_se * bias_adjusted_se
benchmarking_results = {
"r2tu_w": r2tu_w,
"r2yu_tw": r2yu_tw,
"bias_adjusted_estimate": bias_adjusted_estimate,
"bias_adjusted_se": bias_adjusted_se,
"bias_adjusted_t": bias_adjusted_t,
"bias_adjusted_lower_CI": bias_adjusted_lower_CI,
"bias_adjusted_upper_CI": bias_adjusted_upper_CI,
}
return benchmarking_results
def check_sensitivity(self, plot=True):
"""
Function to perform sensitivity analysis.
:param plot: plot = True generates a plot of point estimate and the variations with respect to unobserved confounding.
plot = False overrides the setting
:returns: instance of LinearSensitivityAnalyzer class
"""
self.standard_error = np.array(self.estimator_model.bse[1 : (len(self.treatment_name) + 1)])[0]
self.degree_of_freedom = int(self.estimator_model.df_resid)
self.estimate = np.array(self.estimator_model.params[1 : (len(self.treatment_name) + 1)])[0]
self.t_stats = np.array(self.estimator_model.tvalues[self.treatment_name])[0]
# partial R^2 (r2yt_w) is the proportion of variation in outcome uniquely explained by treatment
partial_r2 = self.partial_r2_func(self.estimator_model, self.treatment_name)
RVq = self.robustness_value_func()
RV_qalpha = self.robustness_value_func(alpha=self.significance_level)
if self.confounder_increases_estimate:
self.null_hypothesis_effect = self.estimate * (1 - self.percent_change_estimate)
else:
self.null_hypothesis_effect = self.estimate * (1 + self.percent_change_estimate)
self.t_stats = (self.estimate - self.null_hypothesis_effect) / self.standard_error
self.partial_f2 = self.t_stats**2 / self.degree_of_freedom
# build a new regression model by considering treatment variables as outcome
treatment_linear_model = self.treatment_regression()
# r2twj_w is partial R^2 of covariate wj with treatment "t", after conditioning on covariates w(excluding wj)
# r2ywj_tw is partial R^2 of covariate wj with outcome "y", after conditioning on covariates w(excluding wj) and treatment "t"
self.r2twj_w = []
self.r2ywj_tw = []
for covariate in self.benchmark_common_causes:
self.r2ywj_tw.append(self.partial_r2_func(self.estimator_model, covariate))
self.r2twj_w.append(self.partial_r2_func(treatment_linear_model, covariate))
for i in range(len(self.benchmark_common_causes)):
r2twj_w = self.r2twj_w[i]
r2ywj_tw = self.r2ywj_tw[i]
# r2tu_w is the partial r^2 from regressing u on t after conditioning on w
self.r2tu_w = self.frac_strength_treatment * (r2twj_w / (1 - r2twj_w))
if any(val >= 1 for val in self.r2tu_w):
raise ValueError("r2tu_w can not be >= 1. Try a lower frac_strength_treatment value")
r2uwj_wt = (
self.frac_strength_treatment
* (r2twj_w**2)
/ ((1 - self.frac_strength_treatment * r2twj_w) * (1 - r2twj_w))
)
if any(val >= 1 for val in r2uwj_wt):
raise ValueError("r2uwj_wt can not be >= 1. Try a lower frac_strength_treatment value")
self.r2yu_tw = ((np.sqrt(self.frac_strength_outcome) + np.sqrt(r2uwj_wt)) / np.sqrt(1 - r2uwj_wt)) ** 2 * (
r2ywj_tw / (1 - r2ywj_tw)
)
if any(val > 1 for val in self.r2yu_tw):
for i in range(len(self.r2yu_tw)):
if self.r2yu_tw[i] > 1:
self.r2yu_tw[i] = 1
self.logger.warning(
"Warning: r2yu_tw can not be > 1. Try a lower frac_strength_treatment. Setting r2yu_tw to 1"
)
# Compute bias adjusted terms
self.benchmarking_results = self.compute_bias_adjusted(self.r2tu_w, self.r2yu_tw)
if plot == True:
self.plot()
self.stats = {
"estimate": self.estimate,
"standard_error": self.standard_error,
"degree of freedom": self.degree_of_freedom,
"t_statistic": self.t_stats,
"r2yt_w": partial_r2,
"partial_f2": self.partial_f2,
"robustness_value": RVq,
"robustness_value_alpha": RV_qalpha,
}
self.benchmarking_results = pd.DataFrame.from_dict(self.benchmarking_results)
return self
def plot_estimate(self, r2tu_w, r2yu_tw):
"""
Computes the contours, threshold line and bounds for plotting estimates.
Contour lines (z - axis) correspond to the adjusted estimate values for different values of r2tu_w (x) and r2yu_tw (y).
:param r2tu_w: hypothetical partial R^2 of confounder with treatment(x - axis)
:param r2yu_tw: hypothetical partial R^2 of confounder with outcome(y - axis)
:returns:
contour_values : values of contour lines for the plot
critical_estimate : threshold point
estimate_bounds : estimate values for unobserved confounders (bias adjusted estimates)
"""
critical_estimate = self.null_hypothesis_effect
contour_values = np.zeros((len(r2yu_tw), len(r2tu_w)))
for i in range(len(r2yu_tw)):
y = r2tu_w[i]
for j in range(len(r2tu_w)):
x = r2yu_tw[j]
benchmarking_results = self.compute_bias_adjusted(r2tu_w=x, r2yu_tw=y)
estimate = benchmarking_results["bias_adjusted_estimate"]
contour_values[i][j] = estimate
estimate_bounds = self.benchmarking_results["bias_adjusted_estimate"]
return contour_values, critical_estimate, estimate_bounds
def plot_t(self, r2tu_w, r2yu_tw):
"""
Computes the contours, threshold line and bounds for plotting t.
Contour lines (z - axis) correspond to the adjusted t values for different values of r2tu_w (x) and r2yu_tw (y).
:param r2tu_w: hypothetical partial R^2 of confounder with treatment(x - axis)
:param r2yu_tw: hypothetical partial R^2 of confounder with outcome(y - axis)
:returns:
contour_values : values of contour lines for the plot
critical_t : threshold point
t_bounds : t-value for unobserved confounders (bias adjusted t values)
"""
t_alpha_df_1 = t.ppf(
self.significance_level / 2, self.degree_of_freedom - 1
) # t-value threshold with alpha significance level and dof-1 degrees of freedom
critical_t = abs(t_alpha_df_1) * np.sign(self.t_stats)
contour_values = []
for x in r2tu_w:
contour = []
for y in r2yu_tw:
benchmarking_results = self.compute_bias_adjusted(r2tu_w=x, r2yu_tw=y)
t_value = benchmarking_results["bias_adjusted_t"]
contour.append(t_value)
contour_values.append(contour)
t_bounds = self.benchmarking_results["bias_adjusted_t"]
return contour_values, critical_t, t_bounds
def plot(
self,
plot_type="estimate",
critical_value=None,
x_limit=0.8,
y_limit=0.8,
num_points_per_contour=200,
plot_size=(7, 7),
contours_color="blue",
critical_contour_color="red",
label_fontsize=9,
contour_linewidths=0.75,
contour_linestyles="solid",
contours_label_color="black",
critical_label_color="red",
unadjusted_estimate_marker="D",
unadjusted_estimate_color="black",
adjusted_estimate_marker="^",
adjusted_estimate_color="red",
legend_position=(1.6, 0.6),
):
"""
Plots and summarizes the sensitivity bounds as a contour plot, as they vary with the partial R^2 of the unobserved confounder(s) with the treatment and the outcome
Two types of plots can be generated, based on adjusted estimates or adjusted t-values
X-axis: Partial R^2 of treatment and unobserved confounder(s)
Y-axis: Partial R^2 of outcome and unobserved confounder(s)
We also plot bounds on the partial R^2 of the unobserved confounders obtained from observed covariates
:param plot_type: "estimate" or "t-value"
:param critical_value: special reference value of the estimate or t-value that will be highlighted in the plot
:param x_limit: plot's maximum x_axis value (default = 0.8)
        :param y_limit: plot's maximum y_axis value (default = 0.8)
:param num_points_per_contour: number of points to calculate and plot each contour line (default = 200)
:param plot_size: tuple denoting the size of the plot (default = (7,7))
:param contours_color: color of contour line (default = blue)
String or array. If array, lines will be plotted with the specific color in ascending order.
:param critical_contour_color: color of threshold line (default = red)
:param label_fontsize: fontsize for labelling contours (default = 9)
:param contour_linewidths: linewidths for contours (default = 0.75)
:param contour_linestyles: linestyles for contours (default = "solid")
See : https://matplotlib.org/3.5.0/gallery/lines_bars_and_markers/linestyles.html for more examples
:param contours_label_color: color of contour line label (default = black)
:param critical_label_color: color of threshold line label (default = red)
:param unadjusted_estimate_marker: marker type for unadjusted estimate in the plot (default = 'D')
See: https://matplotlib.org/stable/api/markers_api.html
        :param unadjusted_estimate_color: marker color for unadjusted estimate in the plot (default = "black")
        :param adjusted_estimate_marker: marker type for bias adjusted estimates in the plot (default = '^')
        :param adjusted_estimate_color: marker color for bias adjusted estimates in the plot (default = "red")
        :param legend_position: tuple denoting the position of the legend (default = (1.6, 0.6))
"""
# Plotting the contour plot
if plot_type == "estimate":
critical_value = 0 # default value of estimate
else:
critical_value = 2 # default t-value (usual approx for 95% CI)
fig, ax = plt.subplots(1, 1, figsize=plot_size)
ax.set_title("Sensitivity contour plot of %s" % plot_type)
ax.set_xlabel("Partial R^2 of confounder with treatment")
ax.set_ylabel("Partial R^2 of confounder with outcome")
for i in range(len(self.r2tu_w)):
x = self.r2tu_w[i]
y = self.r2yu_tw[i]
if x > 0.8 or y > 0.8:
x_limit = 0.99
y_limit = 0.99
break
r2tu_w = np.arange(0.0, x_limit, x_limit / num_points_per_contour)
r2yu_tw = np.arange(0.0, y_limit, y_limit / num_points_per_contour)
unadjusted_point_estimate = None
if plot_type == "estimate":
contour_values, critical_value, bound_values = self.plot_estimate(r2tu_w, r2yu_tw)
unadjusted_estimate = self.estimate
unadjusted_point_estimate = unadjusted_estimate
elif plot_type == "t-value":
contour_values, critical_value, bound_values = self.plot_t(r2tu_w, r2yu_tw)
unadjusted_t = self.t_stats
unadjusted_point_estimate = unadjusted_t
else:
raise ValueError("Current plotting method only supports 'estimate' and 't-value' ")
# Adding contours
contour_plot = ax.contour(
r2tu_w,
r2yu_tw,
contour_values,
colors=contours_color,
linewidths=contour_linewidths,
linestyles=contour_linestyles,
)
ax.clabel(contour_plot, inline=1, fontsize=label_fontsize, colors=contours_label_color)
# Adding threshold contour line
contour_plot = ax.contour(
r2tu_w,
r2yu_tw,
contour_values,
colors=critical_contour_color,
linewidths=contour_linewidths,
levels=[critical_value],
)
ax.clabel(contour_plot, [critical_value], inline=1, fontsize=label_fontsize, colors=critical_label_color)
# Adding unadjusted point estimate
ax.scatter(
[0],
[0],
marker=unadjusted_estimate_marker,
color=unadjusted_estimate_color,
label="Unadjusted({:1.2f})".format(unadjusted_point_estimate),
)
# Adding bounds to partial R^2 values for given strength of confounders
for i in range(len(self.frac_strength_treatment)):
frac_strength_treatment = self.frac_strength_treatment[i]
frac_strength_outcome = self.frac_strength_outcome[i]
if frac_strength_treatment == frac_strength_outcome:
signs = str(round(frac_strength_treatment, 2))
else:
signs = str(round(frac_strength_treatment, 2)) + "/" + str(round(frac_strength_outcome, 2))
label = (
str(i + 1)
+ " "
+ signs
+ " X "
+ str(self.original_benchmark_covariates)
+ " ({:1.2f}) ".format(bound_values[i])
)
ax.scatter(
self.r2tu_w[i],
self.r2yu_tw[i],
color=adjusted_estimate_color,
marker=adjusted_estimate_marker,
label=label,
)
ax.annotate(str(i + 1), (self.r2tu_w[i] + 0.005, self.r2yu_tw[i] + 0.005))
ax.legend(bbox_to_anchor=legend_position)
plt.show()
def __str__(self):
        s = "Sensitivity Analysis to Unobserved Confounding using R^2 parameterization\n\n"
s += "Unadjusted Estimates of Treatment {0} :\n".format(self.original_treatment_name)
s += "Coefficient Estimate : {0}\n".format(self.estimate)
s += "Degree of Freedom : {0}\n".format(self.degree_of_freedom)
s += "Standard Error : {0}\n".format(self.standard_error)
s += "t-value : {0}\n".format(self.t_stats)
s += "F^2 value : {0}\n\n".format(self.partial_f2)
s += "Sensitivity Statistics : \n"
s += "Partial R2 of treatment with outcome : {0}\n".format(self.stats["r2yt_w"])
s += "Robustness Value : {0}\n\n".format(self.stats["robustness_value"])
s += "Interpretation of results :\n"
        s += "Any confounder explaining less than {0}% of the residual variance of both the treatment and the outcome would not be strong enough to explain away the observed effect, i.e., bring down the estimate to 0 \n\n".format(
round(self.stats["robustness_value"] * 100, 2)
)
        s += "For a significance level of {0}%, any confounder explaining more than {1}% of the residual variance of both the treatment and the outcome would be strong enough to make the estimated effect not 'statistically significant'\n\n".format(
self.significance_level * 100, round(self.stats["robustness_value_alpha"] * 100, 2)
)
s += "If confounders explained 100% of the residual variance of the outcome, they would need to explain at least {0}% of the residual variance of the treatment to bring down the estimated effect to 0\n".format(
round(self.stats["r2yt_w"] * 100, 2)
)
return s
| import logging
import sys
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import t
from dowhy.utils.api import parse_state
class LinearSensitivityAnalyzer:
"""
Class to perform sensitivity analysis
See: https://carloscinelli.com/files/Cinelli%20and%20Hazlett%20(2020)%20-%20Making%20Sense%20of%20Sensitivity.pdf
:param estimator: linear estimator of the causal model
:param data: Pandas dataframe
:param treatment_name: name of treatment
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1)
if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome in order to reduce the estimate by 100% i.e bring it down to 0.
:param null_hypothesis_effect: assumed effect under the null hypothesis
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = True)
:param benchmark_common_causes: names of variables for bounding strength of confounders
    :param significance_level: significance level for statistical inference (default = 0.05)
:param frac_strength_treatment: strength of association between unobserved confounder and treatment compared to benchmark covariate
:param frac_strength_outcome: strength of association between unobserved confounder and outcome compared to benchmark covariate
:param common_causes_order: The order of column names in OLS regression data
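    Example usage (illustrative; this class is typically constructed by dowhy's
    add_unobserved_common_cause refuter, and the variable names below are hypothetical):
        analyzer = LinearSensitivityAnalyzer(
            estimator=linear_estimator, data=df, treatment_name=["v0"],
            benchmark_common_causes=["W1"], frac_strength_treatment=[1, 2, 3],
            frac_strength_outcome=[1, 2, 3], common_causes_order=common_causes_df.columns,
        )
        analyzer.check_sensitivity(plot=True)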
"""
def __init__(
self,
estimator=None,
data=None,
treatment_name=None,
percent_change_estimate=1.0,
significance_level=0.05,
confounder_increases_estimate=True,
benchmark_common_causes=None,
null_hypothesis_effect=0,
frac_strength_treatment=None,
frac_strength_outcome=None,
common_causes_order=None,
):
self.data = data
self.treatment_name = []
# original_treatment_name: : stores original variable names for labelling
self.original_treatment_name = treatment_name
for t in range(len(treatment_name)):
self.treatment_name.append("x" + str(t + 1))
self.percent_change_estimate = percent_change_estimate
self.significance_level = significance_level
self.confounder_increases_estimate = confounder_increases_estimate
self.estimator = estimator
self.estimator_model = estimator.model
self.null_hypothesis_effect = null_hypothesis_effect
# common_causes_map : maps the original variable names to variable names in OLS regression
self.common_causes_map = {}
for i in range(len(common_causes_order)):
self.common_causes_map[common_causes_order[i]] = "x" + str(len(self.treatment_name) + i + 1)
# benchmark_common_causes: stores variable names in terms of regression model variables
benchmark_common_causes = parse_state(benchmark_common_causes)
self.benchmark_common_causes = []
# original_benchmark_covariates: stores original variable names for labelling
self.original_benchmark_covariates = benchmark_common_causes
for i in range(len(benchmark_common_causes)):
self.benchmark_common_causes.append(self.common_causes_map[benchmark_common_causes[i]])
if type(frac_strength_treatment) in [int, list, float]:
self.frac_strength_treatment = np.array(frac_strength_treatment)
if type(frac_strength_outcome) in [int, list, float]:
self.frac_strength_outcome = np.array(frac_strength_outcome)
# estimate: estimate of regression
self.estimate = None
# degree_of_freedom: degree of freedom of error in regression
self.degree_of_freedom = None
# standard_error: standard error in regression
self.standard_error = None
# t_stats: Treatment coefficient t-value - measures how many standard errors the estimate is away from zero.
self.t_stats = None
# partial_f2: value to determine if a regression model and a nested version of it have a statistically significant difference between them
self.partial_f2 = None
# r2tu_w: partial R^2 of unobserved confounder "u" with treatment "t", after conditioning on observed covariates "w"
self.r2tu_w = None
# r2yu_tw: partial R^2 of unobserved confounder "u" with outcome "y", after conditioning on observed covariates "w" and treatment "t"
self.r2yu_tw = None
# r2twj_w: partial R^2 of observed covariate wj with treatment "t", after conditioning on observed covariates "w" excluding wj
self.r2twj_w = None
# r2ywj_tw: partial R^2 of observed covariate wj with outcome "y", after conditioning on observed covariates "w" (excluding wj) and treatment "t"
self.r2ywj_tw = None
# benchmarking_results: dataframe containing information about bounds and bias adjusted terms
self.benchmarking_results = None
# stats: dictionary containing information like robustness value, partial R^2, estimate, standard error , degree of freedom, partial f^2, t-statistic
self.stats = None
self.logger = logging.getLogger(__name__)
def treatment_regression(self):
"""
Function to perform regression with treatment as outcome
:returns: new OLS regression model
"""
features = self.estimator._observed_common_causes.copy()
treatment_df = self.estimator._treatment.copy()
features = sm.tools.add_constant(features)
features.rename(columns=self.common_causes_map, inplace=True)
model = sm.OLS(treatment_df, features)
estimator_model = model.fit()
return estimator_model
def partial_r2_func(self, estimator_model=None, treatment=None):
"""
Computes the partial R^2 of regression model
:param estimator_model: Linear regression model
:param treatment: treatment name
:returns: partial R^2 value
"""
estimate = estimator_model.params[treatment]
degree_of_freedom = int(estimator_model.df_resid)
if np.isscalar(estimate): # for single covariate
t_stats = estimator_model.tvalues[treatment]
return t_stats**2 / (t_stats**2 + degree_of_freedom)
else: # compute for a group of covariates
covariance_matrix = estimator_model.cov_params().loc[treatment, :][treatment]
n = len(estimate) # number of parameters in model
f_stat = (
np.matmul(np.matmul(estimate.values.T, np.linalg.inv(covariance_matrix.values)), estimate.values) / n
)
return f_stat * n / (f_stat * n + degree_of_freedom)
def robustness_value_func(self, alpha=1.0):
"""
Function to calculate the robustness value.
It is the minimum strength of association that confounders must have with treatment and outcome to change conclusions.
Robustness value describes how strong the association must be in order to reduce the estimated effect by (100 * percent_change_estimate)%.
Robustness value close to 1 means the treatment effect can handle strong confounders explaining almost all residual variation of the treatment and the outcome.
Robustness value close to 0 means that even very weak confounders can also change the results.
        :param alpha: significance level (default = 1)
:returns: robustness value
"""
partial_cohen_f = abs(
self.t_stats / np.sqrt(self.degree_of_freedom)
) # partial f of treatment t with outcome y. f = t_val/sqrt(dof)
f_q = self.percent_change_estimate * partial_cohen_f
t_alpha_df_1 = t.ppf(
alpha / 2, self.degree_of_freedom - 1
) # t-value threshold with alpha significance level and dof-1 degrees of freedom
f_critical = abs(t_alpha_df_1) / np.sqrt(self.degree_of_freedom - 1)
f_adjusted = f_q - f_critical
if f_adjusted < 0:
r_value = 0
else:
r_value = 0.5 * (np.sqrt(f_adjusted**4 + (4 * f_adjusted**2)) - f_adjusted**2)
if f_adjusted > 0 and f_q > 1 / f_critical:
r_value = (f_q**2 - f_critical**2) / (1 + f_q**2)
return r_value
def compute_bias_adjusted(self, r2tu_w, r2yu_tw):
"""
Computes the bias adjusted estimate, standard error, t-value, partial R2, confidence intervals
:param r2tu_w: partial r^2 from regressing unobserved confounder u on treatment t after conditioning on observed covariates w
:param r2yu_tw: partial r^2 from regressing unobserved confounder u on outcome y after conditioning on observed covariates w and treatment t
:returns: Python dictionary with information about partial R^2 of confounders with treatment and outcome and bias adjusted variables
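        The adjustment implements the omitted-variable bias bound of Cinelli & Hazlett (2020):
        |bias| = se(estimate) * sqrt(dof) * sqrt(r2yu_tw * r2tu_w / (1 - r2tu_w))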
"""
bias_factor = np.sqrt((r2yu_tw * r2tu_w) / (1 - r2tu_w))
bias = bias_factor * (self.standard_error * np.sqrt(self.degree_of_freedom))
if self.confounder_increases_estimate:
bias_adjusted_estimate = np.sign(self.estimate) * (abs(self.estimate) - bias)
else:
bias_adjusted_estimate = np.sign(self.estimate) * (abs(self.estimate) + bias)
bias_adjusted_se = (
np.sqrt((1 - r2yu_tw) / (1 - r2tu_w))
* self.standard_error
* np.sqrt(self.degree_of_freedom / (self.degree_of_freedom - 1))
)
bias_adjusted_t = (bias_adjusted_estimate - self.null_hypothesis_effect) / bias_adjusted_se
bias_adjusted_partial_r2 = bias_adjusted_t**2 / (
bias_adjusted_t**2 + (self.degree_of_freedom - 1)
) # partial r2 formula used with new t value and dof - 1
num_se = t.ppf(
self.significance_level / 2, self.degree_of_freedom
) # Number of standard errors within Confidence Interval
bias_adjusted_upper_CI = bias_adjusted_estimate - num_se * bias_adjusted_se
bias_adjusted_lower_CI = bias_adjusted_estimate + num_se * bias_adjusted_se
benchmarking_results = {
"r2tu_w": r2tu_w,
"r2yu_tw": r2yu_tw,
"bias_adjusted_estimate": bias_adjusted_estimate,
"bias_adjusted_se": bias_adjusted_se,
"bias_adjusted_t": bias_adjusted_t,
"bias_adjusted_lower_CI": bias_adjusted_lower_CI,
"bias_adjusted_upper_CI": bias_adjusted_upper_CI,
}
return benchmarking_results
def check_sensitivity(self, plot=True):
"""
Function to perform sensitivity analysis.
:param plot: plot = True generates a plot of point estimate and the variations with respect to unobserved confounding.
plot = False overrides the setting
:returns: instance of LinearSensitivityAnalyzer class
"""
self.standard_error = np.array(self.estimator_model.bse[1 : (len(self.treatment_name) + 1)])[0]
self.degree_of_freedom = int(self.estimator_model.df_resid)
self.estimate = np.array(self.estimator_model.params[1 : (len(self.treatment_name) + 1)])[0]
self.t_stats = np.array(self.estimator_model.tvalues[self.treatment_name])[0]
# partial R^2 (r2yt_w) is the proportion of variation in outcome uniquely explained by treatment
partial_r2 = self.partial_r2_func(self.estimator_model, self.treatment_name)
RVq = self.robustness_value_func()
RV_qalpha = self.robustness_value_func(alpha=self.significance_level)
if self.confounder_increases_estimate:
self.null_hypothesis_effect = self.estimate * (1 - self.percent_change_estimate)
else:
self.null_hypothesis_effect = self.estimate * (1 + self.percent_change_estimate)
self.t_stats = (self.estimate - self.null_hypothesis_effect) / self.standard_error
self.partial_f2 = self.t_stats**2 / self.degree_of_freedom
# build a new regression model by considering treatment variables as outcome
treatment_linear_model = self.treatment_regression()
# r2twj_w is partial R^2 of covariate wj with treatment "t", after conditioning on covariates w(excluding wj)
# r2ywj_tw is partial R^2 of covariate wj with outcome "y", after conditioning on covariates w(excluding wj) and treatment "t"
self.r2twj_w = []
self.r2ywj_tw = []
for covariate in self.benchmark_common_causes:
self.r2ywj_tw.append(self.partial_r2_func(self.estimator_model, covariate))
self.r2twj_w.append(self.partial_r2_func(treatment_linear_model, covariate))
for i in range(len(self.benchmark_common_causes)):
r2twj_w = self.r2twj_w[i]
r2ywj_tw = self.r2ywj_tw[i]
# r2tu_w is the partial r^2 from regressing u on t after conditioning on w
self.r2tu_w = self.frac_strength_treatment * (r2twj_w / (1 - r2twj_w))
if any(val >= 1 for val in self.r2tu_w):
raise ValueError("r2tu_w can not be >= 1. Try a lower frac_strength_treatment value")
r2uwj_wt = (
self.frac_strength_treatment
* (r2twj_w**2)
/ ((1 - self.frac_strength_treatment * r2twj_w) * (1 - r2twj_w))
)
if any(val >= 1 for val in r2uwj_wt):
raise ValueError("r2uwj_wt can not be >= 1. Try a lower frac_strength_treatment value")
self.r2yu_tw = ((np.sqrt(self.frac_strength_outcome) + np.sqrt(r2uwj_wt)) / np.sqrt(1 - r2uwj_wt)) ** 2 * (
r2ywj_tw / (1 - r2ywj_tw)
)
if any(val > 1 for val in self.r2yu_tw):
for i in range(len(self.r2yu_tw)):
if self.r2yu_tw[i] > 1:
self.r2yu_tw[i] = 1
self.logger.warning(
"Warning: r2yu_tw can not be > 1. Try a lower frac_strength_treatment. Setting r2yu_tw to 1"
)
# Compute bias adjusted terms
self.benchmarking_results = self.compute_bias_adjusted(self.r2tu_w, self.r2yu_tw)
if plot == True:
self.plot()
self.stats = {
"estimate": self.estimate,
"standard_error": self.standard_error,
"degree of freedom": self.degree_of_freedom,
"t_statistic": self.t_stats,
"r2yt_w": partial_r2,
"partial_f2": self.partial_f2,
"robustness_value": RVq,
"robustness_value_alpha": RV_qalpha,
}
self.benchmarking_results = pd.DataFrame.from_dict(self.benchmarking_results)
return self
def plot_estimate(self, r2tu_w, r2yu_tw):
"""
Computes the contours, threshold line and bounds for plotting estimates.
Contour lines (z - axis) correspond to the adjusted estimate values for different values of r2tu_w (x) and r2yu_tw (y).
:param r2tu_w: hypothetical partial R^2 of confounder with treatment(x - axis)
:param r2yu_tw: hypothetical partial R^2 of confounder with outcome(y - axis)
:returns:
contour_values : values of contour lines for the plot
critical_estimate : threshold point
estimate_bounds : estimate values for unobserved confounders (bias adjusted estimates)
"""
critical_estimate = self.null_hypothesis_effect
contour_values = np.zeros((len(r2yu_tw), len(r2tu_w)))
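        # Rows of contour_values index r2yu_tw (y-axis) and columns index r2tu_w (x-axis),
        # matching the (len(y), len(x)) shape expected by matplotlib's ax.contour(x, y, Z).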
for i in range(len(r2yu_tw)):
y = r2yu_tw[i]
for j in range(len(r2tu_w)):
x = r2tu_w[j]
benchmarking_results = self.compute_bias_adjusted(r2tu_w=x, r2yu_tw=y)
estimate = benchmarking_results["bias_adjusted_estimate"]
contour_values[i][j] = estimate
estimate_bounds = self.benchmarking_results["bias_adjusted_estimate"]
return contour_values, critical_estimate, estimate_bounds
def plot_t(self, r2tu_w, r2yu_tw):
"""
Computes the contours, threshold line and bounds for plotting t.
Contour lines (z - axis) correspond to the adjusted t values for different values of r2tu_w (x) and r2yu_tw (y).
:param r2tu_w: hypothetical partial R^2 of confounder with treatment(x - axis)
:param r2yu_tw: hypothetical partial R^2 of confounder with outcome(y - axis)
:returns:
contour_values : values of contour lines for the plot
critical_t : threshold point
t_bounds : t-value for unobserved confounders (bias adjusted t values)
"""
t_alpha_df_1 = t.ppf(
self.significance_level / 2, self.degree_of_freedom - 1
) # t-value threshold with alpha significance level and dof-1 degrees of freedom
critical_t = abs(t_alpha_df_1) * np.sign(self.t_stats)
contour_values = []
for x in r2tu_w:
contour = []
for y in r2yu_tw:
benchmarking_results = self.compute_bias_adjusted(r2tu_w=x, r2yu_tw=y)
t_value = benchmarking_results["bias_adjusted_t"]
contour.append(t_value)
contour_values.append(contour)
t_bounds = self.benchmarking_results["bias_adjusted_t"]
return contour_values, critical_t, t_bounds
def plot(
self,
plot_type="estimate",
critical_value=None,
x_limit=0.8,
y_limit=0.8,
num_points_per_contour=200,
plot_size=(7, 7),
contours_color="blue",
critical_contour_color="red",
label_fontsize=9,
contour_linewidths=0.75,
contour_linestyles="solid",
contours_label_color="black",
critical_label_color="red",
unadjusted_estimate_marker="D",
unadjusted_estimate_color="black",
adjusted_estimate_marker="^",
adjusted_estimate_color="red",
legend_position=(1.6, 0.6),
):
"""
Plots and summarizes the sensitivity bounds as a contour plot, as they vary with the partial R^2 of the unobserved confounder(s) with the treatment and the outcome
Two types of plots can be generated, based on adjusted estimates or adjusted t-values
X-axis: Partial R^2 of treatment and unobserved confounder(s)
Y-axis: Partial R^2 of outcome and unobserved confounder(s)
We also plot bounds on the partial R^2 of the unobserved confounders obtained from observed covariates
:param plot_type: "estimate" or "t-value"
:param critical_value: special reference value of the estimate or t-value that will be highlighted in the plot
:param x_limit: plot's maximum x_axis value (default = 0.8)
        :param y_limit: plot's maximum y_axis value (default = 0.8)
:param num_points_per_contour: number of points to calculate and plot each contour line (default = 200)
:param plot_size: tuple denoting the size of the plot (default = (7,7))
:param contours_color: color of contour line (default = blue)
String or array. If array, lines will be plotted with the specific color in ascending order.
:param critical_contour_color: color of threshold line (default = red)
:param label_fontsize: fontsize for labelling contours (default = 9)
:param contour_linewidths: linewidths for contours (default = 0.75)
:param contour_linestyles: linestyles for contours (default = "solid")
See : https://matplotlib.org/3.5.0/gallery/lines_bars_and_markers/linestyles.html for more examples
:param contours_label_color: color of contour line label (default = black)
:param critical_label_color: color of threshold line label (default = red)
:param unadjusted_estimate_marker: marker type for unadjusted estimate in the plot (default = 'D')
See: https://matplotlib.org/stable/api/markers_api.html
        :param unadjusted_estimate_color: marker color for unadjusted estimate in the plot (default = "black")
        :param adjusted_estimate_marker: marker type for bias adjusted estimates in the plot (default = '^')
        :param adjusted_estimate_color: marker color for bias adjusted estimates in the plot (default = "red")
        :param legend_position: tuple denoting the position of the legend (default = (1.6, 0.6))
"""
# Plotting the contour plot
if plot_type == "estimate":
critical_value = 0 # default value of estimate
else:
critical_value = 2 # default t-value (usual approx for 95% CI)
fig, ax = plt.subplots(1, 1, figsize=plot_size)
ax.set_title("Sensitivity contour plot of %s" % plot_type)
ax.set_xlabel("Partial R^2 of confounder with treatment")
ax.set_ylabel("Partial R^2 of confounder with outcome")
for i in range(len(self.r2tu_w)):
x = self.r2tu_w[i]
y = self.r2yu_tw[i]
if x > 0.8 or y > 0.8:
x_limit = 0.99
y_limit = 0.99
break
r2tu_w = np.arange(0.0, x_limit, x_limit / num_points_per_contour)
r2yu_tw = np.arange(0.0, y_limit, y_limit / num_points_per_contour)
unadjusted_point_estimate = None
if plot_type == "estimate":
contour_values, critical_value, bound_values = self.plot_estimate(r2tu_w, r2yu_tw)
unadjusted_estimate = self.estimate
unadjusted_point_estimate = unadjusted_estimate
elif plot_type == "t-value":
contour_values, critical_value, bound_values = self.plot_t(r2tu_w, r2yu_tw)
unadjusted_t = self.t_stats
unadjusted_point_estimate = unadjusted_t
else:
raise ValueError("Current plotting method only supports 'estimate' and 't-value' ")
# Adding contours
contour_plot = ax.contour(
r2tu_w,
r2yu_tw,
contour_values,
colors=contours_color,
linewidths=contour_linewidths,
linestyles=contour_linestyles,
)
ax.clabel(contour_plot, inline=1, fontsize=label_fontsize, colors=contours_label_color)
# Adding threshold contour line
contour_plot = ax.contour(
r2tu_w,
r2yu_tw,
contour_values,
colors=critical_contour_color,
linewidths=contour_linewidths,
levels=[critical_value],
)
ax.clabel(contour_plot, [critical_value], inline=1, fontsize=label_fontsize, colors=critical_label_color)
# Adding unadjusted point estimate
ax.scatter(
[0],
[0],
marker=unadjusted_estimate_marker,
color=unadjusted_estimate_color,
label="Unadjusted({:1.2f})".format(unadjusted_point_estimate),
)
# Adding bounds to partial R^2 values for given strength of confounders
for i in range(len(self.frac_strength_treatment)):
frac_strength_treatment = self.frac_strength_treatment[i]
frac_strength_outcome = self.frac_strength_outcome[i]
if frac_strength_treatment == frac_strength_outcome:
signs = str(round(frac_strength_treatment, 2))
else:
signs = str(round(frac_strength_treatment, 2)) + "/" + str(round(frac_strength_outcome, 2))
label = (
str(i + 1)
+ " "
+ signs
+ " X "
+ str(self.original_benchmark_covariates)
+ " ({:1.2f}) ".format(bound_values[i])
)
ax.scatter(
self.r2tu_w[i],
self.r2yu_tw[i],
color=adjusted_estimate_color,
marker=adjusted_estimate_marker,
label=label,
)
ax.annotate(str(i + 1), (self.r2tu_w[i] + 0.005, self.r2yu_tw[i] + 0.005))
ax.legend(bbox_to_anchor=legend_position)
plt.show()
def __str__(self):
        s = "Sensitivity Analysis to Unobserved Confounding using R^2 parameterization\n\n"
s += "Unadjusted Estimates of Treatment {0} :\n".format(self.original_treatment_name)
s += "Coefficient Estimate : {0}\n".format(self.estimate)
s += "Degree of Freedom : {0}\n".format(self.degree_of_freedom)
s += "Standard Error : {0}\n".format(self.standard_error)
s += "t-value : {0}\n".format(self.t_stats)
s += "F^2 value : {0}\n\n".format(self.partial_f2)
s += "Sensitivity Statistics : \n"
s += "Partial R2 of treatment with outcome : {0}\n".format(self.stats["r2yt_w"])
s += "Robustness Value : {0}\n\n".format(self.stats["robustness_value"])
s += "Interpretation of results :\n"
        s += "Any confounder explaining less than {0}% of the residual variance of both the treatment and the outcome would not be strong enough to explain away the observed effect, i.e., bring down the estimate to 0 \n\n".format(
round(self.stats["robustness_value"] * 100, 2)
)
        s += "For a significance level of {0}%, any confounder explaining more than {1}% of the residual variance of both the treatment and the outcome would be strong enough to make the estimated effect not 'statistically significant'\n\n".format(
self.significance_level * 100, round(self.stats["robustness_value_alpha"] * 100, 2)
)
s += "If confounders explained 100% of the residual variance of the outcome, they would need to explain at least {0}% of the residual variance of the treatment to bring down the estimated effect to 0\n".format(
round(self.stats["r2yt_w"] * 100, 2)
)
return s
| anusha0409 | 81841c697bd5e80ecf9e731432305f6186666f1f | bb446c333f2256074304b0dec9cb5628d284b542 | Why were r2tu_w and r2yu_tw swapped in the inner/outer loops here? It looks like this was a bug in the prior code? | emrekiciman | 383 |